mysql-utilities-1.6.4/PKG-INFO

Metadata-Version: 1.1
Name: mysql-utilities
Version: 1.6.4
Summary: MySQL Utilities
Home-page: http://dev.mysql.com
Author: Oracle
Author-email: UNKNOWN
License: GNU GPLv2 (with FOSS License Exception)
Description: UNKNOWN
Keywords: mysql db
Platform: UNKNOWN
Classifier: Development Status :: 3 - Alpha
Classifier: Programming Language :: Python :: 2.6
Classifier: Programming Language :: Python :: 2.7
Classifier: Environment :: Console
Classifier: Environment :: Win32 (MS Windows)
Classifier: License :: OSI Approved :: GNU General Public License (GPL)
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: System Administrators
Classifier: Intended Audience :: Database Administrators
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: OS Independent
Classifier: Operating System :: POSIX
Classifier: Topic :: Utilities
Requires: distutils
Provides: mysql.utilities

mysql-utilities-1.6.4/README.txt

MySQL Utilities 1.6

This is a release of MySQL Utilities, the dual-license, complete database modeling, administration and development program for MySQL. For the avoidance of doubt, this particular copy of the software is released under version 2 of the GNU General Public License. MySQL Utilities is brought to you by the MySQL team at Oracle.

Copyright (c) 2010, 2016, Oracle and/or its affiliates. All rights reserved.

For more information on MySQL Utilities, visit
  http://www.mysql.com/products/enterprise/utilities.html

For more downloads and the source of MySQL Utilities, visit
  http://dev.mysql.com/downloads/utilities

License information can be found in the LICENSE.txt file.

This distribution may include materials developed by third parties. For license and attribution notices for these materials, please refer to the documentation that accompanies this distribution. A copy of the license/notices is also reproduced below.

GPLv2 Disclaimer

For the avoidance of doubt, except that if any license choice other than GPL or LGPL is available it will apply instead, Oracle elects to use only the General Public License version 2 (GPLv2) at this time for any software where a choice of GPL license versions is made available with the language indicating that GPLv2 or any later version may be used, or where a choice of which version of the GPL is applied is otherwise unspecified.

********************************************************************
Third-Party Component Notices
********************************************************************

%%The following software may be included in this product: Python. Use of any of this software is governed by the terms of the license below:

Python 2.7 license

This is the official license for the Python 2.7 release:

A. HISTORY OF THE SOFTWARE
==========================

Python was created in the early 1990s by Guido van Rossum at Stichting Mathematisch Centrum (CWI, see http://www.cwi.nl) in the Netherlands as a successor of a language called ABC. Guido remains Python's principal author, although it includes many contributions from others.

In 1995, Guido continued his work on Python at the Corporation for National Research Initiatives (CNRI, see http://www.cnri.reston.va.us) in Reston, Virginia where he released several versions of the software.
In May 2000, Guido and the Python core development team moved to BeOpen.com to form the BeOpen PythonLabs team. In October of the same year, the PythonLabs team moved to Digital Creations (now Zope Corporation, see http://www.zope.com). In 2001, the Python Software Foundation (PSF, see http://www.python.org/psf/) was formed, a non-profit organization created specifically to own Python-related Intellectual Property. Zope Corporation is a sponsoring member of the PSF.

All Python releases are Open Source (see http://www.opensource.org for the Open Source Definition). Historically, most, but not all, Python releases have also been GPL-compatible; the table below summarizes the various releases.

    Release         Derived     Year        Owner       GPL-
                    from                                compatible? (1)

    0.9.0 thru 1.2              1991-1995   CWI         yes
    1.3 thru 1.5.2  1.2         1995-1999   CNRI        yes
    1.6             1.5.2       2000        CNRI        no
    2.0             1.6         2000        BeOpen.com  no
    1.6.1           1.6         2001        CNRI        yes (2)
    2.1             2.0+1.6.1   2001        PSF         no
    2.0.1           2.0+1.6.1   2001        PSF         yes
    2.1.1           2.1+2.0.1   2001        PSF         yes
    2.2             2.1.1       2001        PSF         yes
    2.1.2           2.1.1       2002        PSF         yes
    2.1.3           2.1.2       2002        PSF         yes
    2.2.1           2.2         2002        PSF         yes
    2.2.2           2.2.1       2002        PSF         yes
    2.2.3           2.2.2       2003        PSF         yes
    2.3             2.2.2       2002-2003   PSF         yes
    2.3.1           2.3         2002-2003   PSF         yes
    2.3.2           2.3.1       2002-2003   PSF         yes
    2.3.3           2.3.2       2002-2003   PSF         yes
    2.3.4           2.3.3       2004        PSF         yes
    2.3.5           2.3.4       2005        PSF         yes
    2.4             2.3         2004        PSF         yes
    2.4.1           2.4         2005        PSF         yes
    2.4.2           2.4.1       2005        PSF         yes
    2.4.3           2.4.2       2006        PSF         yes
    2.4.4           2.4.3       2006        PSF         yes
    2.5             2.4         2006        PSF         yes
    2.5.1           2.5         2007        PSF         yes
    2.5.2           2.5.1       2008        PSF         yes
    2.5.3           2.5.2       2008        PSF         yes
    2.6             2.5         2008        PSF         yes
    2.6.1           2.6         2008        PSF         yes
    2.6.2           2.6.1       2009        PSF         yes
    2.6.3           2.6.2       2009        PSF         yes
    2.6.4           2.6.3       2009        PSF         yes
    2.6.5           2.6.4       2010        PSF         yes
    2.7             2.6         2010        PSF         yes

Footnotes:

(1) GPL-compatible doesn't mean that we're distributing Python under the GPL. All Python licenses, unlike the GPL, let you distribute a modified version without making your changes open source. The GPL-compatible licenses make it possible to combine Python with other software that is released under the GPL; the others don't.

(2) According to Richard Stallman, 1.6.1 is not GPL-compatible, because its license has a choice of law clause. According to CNRI, however, Stallman's lawyer has told CNRI's lawyer that 1.6.1 is "not incompatible" with the GPL.

Thanks to the many outside volunteers who have worked under Guido's direction to make these releases possible.

B. TERMS AND CONDITIONS FOR ACCESSING OR OTHERWISE USING PYTHON
===============================================================

PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2
--------------------------------------------

1. This LICENSE AGREEMENT is between the Python Software Foundation ("PSF"), and the Individual or Organization ("Licensee") accessing and otherwise using this software ("Python") in source or binary form and its associated documentation.

2. Subject to the terms and conditions of this License Agreement, PSF hereby grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, analyze, test, perform and/or display publicly, prepare derivative works, distribute, and otherwise use Python alone or in any derivative version, provided, however, that PSF's License Agreement and PSF's notice of copyright, i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012 Python Software Foundation; All Rights Reserved" are retained in Python alone or in any derivative version prepared by Licensee.
3. In the event Licensee prepares a derivative work that is based on or incorporates Python or any part thereof, and wants to make the derivative work available to others as provided herein, then Licensee hereby agrees to include in any such work a brief summary of the changes made to Python.

4. PSF is making Python available to Licensee on an "AS IS" basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.

5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON, OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.

6. This License Agreement will automatically terminate upon a material breach of its terms and conditions.

7. Nothing in this License Agreement shall be deemed to create any relationship of agency, partnership, or joint venture between PSF and Licensee. This License Agreement does not grant permission to use PSF trademarks or trade name in a trademark sense to endorse or promote products or services of Licensee, or any third party.

8. By copying, installing or otherwise using Python, Licensee agrees to be bound by the terms and conditions of this License Agreement.

BEOPEN.COM LICENSE AGREEMENT FOR PYTHON 2.0
-------------------------------------------

BEOPEN PYTHON OPEN SOURCE LICENSE AGREEMENT VERSION 1

1. This LICENSE AGREEMENT is between BeOpen.com ("BeOpen"), having an office at 160 Saratoga Avenue, Santa Clara, CA 95051, and the Individual or Organization ("Licensee") accessing and otherwise using this software in source or binary form and its associated documentation ("the Software").

2. Subject to the terms and conditions of this BeOpen Python License Agreement, BeOpen hereby grants Licensee a non-exclusive, royalty-free, world-wide license to reproduce, analyze, test, perform and/or display publicly, prepare derivative works, distribute, and otherwise use the Software alone or in any derivative version, provided, however, that the BeOpen Python License is retained in the Software, alone or in any derivative version prepared by Licensee.

3. BeOpen is making the Software available to Licensee on an "AS IS" basis. BEOPEN MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, BEOPEN MAKES NO AND DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF THE SOFTWARE WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.

4. BEOPEN SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF USING, MODIFYING OR DISTRIBUTING THE SOFTWARE, OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.

5. This License Agreement will automatically terminate upon a material breach of its terms and conditions.

6. This License Agreement shall be governed by and interpreted in all respects by the law of the State of California, excluding conflict of law provisions. Nothing in this License Agreement shall be deemed to create any relationship of agency, partnership, or joint venture between BeOpen and Licensee.
This License Agreement does not grant permission to use BeOpen trademarks or trade names in a trademark sense to endorse or promote products or services of Licensee, or any third party. As an exception, the "BeOpen Python" logos available at http://www.pythonlabs.com/logos.html may be used according to the permissions granted on that web page.

7. By copying, installing or otherwise using the software, Licensee agrees to be bound by the terms and conditions of this License Agreement.

CNRI LICENSE AGREEMENT FOR PYTHON 1.6.1
---------------------------------------

1. This LICENSE AGREEMENT is between the Corporation for National Research Initiatives, having an office at 1895 Preston White Drive, Reston, VA 20191 ("CNRI"), and the Individual or Organization ("Licensee") accessing and otherwise using Python 1.6.1 software in source or binary form and its associated documentation.

2. Subject to the terms and conditions of this License Agreement, CNRI hereby grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, analyze, test, perform and/or display publicly, prepare derivative works, distribute, and otherwise use Python 1.6.1 alone or in any derivative version, provided, however, that CNRI's License Agreement and CNRI's notice of copyright, i.e., "Copyright (c) 1995-2001 Corporation for National Research Initiatives; All Rights Reserved" are retained in Python 1.6.1 alone or in any derivative version prepared by Licensee. Alternately, in lieu of CNRI's License Agreement, Licensee may substitute the following text (omitting the quotes): "Python 1.6.1 is made available subject to the terms and conditions in CNRI's License Agreement. This Agreement together with Python 1.6.1 may be located on the Internet using the following unique, persistent identifier (known as a handle): 1895.22/1013. This Agreement may also be obtained from a proxy server on the Internet using the following URL: http://hdl.handle.net/1895.22/1013".

3. In the event Licensee prepares a derivative work that is based on or incorporates Python 1.6.1 or any part thereof, and wants to make the derivative work available to others as provided herein, then Licensee hereby agrees to include in any such work a brief summary of the changes made to Python 1.6.1.

4. CNRI is making Python 1.6.1 available to Licensee on an "AS IS" basis. CNRI MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, CNRI MAKES NO AND DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 1.6.1 WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.

5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON 1.6.1 FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON 1.6.1, OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.

6. This License Agreement will automatically terminate upon a material breach of its terms and conditions.

7. This License Agreement shall be governed by the federal intellectual property law of the United States, including without limitation the federal copyright law, and, to the extent such U.S. federal law does not apply, by the law of the Commonwealth of Virginia, excluding Virginia's conflict of law provisions.
Notwithstanding the foregoing, with regard to derivative works based on Python 1.6.1 that incorporate non-separable material that was previously distributed under the GNU General Public License (GPL), the law of the Commonwealth of Virginia shall govern this License Agreement only as to issues arising under or with respect to Paragraphs 4, 5, and 7 of this License Agreement. Nothing in this License Agreement shall be deemed to create any relationship of agency, partnership, or joint venture between CNRI and Licensee. This License Agreement does not grant permission to use CNRI trademarks or trade name in a trademark sense to endorse or promote products or services of Licensee, or any third party.

8. By clicking on the "ACCEPT" button where indicated, or by copying, installing or otherwise using Python 1.6.1, Licensee agrees to be bound by the terms and conditions of this License Agreement.

ACCEPT

CWI LICENSE AGREEMENT FOR PYTHON 0.9.0 THROUGH 1.2
--------------------------------------------------

Copyright (c) 1991 - 1995, Stichting Mathematisch Centrum Amsterdam, The Netherlands. All rights reserved.

Permission to use, copy, modify, and distribute this software and its documentation for any purpose and without fee is hereby granted, provided that the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in supporting documentation, and that the name of Stichting Mathematisch Centrum or CWI not be used in advertising or publicity pertaining to distribution of the software without specific, written prior permission.

STICHTING MATHEMATISCH CENTRUM DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO EVENT SHALL STICHTING MATHEMATISCH CENTRUM BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

Additional Conditions for this Windows binary build
---------------------------------------------------

This program is linked with and uses Microsoft Distributable Code, copyrighted by Microsoft Corporation. The Microsoft Distributable Code includes the following files:

  msvcr90.dll
  msvcp90.dll
  msvcm90.dll

If you further distribute programs that include the Microsoft Distributable Code, you must comply with the restrictions on distribution specified by Microsoft. In particular, you must require distributors and external end users to agree to terms that protect the Microsoft Distributable Code at least as much as Microsoft's own requirements for the Distributable Code. See Microsoft's documentation (included in its developer tools and on its website at microsoft.com) for specific details.

Redistribution of the Windows binary build of the Python interpreter complies with this agreement, provided that you do not:

- alter any copyright, trademark or patent notice in Microsoft's Distributable Code;

- use Microsoft's trademarks in your programs' names or in a way that suggests your programs come from or are endorsed by Microsoft;

- distribute Microsoft's Distributable Code to run on a platform other than Microsoft operating systems, run-time technologies or application platforms; or

- include Microsoft Distributable Code in malicious, deceptive or unlawful programs.
These restrictions apply only to the Microsoft Distributable Code as defined above, not to Python itself or any programs running on the Python interpreter. The redistribution of the Python interpreter and libraries is governed by the Python Software License included with this file, or by other licenses as marked.

This copy of Python includes a copy of bzip2, which is licensed under the following terms:

--------------------------------------------------------------------------

This program, "bzip2", the associated library "libbzip2", and all documentation, are copyright (C) 1996-2007 Julian R Seward. All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

2. The origin of this software must not be misrepresented; you must not claim that you wrote the original software. If you use this software in a product, an acknowledgment in the product documentation would be appreciated but is not required.

3. Altered source versions must be plainly marked as such, and must not be misrepresented as being the original software.

4. The name of the author may not be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Julian Seward, jseward@bzip.org
bzip2/libbzip2 version 1.0.5 of 10 December 2007

--------------------------------------------------------------------------

This copy of Python includes a copy of Berkeley DB, which is licensed under the following terms:

/*-
 * $Id: LICENSE,v 12.9 2008/02/07 17:12:17 mark Exp $
 */

The following is the license that applies to this copy of the Berkeley DB software. For a license to use the Berkeley DB software under conditions other than those described here, or to purchase support for this software, please contact Oracle at berkeleydb-info_us@oracle.com.

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

/*
 * Copyright (c) 1990,2008 Oracle.  All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. Redistributions in any form must be accompanied by information on
 *    how to obtain complete source code for the DB software and any
 *    accompanying software that uses the DB software.
 *    The source code must either be included in the distribution or be
 *    available for no more than the cost of distribution plus a nominal
 *    fee, and must be freely redistributable under reasonable conditions.
 *    For an executable file, complete source code means the source code
 *    for all modules it contains. It does not include source code for
 *    modules or files that typically accompany the major components of
 *    the operating system on which the executable file runs.
 *
 * THIS SOFTWARE IS PROVIDED BY ORACLE ``AS IS'' AND ANY EXPRESS OR
 * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
 * WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR
 * NON-INFRINGEMENT, ARE DISCLAIMED. IN NO EVENT SHALL ORACLE BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
 * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
 * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
 * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
 * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
 * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
 * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

/*
 * Copyright (c) 1990, 1993, 1994, 1995
 *      The Regents of the University of California.  All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. Neither the name of the University nor the names of its contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
 * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS
 * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
 * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
 * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
 * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
 * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
 * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
 * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

/*
 * Copyright (c) 1995, 1996
 *      The President and Fellows of Harvard University.  All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. Neither the name of the University nor the names of its contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY HARVARD AND ITS CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
 * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL HARVARD OR ITS CONTRIBUTORS
 * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
 * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
 * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
 * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
 * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
 * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
 * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

/***
 * ASM: a very small and fast Java bytecode manipulation framework
 * Copyright (c) 2000-2005 INRIA, France Telecom
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. Neither the name of the copyright holders nor the names of its
 *    contributors may be used to endorse or promote products derived from
 *    this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

This copy of Python includes a copy of Tcl, which is licensed under the following terms:

This software is copyrighted by the Regents of the University of California, Sun Microsystems, Inc., Scriptics Corporation, ActiveState Corporation and other parties. The following terms apply to all files associated with the software unless explicitly disclaimed in individual files.

The authors hereby grant permission to use, copy, modify, distribute, and license this software and its documentation for any purpose, provided that existing copyright notices are retained in all copies and that this notice is included verbatim in any distributions. No written agreement, license, or royalty fee is required for any of the authorized uses.
Modifications to this software may be copyrighted by their authors and need not follow the licensing terms described here, provided that the new terms are clearly indicated on the first page of each file where they apply.

IN NO EVENT SHALL THE AUTHORS OR DISTRIBUTORS BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OF THIS SOFTWARE, ITS DOCUMENTATION, OR ANY DERIVATIVES THEREOF, EVEN IF THE AUTHORS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

THE AUTHORS AND DISTRIBUTORS SPECIFICALLY DISCLAIM ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT. THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, AND THE AUTHORS AND DISTRIBUTORS HAVE NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.

GOVERNMENT USE: If you are acquiring this software on behalf of the U.S. government, the Government shall have only "Restricted Rights" in the software and related documentation as defined in the Federal Acquisition Regulations (FARs) in Clause 52.227.19 (c) (2). If you are acquiring the software on behalf of the Department of Defense, the software shall be classified as "Commercial Computer Software" and the Government shall have only "Restricted Rights" as defined in Clause 252.227-7013 (c) (1) of DFARs. Notwithstanding the foregoing, the authors grant the U.S. Government and others acting in its behalf permission to use and distribute the software in accordance with the terms specified in this license.

This copy of Python includes a copy of Tk, which is licensed under the following terms:

This software is copyrighted by the Regents of the University of California, Sun Microsystems, Inc., and other parties. The following terms apply to all files associated with the software unless explicitly disclaimed in individual files.

The authors hereby grant permission to use, copy, modify, distribute, and license this software and its documentation for any purpose, provided that existing copyright notices are retained in all copies and that this notice is included verbatim in any distributions. No written agreement, license, or royalty fee is required for any of the authorized uses. Modifications to this software may be copyrighted by their authors and need not follow the licensing terms described here, provided that the new terms are clearly indicated on the first page of each file where they apply.

IN NO EVENT SHALL THE AUTHORS OR DISTRIBUTORS BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OF THIS SOFTWARE, ITS DOCUMENTATION, OR ANY DERIVATIVES THEREOF, EVEN IF THE AUTHORS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

THE AUTHORS AND DISTRIBUTORS SPECIFICALLY DISCLAIM ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT. THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, AND THE AUTHORS AND DISTRIBUTORS HAVE NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.

GOVERNMENT USE: If you are acquiring this software on behalf of the U.S. government, the Government shall have only "Restricted Rights" in the software and related documentation as defined in the Federal Acquisition Regulations (FARs) in Clause 52.227.19 (c) (2).
If you are acquiring the software on behalf of the Department of Defense, the software shall be classified as "Commercial Computer Software" and the Government shall have only "Restricted Rights" as defined in Clause 252.227-7013 (c) (1) of DFARs. Notwithstanding the foregoing, the authors grant the U.S. Government and others acting in its behalf permission to use and distribute the software in accordance with the terms specified in this license.

This copy of Python includes a copy of Tix, which is licensed under the following terms:

Copyright (c) 1993-1999 Ioi Kim Lam.
Copyright (c) 2000-2001 Tix Project Group.
Copyright (c) 2004 ActiveState

This software is copyrighted by the above entities and other parties. The following terms apply to all files associated with the software unless explicitly disclaimed in individual files.

The authors hereby grant permission to use, copy, modify, distribute, and license this software and its documentation for any purpose, provided that existing copyright notices are retained in all copies and that this notice is included verbatim in any distributions. No written agreement, license, or royalty fee is required for any of the authorized uses. Modifications to this software may be copyrighted by their authors and need not follow the licensing terms described here, provided that the new terms are clearly indicated on the first page of each file where they apply.

IN NO EVENT SHALL THE AUTHORS OR DISTRIBUTORS BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OF THIS SOFTWARE, ITS DOCUMENTATION, OR ANY DERIVATIVES THEREOF, EVEN IF THE AUTHORS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

THE AUTHORS AND DISTRIBUTORS SPECIFICALLY DISCLAIM ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT. THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, AND THE AUTHORS AND DISTRIBUTORS HAVE NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.

GOVERNMENT USE: If you are acquiring this software on behalf of the U.S. government, the Government shall have only "Restricted Rights" in the software and related documentation as defined in the Federal Acquisition Regulations (FARs) in Clause 52.227.19 (c) (2). If you are acquiring the software on behalf of the Department of Defense, the software shall be classified as "Commercial Computer Software" and the Government shall have only "Restricted Rights" as defined in Clause 252.227-7013 (c) (1) of DFARs. Notwithstanding the foregoing, the authors grant the U.S. Government and others acting in its behalf permission to use and distribute the software in accordance with the terms specified in this license.

----------------------------------------------------------------------

Parts of this software are based on the Tcl/Tk software copyrighted by the Regents of the University of California, Sun Microsystems, Inc., and other parties. The original license terms of the Tcl/Tk software distribution are included in the file docs/license.tcltk.

Parts of this software are based on the HTML Library software copyrighted by Sun Microsystems, Inc. The original license terms of the HTML Library software distribution are included in the file docs/license.html_lib.
*****************************************************************

mysql-utilities-1.6.4/mysql/utilities/common/pattern_matching.py

#
# Copyright (c) 2012, 2016, Oracle and/or its affiliates. All rights reserved.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; version 2 of the License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
#

"""
This file contains auxiliary functions to handle pattern matching.
"""

import re

# Regular expression to match a database object identifier (support backticks)
REGEXP_OBJ_NAME = r'(`(?:[^`]|``)+`|\w+|\w+[\%\*]?|[\%\*])'

# Regular expression to match a database object identifier with ansi quotes
REGEXP_OBJ_NAME_AQ = r'("(?:[^"]|"")+"|\w+|\*)'

# Regular expression to match a qualified object identifier (with multiple
# parts). Example: db.obj, db or obj
REGEXP_QUALIFIED_OBJ_NAME = r'{0}(?:(?:\.){0})?'.format(REGEXP_OBJ_NAME)

# Same as the above but for use with ansi quotes
REGEXP_QUALIFIED_OBJ_NAME_AQ = r'{0}(?:(?:\.){0})?'.format(REGEXP_OBJ_NAME_AQ)


def convertSQL_LIKE2REGEXP(sql_like_pattern):
    """Convert a standard SQL LIKE pattern to a REGEXP pattern.

    Function that transforms a SQL LIKE pattern to a supported python
    regexp. Returns a python regular expression (i.e. regexp).

    sql_like_pattern[in]    pattern in the SQL LIKE form to be converted.
    """
    # Replace '_' by equivalent regexp, except when preceded by '\'
    # (escape character). The lookbehind below is an editorial
    # reconstruction of the expression truncated in this extract.
    regexp = re.sub(r'(?<!\\)_', '.', sql_like_pattern)
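# Usage sketch (illustrative, not from the original module): convert a SQL
# LIKE pattern and match object names against the result. This assumes the
# implementation, truncated in this extract, also maps unescaped '%' to
# '.*' so that a full LIKE pattern becomes a valid regexp.
#
#   import re
#   regexp = convertSQL_LIKE2REGEXP("db%")
#   if re.match(regexp, "db_sales"):
#       print "pattern matched"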
mysql-utilities-1.6.4/mysql/utilities/common/server.py

                   int(start) and int(port) <= int(end):
                    processes.append((proginfo[4], port))
                    break
            if len(proginfo) > 2:
                port = proginfo[2][proginfo[2].find(":") + 1:]
                if port.isdigit() and \
                   int(port) >= int(start) and int(port) <= int(end):
                    processes.append((proginfo[4], port))
                    break
    f_out.close()
    os.unlink("portlist")
    return processes


def get_server(name, values, quiet, verbose=False):
    """Connect to a server and return Server instance

    If the name is 'master' or 'slave', the connection will be made via
    the Master or Slave class; otherwise, a normal Server class is used.

    name[in]           Name of the server.
    values[in]         Dictionary of connection values.
    quiet[in]          If True, do not print messages.
    verbose[in]        Verbose value used by the returned server instances.
                       By default False.

    Returns Server class instance
    """
    from mysql.utilities.common.replication import Master, Slave

    server_conn = None

    # Try to connect to the MySQL database server.
    if not quiet:
        _print_connection(name, values)

    server_options = {
        'conn_info': values,
        'role': name,
        'verbose': verbose,
    }
    if name.lower() == 'master':
        server_conn = Master(server_options)
    elif name.lower() == 'slave':
        server_conn = Slave(server_options)
    else:
        server_conn = Server(server_options)
    try:
        server_conn.connect()
    except:
        if not quiet:
            print("")
        raise

    return server_conn


def _require_version(server, version):
    """Check version of server

    server[in]         Server instance
    version[in]        minimal version of the server required

    Returns boolean - True = version Ok, False = version < required
    """
    if version is not None and server is not None:
        major, minor, rel = version.split(".")
        if not server.check_version_compat(major, minor, rel):
            return False
    return True


def get_server_state(server, host, pingtime=3, verbose=False):
    """Return the state of the server.

    This method returns one of the following states based on the
    criteria shown.

      UP   - server is connected
      WARN - server is not connected but can be pinged
      DOWN - server cannot be pinged nor is connected

    server[in]         Server class instance
    host[in]           host name to ping if server is not connected
    pingtime[in]       timeout in seconds for ping operation
                       Default = 3 seconds
    verbose[in]        if True, show ping status messages
                       Default = False

    Returns string - state
    """
    if verbose:
        print "# Attempting to contact %s ..." % host,
    if server is not None and server.is_alive():
        if verbose:
            print "Success"
        return "UP"
    elif ping_host(host, pingtime):
        if verbose:
            print "Server is reachable"
        return "WARN"
    if verbose:
        print "FAIL"
    return "DOWN"
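# Usage sketch (illustrative, not from the original source): connect to a
# server by role, then report its state. The connection values below are
# hypothetical.
#
#   conn = {'user': 'root', 'passwd': 'secret',
#           'host': 'localhost', 'port': 3306}
#   server = get_server('master', conn, quiet=True)
#   print get_server_state(server, conn['host'], pingtime=3)  # e.g. "UP"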
def connect_servers(src_val, dest_val, options=None):
    """Connect to a source and destination server.

    This method takes two groups of --server=user:password@host:port:socket
    values and attempts to connect one as a source connection and the other
    as the destination connection. If the source and destination are the
    same server and the unique parameter is False, destination is set to
    None.

    The method accepts one of the following types for the src_val and
    dest_val:

      - dictionary containing connection information including:
        (user, passwd, host, port, socket)
      - connection string in the form: user:pass@host:port:socket or
        login-path:port:socket or config-path[group]
      - an instance of the Server class

    src_val[in]        source connection information
    dest_val[in]       destination connection information
    options[in]        options to control behavior:
        quiet      do not print any information during the operation
                   (default is False)
        version    if specified (default is None), perform version
                   checking and fail if server version is < version
                   specified - an exception is raised
        src_name   name to use for source server (default is "Source")
        dest_name  name to use for destination server
                   (default is "Destination")
        unique     if True, servers must be different when dest_val is
                   not None (default is False)
        verbose    Verbose value used by the returned server instances
                   (default is False).

    Returns tuple (source, destination) where
            source = connection to source server
            destination = connection to destination server (set to None
                          if source and destination are the same server)
            if error, returns (None, None)
    """
    if options is None:
        options = {}
    quiet = options.get("quiet", False)
    src_name = options.get("src_name", "Source")
    dest_name = options.get("dest_name", "Destination")
    version = options.get("version", None)
    charset = options.get("charset", None)
    verbose = options.get('verbose', False)
    ssl_dict = {}
    if options.get("ssl_cert", None) is not None:
        ssl_dict['ssl_cert'] = options.get("ssl_cert")
    if options.get("ssl_ca", None) is not None:
        ssl_dict['ssl_ca'] = options.get("ssl_ca", None)
    if options.get("ssl_key", None) is not None:
        ssl_dict['ssl_key'] = options.get("ssl_key", None)
    if options.get("ssl", None) is not None:
        ssl_dict['ssl'] = options.get("ssl", None)

    source = None
    destination = None

    # Get connection dictionaries
    src_dict = get_connection_dictionary(src_val, ssl_dict)
    if "]" in src_dict['host']:
        src_dict['host'] = clean_IPv6(src_dict['host'])
    dest_dict = get_connection_dictionary(dest_val)
    if dest_dict and "]" in dest_dict['host']:
        dest_dict['host'] = clean_IPv6(dest_dict['host'])

    # Add character set
    if src_dict and charset:
        src_dict["charset"] = charset
    if dest_dict and charset:
        dest_dict["charset"] = charset

    # Check for uniqueness - dictionary
    if options.get("unique", False) and dest_dict is not None:
        dupes = False
        if "unix_socket" in src_dict and "unix_socket" in dest_dict:
            dupes = (src_dict["unix_socket"] == dest_dict["unix_socket"])
        else:
            dupes = (src_dict["port"] == dest_dict["port"]) and \
                    (src_dict["host"] == dest_dict["host"])
        if dupes:
            raise UtilError("You must specify two different servers "
                            "for the operation.")

    # If we are cloning, use the same server for a faster copy.
    cloning = dest_dict is None or (src_dict == dest_dict)

    # Connect to the source server and check version
    if isinstance(src_val, Server):
        source = src_val
    else:
        source = get_server(src_name, src_dict, quiet, verbose=verbose)
        if not quiet:
            print "connected."
    if not _require_version(source, version):
        raise UtilError("The %s version is incompatible. Utility "
                        "requires version %s or higher." %
                        (src_name, version))

    # If not cloning, connect to the destination server and check version
    if not cloning:
        if isinstance(dest_val, Server):
            destination = dest_val
        else:
            destination = get_server(dest_name, dest_dict, quiet,
                                     verbose=verbose)
            if not quiet:
                print "connected."
        if not _require_version(destination, version):
            raise UtilError("The %s version is incompatible. Utility "
                            "requires version %s or higher." %
                            (dest_name, version))
    elif not quiet and dest_dict is not None and \
            not isinstance(dest_val, Server):
        try:
            _print_connection(dest_name, src_dict)
            print "connected."
        except:
            print("")
            raise
    return (source, destination)
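# Usage sketch (illustrative, not from the original source): connect a
# source/destination pair, requiring distinct servers and a minimum server
# version. The connection strings are hypothetical.
#
#   opts = {'unique': True, 'version': '5.6.9', 'quiet': True}
#   source, destination = connect_servers('root:pass@host1:3306',
#                                         'root:pass@host2:3306', opts)
#   if destination is None:
#       pass  # same server: operate in cloning mode on source only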
def test_connect(conn_info, throw_errors=False, ssl_dict=None):
    """Test connection to a server.

    The method accepts one of the following types for conn_info:

      - dictionary containing connection information including:
        (user, passwd, host, port, socket)
      - connection string in the form: user:pass@host:port:socket or
        login-path:port:socket or config-path[group]
      - an instance of the Server class

    conn_info[in]      Connection information
    throw_errors       throw any errors found during the test,
                       false by default.
    ssl_dict[in]       A dictionary with the ssl certificates
                       (ssl_ca, ssl_cert and ssl_key).

    Returns True if connection success, False if error
    """
    # Parse source connection values
    try:
        src_val = get_connection_dictionary(conn_info, ssl_dict)
    except Exception as err:
        raise ConnectionValuesError("Server connection values invalid: {0}."
                                    "".format(err))
    try:
        conn_options = {
            'quiet': True,
            'src_name': "test",
            'dest_name': None,
        }
        s = connect_servers(src_val, None, conn_options)
        s[0].disconnect()
    except UtilError:
        if throw_errors:
            raise
        return False
    return True


def check_hostname_alias(server1_vals, server2_vals):
    """Check to see if the servers are the same machine by host name.

    server1_vals[in]   connection dictionary for server1
    server2_vals[in]   connection dictionary for server2

    Returns bool - true = server1 and server2 are the same host
    """
    server1 = Server({'conn_info': server1_vals})
    server2 = Server({'conn_info': server2_vals})

    return (server1.is_alias(server2.host) and
            int(server1.port) == int(server2.port))
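# Usage sketch (illustrative, not from the original source): check that a
# server is reachable before doing any real work. The connection string is
# hypothetical.
#
#   if not test_connect('root:pass@localhost:3306'):
#       print "Cannot reach server; check the connection values."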
def stop_running_server(server, wait=10, drop=True):
    """Stop a running server.

    This method will stop a server using the mysqladmin utility to
    shutdown the server. It also destroys the datadir.

    server[in]         Server instance to stop
    wait[in]           Number of wait cycles for shutdown
                       default = 10
    drop[in]           If True, drop datadir

    Returns - True = server shutdown, False - unknown state or error
    """
    # Nothing to do if server is None
    if server is None:
        return True

    # Build the shutdown command
    res = server.show_server_variable("basedir")
    mysqladmin_client = "mysqladmin"
    if not os.name == "posix":
        mysqladmin_client = "mysqladmin.exe"
    mysqladmin_path = os.path.normpath(os.path.join(res[0][1], "bin",
                                                    mysqladmin_client))
    if not os.path.exists(mysqladmin_path):
        mysqladmin_path = os.path.normpath(os.path.join(res[0][1], "client",
                                                        mysqladmin_client))
    if not os.path.exists(mysqladmin_path) and not os.name == 'posix':
        mysqladmin_path = os.path.normpath(os.path.join(res[0][1],
                                                        "client/debug",
                                                        mysqladmin_client))
    if not os.path.exists(mysqladmin_path) and not os.name == 'posix':
        mysqladmin_path = os.path.normpath(os.path.join(res[0][1],
                                                        "client/release",
                                                        mysqladmin_client))
    if os.name == 'posix':
        cmd = "'{0}'".format(mysqladmin_path)
    else:
        cmd = '"{0}"'.format(mysqladmin_path)
    if server.socket is None and server.host == 'localhost':
        server.host = '127.0.0.1'
    cmd = "{0} shutdown --user={1} --host={2} ".format(cmd, server.user,
                                                       server.host)
    if server.passwd:
        cmd = "{0} --password={1} ".format(cmd, server.passwd)
    # Use of server socket only works with 'localhost' (not with 127.0.0.1).
    if server.socket and server.host == 'localhost':
        cmd = "{0} --socket={1} ".format(cmd, server.socket)
    else:
        cmd = "{0} --port={1} ".format(cmd, server.port)
    if server.has_ssl:
        if server.ssl_cert is not None:
            cmd = "{0} --ssl-cert={1} ".format(cmd, server.ssl_cert)
        if server.ssl_ca is not None:
            cmd = "{0} --ssl-ca={1} ".format(cmd, server.ssl_ca)
        if server.ssl_key is not None:
            cmd = "{0} --ssl-key={1} ".format(cmd, server.ssl_key)

    res = server.show_server_variable("datadir")
    datadir = os.path.normpath(res[0][1])

    # Kill all connections so shutdown will work correctly
    res = server.exec_query("SHOW PROCESSLIST")
    for row in res:
        if not row[7] or not row[7].upper().startswith("SHOW PROCESS"):
            try:
                server.exec_query("KILL CONNECTION %s" % row[0])
            except UtilDBError:
                # Ok to ignore KILL failures
                pass

    # disconnect user
    server.disconnect()

    # Stop the server
    f_null = os.devnull
    f_out = open(f_null, 'w')
    proc = subprocess.Popen(cmd, shell=True, stdout=f_out, stderr=f_out)
    ret_val = proc.wait()
    f_out.close()

    # if shutdown doesn't work, exit.
    if int(ret_val) != 0:
        return False

    # If datadir exists, delete it
    if drop:
        delete_directory(datadir)

    if os.path.exists("cmd.txt"):
        try:
            os.unlink("cmd.txt")
        except:
            pass

    return True


def log_server_version(server, level=logging.INFO):
    """Log server version message.

    This method will log the server version message. If no log file is
    provided it will also print the message to stdout.

    server[in]         Server instance.
    level[in]          Level of message to log. Default = INFO.
    """
    host_port = "{host}:{port}".format(**get_connection_dictionary(server))
    version_msg = MSG_MYSQL_VERSION.format(server=host_port,
                                           version=server.get_version())
    logging.log(level, version_msg)
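# Usage sketch (illustrative, not from the original source): shut down a
# spawned test server and remove its datadir. The connection values are
# hypothetical.
#
#   server = get_server('server', {'user': 'root', 'host': 'localhost',
#                                  'port': 13001}, quiet=True)
#   if not stop_running_server(server, drop=True):
#       print "Warning: server may still be running."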
class Server(object):
    """The Server class can be used to connect to a running MySQL server.

    The following utilities are provided:

      - Connect to the server
      - Retrieve a server variable
      - Execute a query
      - Return list of all databases
      - Return a list of specific objects for a database
      - Return a list of objects of a specific type for a database
      - Return list of all indexes for a table
      - Read SQL statements from a file and execute
    """

    def __init__(self, options=None):
        """Constructor

        The method accepts one of the following types for
        options['conn_info']:

          - dictionary containing connection information including:
            (user, passwd, host, port, socket)
          - connection string in the form: user:pass@host:port:socket or
            login-path:port:socket
          - an instance of the Server class

        options[in]        options for controlling behavior:
            conn_info      a dictionary containing connection information
                           (user, passwd, host, port, socket)
            role           Name or role of server (e.g., server, master)
            verbose        print extra data during operations (optional)
                           default value = False
            charset        Default character set for the connection.
                           (default None)
        """
        if options is None:
            options = {}

        assert options.get("conn_info") is not None

        self.verbose = options.get("verbose", False)
        self.db_conn = None
        self.host = None
        self.role = options.get("role", "Server")
        self.has_ssl = False
        conn_values = get_connection_dictionary(options.get("conn_info"))
        try:
            self.host = conn_values["host"]
            self.user = conn_values["user"]
            self.passwd = conn_values["passwd"] \
                if "passwd" in conn_values else None
            self.socket = conn_values["unix_socket"] \
                if "unix_socket" in conn_values else None
            self.port = 3306
            if conn_values["port"] is not None:
                self.port = int(conn_values["port"])
            self.charset = options.get("charset",
                                       conn_values.get("charset", None))
            # Optional values
            self.ssl_ca = conn_values.get('ssl_ca', None)
            self.ssl_cert = conn_values.get('ssl_cert', None)
            self.ssl_key = conn_values.get('ssl_key', None)
            self.ssl = conn_values.get('ssl', False)
            if self.ssl_cert or self.ssl_ca or self.ssl_key or self.ssl:
                self.has_ssl = True
        except KeyError:
            raise UtilError("Dictionary format not recognized.")
        self.connect_error = None
        # Set to TRUE when foreign key checks are ON. Check with
        # foreign_key_checks_enabled.
        self.fkeys = None
        self.autocommit = None
        self.read_only = False
        self.aliases = set()
        self.grants_enabled = None
        self._version = None

    @classmethod
    def fromServer(cls, server, conn_info=None):
        """Create a new server instance from an existing one

        Factory method that will allow the creation of a new server
        instance from an existing server.

        server[in]         instance object that must be instance of the
                           Server class or a subclass.
        conn_info[in]      A dictionary with the connection information to
                           connect to the server

        Returns an instance of the calling class as a result.
        """
        if isinstance(server, Server):
            options = {"role": server.role,
                       "verbose": server.verbose,
                       "charset": server.charset}
            if conn_info is not None and isinstance(conn_info, dict):
                options["conn_info"] = conn_info
            else:
                options["conn_info"] = server.get_connection_values()
            return cls(options)
        else:
            raise TypeError("The server argument's type is neither Server "
                            "nor a subclass of Server")

    def is_alive(self):
        """Determine if connection to server is still alive.

        Returns bool - True = alive, False = error or cannot connect.
        """
        res = True
        try:
            if self.db_conn is None:
                res = False
            else:
                # ping and is_connected only work partially, try exec_query
                # to make sure connection is really alive
                retval = self.db_conn.is_connected()
                if retval:
                    self.exec_query("SHOW DATABASES")
                else:
                    res = False
        except:
            res = False
        return res
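    # Usage sketch (illustrative, not from the original source): build a
    # second connection to the same server with the fromServer() factory,
    # then verify both connections are alive.
    #
    #   admin_conn = Server.fromServer(server)
    #   admin_conn.connect()
    #   assert server.is_alive() and admin_conn.is_alive()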
""" host_or_ip_aliases = self._get_aliases(ip_or_hostname) host_or_ip_aliases.add(ip_or_hostname) # Check if any of aliases matches with one the servers's aliases common_alias = self.aliases.intersection(host_or_ip_aliases) if common_alias: # There are common aliases, host is the same self.aliases.update(host_or_ip_aliases) return True else: # Check with and without suffixes no_suffix_server_aliases = set() no_suffix_host_aliases = set() for suffix in suffix_list: # Add alias with and without suffix from self.aliases for alias in self.aliases: if alias.endswith(suffix): try: host, _ = alias.rsplit('.', 1) no_suffix_host_aliases.add(host) except: pass # Ok if parts don't split correctly no_suffix_server_aliases.add(alias) # Add alias with and without suffix from host_aliases for alias in host_or_ip_aliases: if alias.endswith(suffix): try: host, _ = alias.rsplit('.', 1) no_suffix_host_aliases.add(host) except: pass # Ok if parts don't split correctly no_suffix_host_aliases.add(alias) # Check if there is any alias in common common_alias = no_suffix_host_aliases.intersection( no_suffix_server_aliases) if common_alias: # Same host, so update self.aliases self.aliases.update( no_suffix_host_aliases.union(no_suffix_server_aliases) ) return True return False def _get_aliases(self, host): """Gets the aliases for the given host """ aliases = set([clean_IPv6(host)]) if hostname_is_ip(clean_IPv6(host)): # IP address try: my_host = socket.gethostbyaddr(clean_IPv6(host)) aliases.add(my_host[0]) # socket.gethostbyname_ex() does not work with ipv6 if (not my_host[0].count(":") < 1 or not my_host[0] == "ip6-localhost"): host_ip = socket.gethostbyname_ex(my_host[0]) else: addrinfo = socket.getaddrinfo(my_host[0], None) host_ip = ([socket.gethostbyaddr(addrinfo[0][4][0])], [fiveple[4][0] for fiveple in addrinfo], [addrinfo[0][4][0]]) except (socket.gaierror, socket.herror, socket.error) as err: host_ip = ([], [], []) if self.verbose: print("WARNING: IP lookup by address failed for {0}," "reason: {1}".format(host, err.strerror)) else: try: # server may not really exist. host_ip = socket.gethostbyname_ex(host) except (socket.gaierror, socket.herror, socket.error) as err: if self.verbose: print("WARNING: hostname: {0} may not be reachable, " "reason: {1}".format(host, err.strerror)) return aliases aliases.add(host_ip[0]) addrinfo = socket.getaddrinfo(host, None) local_ip = None error = None for addr in addrinfo: try: local_ip = socket.gethostbyaddr(addr[4][0]) break except (socket.gaierror, socket.herror, socket.error) as err: error = err if local_ip: host_ip = ([local_ip[0]], [fiveple[4][0] for fiveple in addrinfo], [addrinfo[0][4][0]]) else: host_ip = ([], [], []) if self.verbose: print("WARNING: IP lookup by name failed for {0}," "reason: {1}".format(host, error.strerror)) aliases.update(set(host_ip[1])) aliases.update(set(host_ip[2])) return aliases def is_alias(self, host_or_ip): """Determine if host_or_ip is an alias for this host host_or_ip[in] host or IP number to check Returns bool - True = host_or_ip is an alias """ # List of possible suffixes suffixes = ('.local', '.lan', '.localdomain') host_or_ip = clean_IPv6(host_or_ip.lower()) # for quickness, verify in the existing aliases, if they exist. 
    def is_alias(self, host_or_ip):
        """Determine if host_or_ip is an alias for this host

        host_or_ip[in]     host or IP number to check

        Returns bool - True = host_or_ip is an alias
        """
        # List of possible suffixes
        suffixes = ('.local', '.lan', '.localdomain')

        host_or_ip = clean_IPv6(host_or_ip.lower())

        # For quickness, check the existing aliases first, if any exist.
        if self.aliases:
            if host_or_ip.lower() in self.aliases:
                return True
            else:
                # get the alias for the given host_or_ip
                return self._update_alias(host_or_ip, suffixes)

        # No previous alias information exists.
        # First, get the local information
        hostname_ = socket.gethostname()
        try:
            local_info = socket.gethostbyname_ex(hostname_)
            local_aliases = set([local_info[0].lower()])
            # if dotted host name, take first part and use as an alias
            try:
                local_aliases.add(local_info[0].split('.')[0])
            except:
                pass
            local_aliases.update(['127.0.0.1', 'localhost', '::1', '[::1]'])
            local_aliases.update(local_info[1])
            local_aliases.update(local_info[2])
            local_aliases.update(self._get_aliases(hostname_))
        except (socket.herror, socket.gaierror, socket.error) as err:
            if self.verbose:
                print("WARNING: Unable to find aliases for hostname"
                      " '{0}' reason: {1}".format(hostname_, str(err)))
            # Try with the basic local aliases.
            local_aliases = set(['127.0.0.1', 'localhost', '::1', '[::1]'])

        # Get the aliases for this server host
        self.aliases = self._get_aliases(self.host)

        # Check if this server is local
        for host in self.aliases.copy():
            if host in local_aliases:
                # Is local then save the local aliases for future.
                self.aliases.update(local_aliases)
                break
            # Handle special suffixes in hostnames.
            for suffix in suffixes:
                if host.endswith(suffix):
                    # Remove special suffix and attempt to match with
                    # local aliases.
                    host, _ = host.rsplit('.', 1)
                    if host in local_aliases:
                        # Is local then save the local aliases for future.
                        self.aliases.update(local_aliases)
                        break

        # Check if the given host_or_ip is alias of the server host.
        if host_or_ip in self.aliases:
            return True

        # Check if any of the aliases of ip_or_host is also an alias of
        # the host server.
        return self._update_alias(host_or_ip, suffixes)

    def user_host_exists(self, user, host_or_ip):
        """Check to see if a user, host exists

        This method attempts to see if a user name matches the users on
        the server and that any user, host pair can match the host or IP
        address specified. This attempts to resolve wildcard matches.

        user[in]           user name
        host_or_ip[in]     host or IP address

        Returns string - host from server that matches the host_or_ip or
                         None if no match.
        """
        res = self.exec_query("SELECT host FROM mysql.user WHERE user = '%s' "
                              "AND '%s' LIKE host " % (user, host_or_ip))
        if res:
            return res[0][0]
        return None

    def get_connection_values(self):
        """Return a dictionary of connection values for the server.

        Returns dictionary
        """
        conn_vals = {
            "user": self.user,
            "host": self.host
        }
        if self.passwd:
            conn_vals["passwd"] = self.passwd
        if self.socket:
            conn_vals["socket"] = self.socket
        if self.port:
            conn_vals["port"] = self.port
        if self.ssl_ca:
            conn_vals["ssl_ca"] = self.ssl_ca
        if self.ssl_cert:
            conn_vals["ssl_cert"] = self.ssl_cert
        if self.ssl_key:
            conn_vals["ssl_key"] = self.ssl_key
        if self.ssl:
            conn_vals["ssl"] = self.ssl

        return conn_vals
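    # Usage sketch (illustrative, not from the original source): resolve
    # the wildcard host entry that grants access to a client address. The
    # user name and address are hypothetical.
    #
    #   matched_host = server.user_host_exists('app_user', '192.168.1.20')
    #   if matched_host:  # e.g. '192.168.1.%' or '%'
    #       print "Grants come from: 'app_user'@'%s'" % matched_host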
if not self.charset: res = self.show_server_variable('character_set_client') self.db_conn.set_charset_collation(charset=res[0][1]) self.charset = res[0][1] if self.ssl: res = self.exec_query("SHOW STATUS LIKE 'Ssl_cipher'") if res[0][1] == '': raise UtilError("Can not encrypt server connection.") except UtilError: # Reset any previous value if the connection cannot be established, # before raising an exception. This prevents the use of a broken # database connection. self.db_conn = None raise self.connect_error = None self.read_only = self.show_server_variable("READ_ONLY")[0][1] def get_connection(self): """Return a new connection to the server. Attempts to connect to the server as specified by the connection parameters and returns a connection object. Return the resulting MySQL connection object or raises an UtilError if an error occurred during the server connection process. """ try: parameters = { 'user': self.user, 'host': self.host, 'port': self.port, } if self.socket and os.name == "posix": parameters['unix_socket'] = self.socket if self.passwd and self.passwd != "": parameters['passwd'] = self.passwd if self.charset: parameters['charset'] = self.charset parameters['host'] = parameters['host'].replace("[", "") parameters['host'] = parameters['host'].replace("]", "") # Add SSL parameters ONLY if they are not None if self.ssl_ca is not None: parameters['ssl_ca'] = self.ssl_ca if self.ssl_cert is not None: parameters['ssl_cert'] = self.ssl_cert if self.ssl_key is not None: parameters['ssl_key'] = self.ssl_key # When at least one of cert, key or ssl options are specified, # the ca option is not required for establishing the encrypted # connection, but C/py will not allow the None value for the ca # option, so we use an empty string i.e '' to avoid an error from # C/py about ca option being the None value. if ('ssl_cert' in parameters.keys() or 'ssl_key' in parameters.keys() or self.ssl) and \ 'ssl_ca' not in parameters: parameters['ssl_ca'] = '' # The ca certificate is verified only if the ssl option is also # specified. if self.ssl and parameters['ssl_ca']: parameters['ssl_verify_cert'] = True if self.has_ssl: cpy_flags = [ClientFlag.SSL, ClientFlag.SSL_VERIFY_SERVER_CERT] parameters['client_flags'] = cpy_flags db_conn = mysql.connector.connect(**parameters) # Return MySQL connection object. return db_conn except mysql.connector.Error as err: raise UtilError(err.msg, err.errno) except AttributeError as err: raise UtilError(str(err)) def disconnect(self): """Disconnect from the server. """ try: self.db_conn.disconnect() except: pass def get_version(self): """Return version number of the server. Get the server version. The respective instance variable is set with the result after querying the server the first time. The version is immediately returned when already known, avoiding querying the server at each time. Returns string - version string or None if error """ # Return the local version value if already known. if self._version: return self._version # Query the server for its version. try: res = self.show_server_variable("VERSION") if res: self._version = res[0][1] except UtilError: # Ignore errors and return _version, initialized with None. pass return self._version def check_version_compat(self, t_major, t_minor, t_rel): """Checks version of the server against requested version. This method can be used to check for version compatibility. 
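
        [Editor's note] Example (sketch):
            srv.check_version_compat(5, 6, 9)  # True when server >= 5.6.9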
t_major[in] target server version (major) t_minor[in] target server version (minor) t_rel[in] target server version (release) Returns bool True if server version is GE (>=) version specified, False if server version is LT (<) version specified """ version_str = self.get_version() if version_str is not None: match = re.match(r'^(\d+\.\d+(\.\d+)*).*$', version_str.strip()) if match: version = [int(x) for x in match.group(1).split('.')] version = (version + [0])[:3] # Ensure a 3 elements list return version >= [int(t_major), int(t_minor), int(t_rel)] else: return False return True def exec_query(self, query_str, options=None, exec_timeout=0): """Execute a query and return result set This is the singular method to execute queries. It should be the only method used as it contains critical error code to catch the issue with mysql.connector throwing an error on an empty result set. Note: will handle exception and print error if query fails Note: if fetchall is False, the method returns the cursor instance query_str[in] The query to execute options[in] Options to control behavior: params Parameters for query columns Add column headings as first row (default is False) fetch Execute the fetch as part of the operation and use a buffered cursor (default is True) raw If True, use a buffered raw cursor (default is True) commit Perform a commit (if needed) automatically at the end (default: True). exec_timeout[in] Timeout value in seconds to kill the query execution if exceeded. Value must be greater than zero for this feature to be enabled. By default 0, meaning that the query will not be killed. Returns result set or cursor """ if options is None: options = {} params = options.get('params', ()) columns = options.get('columns', False) fetch = options.get('fetch', True) raw = options.get('raw', True) do_commit = options.get('commit', True) # Guard for connect() prerequisite assert self.db_conn, "You must call connect before executing a query." # If we are fetching all, we need to use a buffered if fetch: if raw: if mysql.connector.__version_info__ < (2, 0): cur = self.db_conn.cursor(buffered=True, raw=True) else: cur = self.db_conn.cursor( cursor_class=MySQLUtilsCursorBufferedRaw) else: cur = self.db_conn.cursor(buffered=True) else: if mysql.connector.__version_info__ < (2, 0): cur = self.db_conn.cursor(raw=True) else: cur = self.db_conn.cursor(cursor_class=MySQLUtilsCursorRaw) # Execute query, handling parameters. q_killer = None try: if exec_timeout > 0: # Spawn thread to kill query if timeout is reached. # Note: set it as daemon to avoid waiting for it on exit. q_killer = QueryKillerThread(self, query_str, exec_timeout) q_killer.daemon = True q_killer.start() # Execute query. if params == (): cur.execute(query_str) else: cur.execute(query_str, params) except mysql.connector.Error as err: cur.close() if err.errno == CR_SERVER_LOST and exec_timeout > 0: # If the connection is killed (because the execution timeout is # reached), then it attempts to re-establish it (to execute # further queries) and raise a specific exception to track this # event. # CR_SERVER_LOST = Errno 2013 Lost connection to MySQL server # during query. self.db_conn.reconnect() raise UtilError("Timeout executing query", err.errno) else: raise UtilDBError("Query failed. {0}".format(err)) except Exception: cur.close() raise UtilError("Unknown error. Command: {0}".format(query_str)) finally: # Stop query killer thread if alive. if q_killer and q_killer.is_alive(): q_killer.stop() # Fetch rows (only if available or fetch = True). 
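        # [Editor's note] Caller-side sketch of the options handled above
        # (table names and values are hypothetical):
        #
        #     srv.exec_query("SELECT host FROM mysql.user WHERE user = %s",
        #                    options={'params': ('root',)})
        #     cur = srv.exec_query("SELECT * FROM db1.t1",
        #                          options={'fetch': False})  # returns the
        #                                                     # open cursor
        #     srv.exec_query("UPDATE db1.t1 SET c1 = 0",
        #                    exec_timeout=30)   # killed after 30 seconds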
if cur.with_rows: if fetch or columns: try: results = cur.fetchall() if columns: col_headings = cur.column_names col_names = [] for col in col_headings: col_names.append(col) results = col_names, results except mysql.connector.Error as err: raise UtilDBError("Error fetching all query data: " "{0}".format(err)) finally: cur.close() return results else: # Return cursor to fetch rows elsewhere (fetch = false). return cur else: # No results (not a SELECT) try: if do_commit: self.db_conn.commit() except mysql.connector.Error as err: raise UtilDBError("Error performing commit: {0}".format(err)) finally: cur.close() return cur def commit(self): """Perform a COMMIT. """ # Guard for connect() prerequisite assert self.db_conn, "You must call connect before executing a query." self.db_conn.commit() def rollback(self): """Perform a ROLLBACK. """ # Guard for connect() prerequisite assert self.db_conn, "You must call connect before executing a query." self.db_conn.rollback() def show_server_variable(self, variable): """Returns one or more rows from the SHOW VARIABLES command. variable[in] The variable or wildcard string Returns result set """ return self.exec_query("SHOW VARIABLES LIKE '%s'" % variable) def select_variable(self, var_name, var_type=None): """Get server system variable value using SELECT statement. This function displays the value of system variables using the SELECT statement. This can be used as a workaround for variables with very long values, as SHOW VARIABLES is subject to a version-dependent display-width limit. Note: Some variables may not be available using SELECT @@var_name, in such cases use SHOW VARIABLES LIKE 'var_name'. var_name[in] Name of the variable to display. var_type[in] Type of the variable ('session' or 'global'). By default no type is used, meaning that the session value is returned if it exists and the global value otherwise. Return the value for the given server system variable. """ if var_type is None: var_type = '' elif var_type.lower() in ('global', 'session', ''): var_type = '{0}.'.format(var_type) # Add dot (.) else: raise UtilDBError("Invalid variable type: {0}. Supported types: " "'global' and 'session'.".format(var_type)) # Execute SELECT @@[var_type.]var_name. # Note: An error is issued if the given variable is not known. res = self.exec_query("SELECT @@{0}{1}".format(var_type, var_name)) return res[0][0] def flush_logs(self, log_type=None): """Execute the FLUSH [log_type] LOGS statement. Reload internal logs cache and closes and reopens all log files, or only of the specified log_type. Note: The log_type option is available from MySQL 5.5.3. log_type[in] Type of the log files to be flushed. Supported values: BINARY, ENGINE, ERROR, GENERAL, RELAY, SLOW. """ if log_type: self.exec_query("FLUSH {0} LOGS".format(log_type)) else: self.exec_query("FLUSH LOGS") def get_uuid(self): """Return the uuid for this server if it is GTID aware. Returns uuid or None if server is not GTID aware. 
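
        [Editor's note] Example (sketch; the value is hypothetical):
            uuid = srv.get_uuid()   # e.g. '3E11FA47-71CA-...' or None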
""" if self.supports_gtid() != "NO": res = self.show_server_variable("server_uuid") return res[0][1] return None def supports_gtid(self): """Determine if server supports GTIDs Returns string - 'ON' = gtid supported and turned on, 'OFF' = supported but not enabled, 'NO' = not supported """ # Check servers for GTID support version_ok = self.check_version_compat(5, 6, 5) if not version_ok: return "NO" try: res = self.exec_query("SELECT @@GLOBAL.GTID_MODE") except: return "NO" return res[0][0] def check_gtid_version(self): """Determine if server supports latest GTID changes This method checks the server to ensure it contains the latest changes to the GTID variables (from version 5.6.9). Raises UtilRplError when errors occur. """ errors = [] if not self.supports_gtid() == "ON": errors.append(" GTID is not enabled.") if not self.check_version_compat(5, 6, 9): errors.append(" Server version must be 5.6.9 or greater.") if errors: errors = "\n".join(errors) errors = "\n".join([_GTID_ERROR % (self.host, self.port), errors]) raise UtilRplError(errors) def check_gtid_executed(self, operation="copy"): """Check to see if the gtid_executed variable is clear If the value is not clear, raise an error with appropriate instructions for the user to correct the issue. operation[in] Name of the operation (copy, import, etc.) default = copy """ res = self.exec_query("SHOW GLOBAL VARIABLES LIKE 'gtid_executed'")[0] if res[1].strip() == '': return err = ("The {0} operation contains GTID statements " "that require the global gtid_executed system variable on the " "target to be empty (no value). The gtid_executed value must " "be reset by issuing a RESET MASTER command on the target " "prior to attempting the {0} operation. " "Once the global gtid_executed value is cleared, you may " "retry the {0}.").format(operation) raise UtilRplError(err) def get_gtid_executed(self, skip_gtid_check=True): """Get the executed GTID set of the server. This function retrieves the (current) GTID_EXECUTED set of the server. skip_gtid_check[in] Flag indicating if the check for GTID support will be skipped or not. By default 'True' (check is skipped). Returns a string with the GTID_EXECUTED set for this server. """ if not skip_gtid_check: # Check server for GTID support. gtid_support = self.supports_gtid() == "NO" if gtid_support == 'NO': raise UtilRplError("Global Transaction IDs are not supported.") elif gtid_support == 'OFF': raise UtilError("Global Transaction IDs are not enabled.") # Get GTID_EXECUTED. try: return self.exec_query("SELECT @@GLOBAL.GTID_EXECUTED")[0][0] except UtilError: if skip_gtid_check: # Query likely failed because GTIDs are not supported, # therefore skip error in this case. return "" else: # If GTID check is not skipped re-raise exception. raise except IndexError: # If no rows are returned by query then return an empty string. return '' def gtid_subtract(self, gtid_set, gtid_subset): """Subtract given GTID sets. This function invokes GTID_SUBTRACT function on the server to retrieve the GTIDs from the given gtid_set that are not in the specified gtid_subset. gtid_set[in] Base GTID set to subtract the subset from. gtid_subset[in] GTID subset to be subtracted from the base set. Return a string with the GTID set resulting from the subtraction of the specified gtid_subset from the gtid_set. """ try: return self.exec_query( "SELECT GTID_SUBTRACT('{0}', '{1}')".format(gtid_set, gtid_subset) )[0][0] except IndexError: # If no rows are returned by query then return an empty string. 
return '' def gtid_subtract_executed(self, gtid_set): """Subtract GTID_EXECUTED to the given GTID set. This function invokes GTID_SUBTRACT function on the server to retrieve the GTIDs from the given gtid_set that are not in the GTID_EXECUTED set. gtid_set[in] Base GTID set to subtract the GTID_EXECUTED. Return a string with the GTID set resulting from the subtraction of the GTID_EXECUTED set from the specified gtid_set. """ from mysql.utilities.common.topology import _GTID_SUBTRACT_TO_EXECUTED try: result = self.exec_query( _GTID_SUBTRACT_TO_EXECUTED.format(gtid_set) )[0][0] # Remove newlines (\n and/or \r) from the GTID set string returned # by the server. return result.replace('\n', '').replace('\r', '') except IndexError: # If no rows are returned by query then return an empty string. return '' def inject_empty_trx(self, gtid, gtid_next_automatic=True): """ Inject an empty transaction. This method injects an empty transaction on the server for the given GTID. Note: SUPER privilege is required for this operation, more precisely to set the GTID_NEXT variable. gtid[in] GTID for the empty transaction to inject. gtid_next_automatic[in] Indicate if the GTID_NEXT is set to AUTOMATIC after injecting the empty transaction. By default True. """ self.exec_query("SET GTID_NEXT='{0}'".format(gtid)) self.exec_query("BEGIN") self.commit() if gtid_next_automatic: self.exec_query("SET GTID_NEXT='AUTOMATIC'") def set_gtid_next_automatic(self): """ Set GTID_NEXT to AUTOMATIC. """ self.exec_query("SET GTID_NEXT='AUTOMATIC'") def checksum_table(self, tbl_name, exec_timeout=0): """Compute checksum of specified table (CHECKSUM TABLE tbl_name). This function executes the CHECKSUM TABLE statement for the specified table and returns the result. The CHECKSUM is aborted (query killed) if a timeout value (greater than zero) is specified and the execution takes longer than the specified time. tbl_name[in] Name of the table to perform the checksum. exec_timeout[in] Maximum execution time (in seconds) of the query after which it will be killed. By default 0, no timeout. Returns a tuple with the checksum result for the target table. The first tuple element contains the result from the CHECKSUM TABLE query or None if an error occurred (e.g. execution timeout reached). The second element holds any error message or None if the operation was successful. """ try: return self.exec_query( "CHECKSUM TABLE {0}".format(tbl_name), exec_timeout=exec_timeout )[0], None except IndexError: # If no rows are returned by query then return None. return None, "No data returned by CHECKSUM TABLE" except UtilError as err: # Return None if the query is killed (exec_timeout reached). return None, err.errmsg def get_gtid_status(self): """Get the GTID information for the server. This method attempts to retrieve the GTID lists. If the server does not have GTID turned on or does not support GTID, the method will throw and exception. Returns [list, list, list] """ # Check servers for GTID support if self.supports_gtid() == "NO": raise UtilError("Global Transaction IDs are not supported.") res = self.exec_query("SELECT @@GLOBAL.GTID_MODE") if res[0][0].upper() == 'OFF': raise UtilError("Global Transaction IDs are not enabled.") gtid_data = [self.exec_query("SELECT @@GLOBAL.GTID_EXECUTED")[0], self.exec_query("SELECT @@GLOBAL.GTID_PURGED")[0], self.exec_query("SELECT @@GLOBAL.GTID_OWNED")[0]] return gtid_data def check_rpl_user(self, user, host): """Check replication user exists and has the correct privileges. 
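
        [Editor's note] Example (sketch; account and host are hypothetical):
            errors = srv.check_rpl_user('rpl', 'slave1.example.com')
            # an empty list means the user exists with REPLICATION SLAVE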
user[in] user name of rpl_user host[in] host name of rpl_user Returns [] - no exceptions, list if exceptions found """ errors = [] ipv6 = False if "]" in host: ipv6 = True host = clean_IPv6(host) result = self.user_host_exists(user, host) if ipv6: result = format_IPv6(result) if result is None or result == []: errors.append("The replication user %s@%s was not found " "on %s:%s." % (user, host, self.host, self.port)) else: rpl_user = User(self, "%s@" % user + result) if not rpl_user.has_privilege('*', '*', 'REPLICATION SLAVE'): errors.append("Replication user does not have the " "correct privilege. She needs " "'REPLICATION SLAVE' on all replicated " "databases.") return errors def supports_plugin(self, plugin): """Check if the given plugin is supported. Check to see if the server supports a plugin. Return True if plugin installed and active. plugin[in] Name of plugin to check Returns True if plugin is supported, and False otherwise. """ _PLUGIN_QUERY = ("SELECT * FROM INFORMATION_SCHEMA.PLUGINS " "WHERE PLUGIN_NAME ") res = self.exec_query("".join([_PLUGIN_QUERY, "LIKE ", "'%s" % plugin, "%'"])) if not res: return False # Now see if it is active. elif res[0][2] != 'ACTIVE': return False return True def get_all_databases(self, ignore_internal_dbs=True): """Return a result set containing all databases on the server except for internal databases (mysql, INFORMATION_SCHEMA, PERFORMANCE_SCHEMA). Note: New internal database 'sys' added by default for MySQL 5.7.7+. Returns result set """ if ignore_internal_dbs: _GET_DATABASES = """ SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA WHERE SCHEMA_NAME != 'INFORMATION_SCHEMA' AND SCHEMA_NAME != 'PERFORMANCE_SCHEMA' AND SCHEMA_NAME != 'mysql' """ # Starting from MySQL 5.7.7, sys schema is installed by default. if self.check_version_compat(5, 7, 7): _GET_DATABASES = "{0} AND SCHEMA_NAME != 'sys'".format( _GET_DATABASES) else: _GET_DATABASES = """ SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA """ return self.exec_query(_GET_DATABASES) def get_storage_engines(self): """Return list of storage engines on this server. Returns (list) (engine, support, comment) """ _QUERY = """ SELECT UPPER(engine), UPPER(support) FROM INFORMATION_SCHEMA.ENGINES ORDER BY engine """ return self.exec_query(_QUERY) def check_storage_engines(self, other_list): """Compare storage engines from another server. This method compares the list of storage engines for the current server against a list supplied as **other_list**. It returns two lists - one for the storage engines on this server not on the other list, and another for the storage engines on the other list not on this server. Note: type case sensitive - make sure list is in uppercase other_list[in] A list from another server in the form (engine, support) - same output as get_storage_engines() Returns (list, list) """ # Guard for connect() prerequisite assert self.db_conn, "You must call connect before check engine lists." def _convert_set_to_list(set_items): """Convert a set to list """ if len(set_items) > 0: item_list = [] for item in set_items: item_list.append(item) else: item_list = None return item_list # trivial, but guard against misuse this_list = self.get_storage_engines() if other_list is None: return (this_list, None) same = set(this_list) & set(other_list) master_extra = _convert_set_to_list(set(this_list) - same) slave_extra = _convert_set_to_list(set(other_list) - same) return (master_extra, slave_extra) def has_storage_engine(self, target): """Check to see if an engine exists and is supported. 
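
        [Editor's note] Example (sketch):
            srv.has_storage_engine('InnoDB')   # True on most servers
            srv.has_storage_engine('')         # True - use server default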
        target[in]     name of engine to find

        Returns bool True - engine exists and is active,
                     False = does not exist or is not supported/not
                     active/disabled
        """
        if len(target) == 0:
            return True  # This says we will use default engine on the
                         # server.
        if target is not None:
            engines = self.get_storage_engines()
            for engine in engines:
                if engine[0].upper() == target.upper() and \
                   engine[1].upper() in ['YES', 'DEFAULT']:
                    return True
        return False

    def substitute_engine(self, tbl_name, create_str,
                          new_engine, def_engine, quiet=False):
        """Replace storage engine in CREATE TABLE

        This method will replace the storage engine in the CREATE
        statement under the following conditions:
            - If new_engine is specified and it exists on destination,
              use it.
            - Else if the existing engine does not exist and def_engine is
              specified and it exists on destination, use it. Also, don't
              substitute if the existing engine will not be changed.

        tbl_name[in]   table name
        create_str[in] CREATE statement
        new_engine[in] name of storage engine to substitute (convert to)
        def_engine[in] name of storage engine to use if the existing
                       engine does not exist

        Returns string CREATE string with replacements if found, else
                       return original string
        """
        res = [create_str]
        exist_engine = ''
        is_create_like = False
        replace_msg = "# Replacing ENGINE=%s with ENGINE=%s for table %s."
        add_msg = "# Adding missing ENGINE=%s clause for table %s."
        if new_engine is not None or def_engine is not None:
            i = create_str.find("ENGINE=")
            if i > 0:
                j = create_str.find(" ", i)
                exist_engine = create_str[i + 7:j]
            else:
                # Check if it is a CREATE TABLE LIKE statement
                is_create_like = (create_str.find(
                    "CREATE TABLE {0} LIKE".format(tbl_name)) == 0)

        # Set default engine
        #
        # If a default engine is specified and is not the same as the
        # engine specified in the table CREATE statement (existing engine)
        # if specified, and both engines exist on the server, replace the
        # existing engine with the default engine.
        #
        if def_engine is not None and \
           exist_engine.upper() != def_engine.upper() and \
           self.has_storage_engine(def_engine) and \
           self.has_storage_engine(exist_engine):
            # If no ENGINE= clause present, add it
            if len(exist_engine) == 0:
                if is_create_like:
                    alter_str = "ALTER TABLE {0} ENGINE={1}".format(
                        tbl_name, def_engine)
                    res = [create_str, alter_str]
                else:
                    i = create_str.find(";")
                    i = len(create_str) if i == -1 else i
                    create_str = "{0} ENGINE={1};".format(create_str[0:i],
                                                          def_engine)
                    res = [create_str]
            # replace the existing storage engine
            else:
                create_str = create_str.replace(
                    "ENGINE=%s" % exist_engine,
                    "ENGINE=%s" % def_engine)
                res = [create_str]
            if not quiet:
                if len(exist_engine) > 0:
                    print replace_msg % (exist_engine, def_engine, tbl_name)
                else:
                    print add_msg % (def_engine, tbl_name)
            exist_engine = def_engine

        # Use new engine
        if (new_engine is not None and
                exist_engine.upper() != new_engine.upper() and
                self.has_storage_engine(new_engine)):
            if len(exist_engine) == 0:
                if is_create_like:
                    alter_str = "ALTER TABLE {0} ENGINE={1}".format(
                        tbl_name, new_engine)
                    res = [create_str, alter_str]
                else:
                    i = create_str.find(";")
                    i = len(create_str) if i == -1 else i
                    create_str = "{0} ENGINE={1};".format(create_str[0:i],
                                                          new_engine)
                    res = [create_str]
            else:
                create_str = create_str.replace(
                    "ENGINE=%s" % exist_engine,
                    "ENGINE=%s" % new_engine)
                res = [create_str]
            if not quiet:
                if len(exist_engine) > 0:
                    print replace_msg % (exist_engine, new_engine, tbl_name)
                else:
                    print add_msg % (new_engine, tbl_name)
        return res

    def get_innodb_stats(self):
        """Return type of InnoDB engine and its version information.
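
        [Editor's note] Example (sketch; the code below returns a 4-tuple):
            inno_type, version, type_version, have_innodb = \
                srv.get_innodb_stats()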
This method returns a tuple containing the type of InnoDB storage engine (builtin or plugin) and the version number reported. Returns (tuple) (type = 'builtin' or 'plugin', version_number, have_innodb = True or False) """ # Guard for connect() prerequisite assert self.db_conn, "You must call connect before get innodb stats." _BUILTIN = """ SELECT (support='YES' OR support='DEFAULT' OR support='ENABLED') AS `exists` FROM INFORMATION_SCHEMA.ENGINES WHERE engine = 'innodb'; """ _PLUGIN = """ SELECT (plugin_library LIKE 'ha_innodb_plugin%') AS `exists` FROM INFORMATION_SCHEMA.PLUGINS WHERE LOWER(plugin_name) = 'innodb' AND LOWER(plugin_status) = 'active'; """ _VERSION = """ SELECT plugin_version, plugin_type_version FROM INFORMATION_SCHEMA.PLUGINS WHERE LOWER(plugin_name) = 'innodb'; """ inno_type = None results = self.exec_query(_BUILTIN) if results is not None and results != () and results[0][0] is not None: inno_type = "builtin" results = self.exec_query(_PLUGIN) if results is not None and results != () and \ results != [] and results[0][0] is not None: inno_type = "plugin " results = self.exec_query(_VERSION) version = [] if results is not None: version.append(results[0][0]) version.append(results[0][1]) else: version.append(None) version.append(None) results = self.show_server_variable("have_innodb") if results is not None and results != [] and \ results[0][1].lower() == "yes": have_innodb = True else: have_innodb = False return (inno_type, version[0], version[1], have_innodb) def read_and_exec_SQL(self, input_file, verbose=False): """Read an input file containing SQL statements and execute them. input_file[in] The full path to the file verbose[in] Print the command read Default = False Returns True = success, False = error TODO : Make method read multi-line queries. """ f_input = open(input_file) res = True while True: cmd = f_input.readline() if not cmd: break res = None if len(cmd) > 1: if cmd[0] != '#': if verbose: print cmd query_options = { 'fetch': False } res = self.exec_query(cmd, query_options) f_input.close() return res def binlog_enabled(self): """Check binary logging status for the client. Returns bool - True - binary logging is ON, False = OFF """ res = self.show_server_variable("log_bin") if not res: raise UtilRplError("Cannot retrieve status of log_bin variable.") if res[0][1] in ("OFF", "0"): return False return True def toggle_binlog(self, action="disable"): """Enable or disable binary logging for the client. Note: user must have SUPER privilege action[in] if 'disable', turn off the binary log elif 'enable' turn binary log on do nothing if action != 'enable' or 'disable' """ if action.lower() == 'disable': self.exec_query("SET SQL_LOG_BIN=0") elif action.lower() == 'enable': self.exec_query("SET SQL_LOG_BIN=1") def foreign_key_checks_enabled(self, force=False): """Check foreign key status for the connection. force[in] if True, returns the value directly from the server instead of returning the cached fkey value Returns bool - True - foreign keys are enabled """ if self.fkeys is None or force: res = self.exec_query("SELECT @@GLOBAL.foreign_key_checks") self.fkeys = (res is not None) and (res[0][0] == "1") return self.fkeys def disable_foreign_key_checks(self, disable=True): """Enable or disable foreign key checks for the connection. disable[in] if True, turn off foreign key checks elif False turn foreign key checks on. 
""" if self.fkeys is None: self.foreign_key_checks_enabled() # Only do something if foreign keys are OFF and shouldn't be disabled # or if they are ON and should be disabled if self.fkeys == disable: val = "OFF" if disable else "ON" self.exec_query(_FOREIGN_KEY_SET.format(val), {'fetch': False, 'commit': False}) self.fkeys = not self.fkeys def autocommit_set(self): """Check autocommit status for the connection. Returns bool - True if autocommit is enabled and False otherwise. """ if self.autocommit is None: res = self.show_server_variable('autocommit') self.autocommit = (res and res[0][1] == '1') return self.autocommit def toggle_autocommit(self, enable=None): """Enable or disable autocommit for the connection. This method switch the autocommit value or enable/disable it according to the given parameter. enable[in] if True, turn on autocommit (set to 1) else if False turn autocommit off (set to 0). """ if enable is None: # Switch autocommit value. if self.autocommit is None: # Get autocommit value if unknown self.autocommit_set() if self.autocommit: value = '0' self.autocommit = False else: value = '1' self.autocommit = True else: # Set AUTOCOMMIT according to provided value. if enable: value = '1' self.autocommit = True else: value = '0' self.autocommit = False # Change autocommit value. self.exec_query(_AUTOCOMMIT_SET.format(value), {'fetch': 'false'}) def get_server_id(self): """Retrieve the server id. Returns int - server id. """ try: res = self.show_server_variable("server_id") except: raise UtilRplError("Cannot retrieve server id from " "%s." % self.role) return int(res[0][1]) def get_server_uuid(self): """Retrieve the server uuid. Returns string - server uuid. """ try: res = self.show_server_variable("server_uuid") if res is None or res == []: return None except: raise UtilRplError("Cannot retrieve server_uuid from " "%s." % self.role) return res[0][1] def get_lctn(self): """Get lower_case_table_name setting. Returns lctn value or None if cannot get value """ res = self.show_server_variable("lower_case_table_names") if res != []: return res[0][1] return None def get_binary_logs(self, options=None): """Return a list of the binary logs. options[in] query options Returns list - binlogs or None if binary logging turned off """ if options is None: options = {} if self.binlog_enabled(): return self.exec_query("SHOW BINARY LOGS", options) return None def set_read_only(self, on=False): """Turn read only mode on/off on[in] if True, turn read_only ON Default is False """ # Only turn on|off read only if it were off at connect() if not self.read_only: return self.exec_query("SET @@GLOBAL.READ_ONLY = %s" % "ON" if on else "OFF") return None def grant_tables_enabled(self): """Check to see if grant tables are enabled Returns bool - True = grant tables are enabled, False = disabled """ if self.grants_enabled is None: try: self.exec_query("SHOW GRANTS FOR 'snuffles'@'host'") self.grants_enabled = True except UtilError as error: if "--skip-grant-tables" in error.errmsg: self.grants_enabled = False # Ignore other errors as they are not pertinent to the check else: self.grants_enabled = True return self.grants_enabled def get_server_binlogs_list(self, include_size=False): """Find the binlog file names listed on a server. Obtains the binlog file names available on the server by using the 'SHOW BINARY LOGS' query at the given server instance and returns these file names as a list. include_size[in] Boolean value to indicate if the returning list shall include the size of the file. 
Returns a list with the binary logs names available on master. """ res = self.exec_query("SHOW BINARY LOGS") server_binlogs = [] for row in res: if include_size: server_binlogs.append(row) else: server_binlogs.append(row[0]) return server_binlogs class QueryKillerThread(threading.Thread): """Class to run a thread to kill an executing query. This class is used to spawn a thread than will kill the execution (connection) of a query upon reaching a given timeout. """ def __init__(self, server, query, timeout): """Constructor. server[in] Server instance where the target query is executed. query[in] Target query to kill. timeout[in] Timeout value in seconds used to kill the query when reached. """ threading.Thread.__init__(self) self._stop_event = threading.Event() self._query = query self._timeout = timeout self._server = server self._connection = server.get_connection() server.get_version() def run(self): """Main execution of the query killer thread. Stop the thread if instructed as such """ connector_error = None # Kill the query connection upon reaching the given execution timeout. while not self._stop_event.is_set(): # Wait during the defined time. self._stop_event.wait(self._timeout) # If the thread was asked to stop during wait, it does not try to # kill the query. if not self._stop_event.is_set(): try: if mysql.connector.__version_info__ < (2, 0): cur = self._connection.cursor(raw=True) else: cur = self._connection.cursor( cursor_class=MySQLUtilsCursorRaw) # Get process information from threads table when available # (for versions > 5.6.1), since it does not require a mutex # and has minimal impact on server performance. if self._server.check_version_compat(5, 6, 1): cur.execute( "SELECT processlist_id " "FROM performance_schema.threads" " WHERE processlist_command='Query'" " AND processlist_info='{0}'".format(self._query)) else: cur.execute( "SELECT id FROM information_schema.processlist" " WHERE command='Query'" " AND info='{0}'".format(self._query)) result = cur.fetchall() try: process_id = result[0][0] except IndexError: # No rows are returned if the query ended in the # meantime. process_id = None # Kill the connection associated to que process id. # Note: killing the query will not work with # connector-python,since it will hang waiting for the # query to return. if process_id: cur.execute("KILL {0}".format(process_id)) except mysql.connector.Error as err: # Hold error to raise at the end. connector_error = err finally: # Close cursor if available. if cur: cur.close() # Stop this thread. self.stop() # Close connection. try: self._connection.disconnect() except mysql.connector.Error: # Only raise error if no previous error has occurred. if not connector_error: raise finally: # Raise any previous error that already occurred. if connector_error is not None: # pylint: disable=E0702 raise connector_error def stop(self): """Stop the thread. Set the event flag for the thread to stop as soon as possible. """ self._stop_event.set() mysql-utilities-1.6.4/mysql/utilities/common/user.py0000755001577100752670000007625512747670311022414 0ustar pb2usercommon# # Copyright (c) 2010, 2016, Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. 
# # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This module contains and abstraction of a MySQL user object. """ import re from collections import namedtuple, defaultdict from mysql.utilities.common.grants_info import filter_grants from mysql.utilities.exception import UtilError, UtilDBError, FormatError from mysql.utilities.common.ip_parser import parse_connection, clean_IPv6 from mysql.utilities.common.messages import ERROR_USER_WITHOUT_PRIVILEGES from mysql.utilities.common.pattern_matching import parse_object_name from mysql.utilities.common.sql_transform import (is_quoted_with_backticks, quote_with_backticks) def change_user_privileges(server, user_name, user_passwd, host, grant_list=None, revoke_list=None, disable_binlog=False, create_user=False): """ Change the privileges of a new or existing user. This method GRANT or REVOKE privileges to a new user (creating it) or existing user. server[in] MySQL server instances to apply changes (from mysql.utilities.common.server.Server). user_name[in] user name to apply changes. user_passwd[in] user's password. host[in] host name associated to the user account. grant_list[in] List of privileges to GRANT. revoke_list[in] List of privileges to REVOKE. disable_binlog[in] Boolean value to determine if the binary logging will be disabled to perform this operation (and re-enabled at the end). By default: False (do not disable binary logging). create_user[in] Boolean value to determine if the user will be created before changing its privileges. By default: False (do no create user). """ if disable_binlog: server.exec_query("SET SQL_LOG_BIN=0") if create_user: server.exec_query("CREATE USER '{0}'@'{1}' IDENTIFIED BY " "'{2}'".format(user_name, host, user_passwd)) if grant_list: grants_str = ", ".join(grant_list) server.exec_query("GRANT {0} ON *.* TO '{1}'@'{2}' IDENTIFIED BY " "'{3}'".format(grants_str, user_name, host, user_passwd)) if revoke_list: revoke_str = ", ".join(revoke_list) server.exec_query("REVOKE {0} ON *.* FROM '{1}'@'{2}'" "".format(revoke_str, user_name, host)) if disable_binlog: server.exec_query("SET SQL_LOG_BIN=1") def parse_user_host(user_name): """Parse user, passwd, host, port from user:passwd@host user_name[in] MySQL user string (user:passwd@host) """ no_ticks = user_name.replace("'", "") try: conn_values = parse_connection(no_ticks) except FormatError: raise UtilError("Cannot parse user:pass@host : %s." % no_ticks) return (conn_values['user'], conn_values['passwd'], conn_values['host']) def grant_proxy_ssl_privileges(server, user, passw, at='localhost', privs="ALL PRIVILEGES", grant_opt=True, ssl=True, grant_proxy=True, proxy_user='root', proxy_host='localhost'): """Grant privileges to an user in a server with GRANT OPTION or/and REQUIRE SSL if required. server[in] Server to execute the grant query at. user_name[in] New user name. passw[in] password of the new user. at[in] Used in GRANT "TO '{0}'@'{1}'".format(user, at), (default localhost) grant_opt[in] if True, it will grant with GRANT OPTION (default True). ssl[in] if True, it will set REQUIRE SSL (default True). 
grant_proxy[in] if True, it will grant GRANT PROXY (default True). proxy_user[in] username for the proxied account (default: root) proxy_host[in] hostname for the proxied account (default: localhost) Note: Raises UtilError on any Error. """ grant = [ "GRANT", privs, "ON *.*", "TO '{0}'@'{1}'".format(user, at), "IDENTIFIED BY '{0}'".format(passw) if passw else "", "REQUIRE SSL" if ssl else "", "WITH GRANT OPTION" if grant_opt else "" ] try: server.exec_query(" ".join(grant)) except UtilDBError as err: raise UtilError("Cannot create new user {0} at {1}:{2} reason:" "{3}".format(user, server.host, server.port, err.errmsg)) if grant_proxy: grant = ("GRANT PROXY ON '{0}'@'{1}' " "TO '{2}'@'{3}' " "WITH GRANT OPTION").format(proxy_user, proxy_host, user, at) try: server.exec_query(grant) except UtilDBError as err: raise UtilError("Cannot grant proxy to user {0} at {1}:{2} " "reason:{3}".format(user, server.host, server.port, err.errmsg)) def check_privileges(server, operation, privileges, description, verbosity=0, reporter=None): """Check required privileges. This method check if the used user possess the required privileges to execute a statement or operation. An exception is thrown if the user doesn't have enough privileges. server[in] Server instance to check. operation[in] The name of tha task that requires the privileges, used in the error message if an exception is thrown. privileges[in] List of the required privileges. description[in] Description of the operation requiring the User's privileges, used in the message if verbosity if given. verbosity[in] Verbosity. reporter[in] A method to invoke with messages and warnings (by default print). """ # print message with the given reporter. if reporter is None and verbosity > 0: print("# Checking user permission to {0}...\n" "#".format(description)) elif reporter is not None and verbosity > 0: reporter("# Checking user permission to {0}...\n" "#".format(description)) # Check privileges user_obj = User(server, "{0}@{1}".format(server.user, server.host)) need_privileges = [] for privilege in privileges: if not user_obj.has_privilege('*', '*', privilege): need_privileges.append(privilege) if len(need_privileges) > 0: if len(need_privileges) > 1: privileges_needed = "{0} and {1}".format( ", ".join(need_privileges[:-1]), need_privileges[-1] ) else: privileges_needed = need_privileges[0] raise UtilError(ERROR_USER_WITHOUT_PRIVILEGES.format( user=server.user, host=server.host, port=server.port, operation=operation, req_privileges=privileges_needed )) class User(object): """ The User class can be used to clone the user and its grants to another user with the following utilities: - Parsing user@host:passwd strings - Create, Drop user - Check to see if user exists - Retrieving and printing grants for user """ def __init__(self, server1, user, verbosity=0): """Constructor server1[in] Server class user[in] MySQL user credentials string (user@host:passwd) verbose[in] print extra data during operations (optional) default value = False """ self.server1 = server1 if server1.db_conn: self.sql_mode = self.server1.select_variable("SQL_MODE") else: self.sql_mode = "" self.user, self.passwd, self.host = parse_user_host(user) self.verbosity = verbosity self.current_user = None self.grant_dict = None self.global_grant_dict = None self.grant_list = None self.global_grant_list = None self.query_options = { 'fetch': False } def create(self, new_user=None, authentication=None): """Create the user Attempts to create the user. 
If the operation fails, an error is generated and printed. new_user[in] MySQL user string (user@host:passwd) (optional) If omitted, operation is performed on the class instance user name. authentication[in] Special authentication clause for non-native authentication plugins """ auth_str = "SELECT * FROM INFORMATION_SCHEMA.PLUGINS WHERE " \ "PLUGIN_NAME = '{0}' AND PLUGIN_STATUS = 'ACTIVE';" query_str = "CREATE USER " user, passwd, host = None, None, None if new_user: user, passwd, host = parse_user_host(new_user) query_str += "'%s'@'%s' " % (user, host) else: query_str += "'%s'@'%s' " % (self.user, self.host) passwd = self.passwd if passwd and authentication: print("WARNING: using a password and an authentication plugin is " "not permited. The password will be used instead of the " "authentication plugin.") if passwd: query_str += "IDENTIFIED BY '{0}'".format(passwd) elif authentication: # need to validate authentication plugin res = self.server1.exec_query(auth_str.format(authentication)) if (res is None) or (res == []): raise UtilDBError("Plugin {0} not loaded or not active. " "Cannot create user.".format(authentication)) query_str += "IDENTIFIED WITH '{0}'".format(authentication) if self.verbosity > 0: print query_str self.server1.exec_query(query_str, self.query_options) def drop(self, new_user=None): """Drop user from the server Attempts to drop the user. If the operation fails, an error is generated and printed. new_user[in] MySQL user string (user@host:passwd) (optional) If omitted, operation is performed on the class instance user name. """ query_str = "DROP USER " if new_user: user, _, host = parse_user_host(new_user) query_str += "'%s'@'%s' " % (user, host) else: query_str += "'%s'@'%s' " % (self.user, self.host) if self.verbosity > 0: print query_str try: self.server1.exec_query(query_str, self.query_options) except UtilError: return False return True def exists(self, user_name=None): """Check to see if the user exists user_name[in] MySQL user string (user@host:passwd) (optional) If omitted, operation is performed on the class instance user name. return True = user exists, False = user does not exist """ user, host, _ = self.user, self.host, self.passwd if user_name: user, _, host = parse_user_host(user_name) res = self.server1.exec_query("SELECT * FROM mysql.user " "WHERE user = %s and host = %s", {'params': (user, host)}) return (res is not None and len(res) >= 1) @staticmethod def _get_grants_as_dict(grant_list, verbosity=0, sql_mode=''): """Transforms list of grant string statements into a dictionary. grant_list[in] List of grant strings as returned from the server Returns a default_dict with the grant information """ grant_dict = defaultdict(lambda: defaultdict(set)) for grant in grant_list: grant_tpl = User._parse_grant_statement(grant[0], sql_mode) # Ignore PROXY privilege, it is not yet supported if verbosity > 0: if 'PROXY' in grant_tpl: print("#WARNING: PROXY privilege will be ignored.") grant_tpl.privileges.discard('PROXY') if grant_tpl.privileges: grant_dict[grant_tpl.db][grant_tpl.object].update( grant_tpl.privileges) return grant_dict def get_grants(self, globals_privs=False, as_dict=False, refresh=False): """Retrieve the grants for the current user globals_privs[in] Include global privileges in clone (i.e. user@%) as_dict[in] If True, instead of a list of plain grant strings, return a dictionary with the grants. refresh[in] If True, reads grant privileges directly from the server and updates cached values, otherwise uses the cached values. 
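
        [Editor's note] Example (sketch; ``usr`` is a User instance):
            usr.get_grants()                    # plain grant strings
            usr.get_grants(globals_privs=True)  # include user@'%' grants
            usr.get_grants(as_dict=True)        # {db: {obj: set(privs)}}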
returns result set or None if no grants defined """ # only read values from server if needed if refresh or not self.grant_list or not self.global_grant_list: # Get the users' connection user@host if not retrieved if self.current_user is None: res = self.server1.exec_query("SELECT CURRENT_USER()") parts = res[0][0].split('@') # If we're connected as some other user, use the user@host # defined at instantiation if parts[0] != self.user: host = clean_IPv6(self.host) self.current_user = "'%s'@'%s'" % (self.user, host) else: self.current_user = "'%s'@'%s'" % (parts[0], parts[1]) grants = [] try: res = self.server1.exec_query("SHOW GRANTS FOR " "{0}".format(self.current_user)) for grant in res: grants.append(grant) except UtilDBError: pass # Error here is ok - no grants found. # Cache user grants self.grant_list = grants[:] self.grant_dict = User._get_grants_as_dict(self.grant_list, self.verbosity, self.sql_mode) # If current user is already using global host wildcard '%', there # is no need to run the show grants again. if globals_privs: if self.host != '%': try: res = self.server1.exec_query( "SHOW GRANTS FOR '{0}'{1}".format(self.user, "@'%'")) for grant in res: grants.append(grant) self.global_grant_list = grants[:] self.global_grant_dict = User._get_grants_as_dict( self.global_grant_list, self.verbosity) except UtilDBError: # User has no global privs, return the just the ones # for current host self.global_grant_list = self.grant_list self.global_grant_dict = self.grant_dict else: # if host is % then we already have the global privs self.global_grant_list = self.grant_list self.global_grant_dict = self.grant_dict if globals_privs: if as_dict: return self.global_grant_dict else: return self.global_grant_list else: if as_dict: return self.grant_dict else: return self.grant_list def get_grants_for_object(self, qualified_obj_name, obj_type_str, global_privs=False): """ Retrieves the list of grants that the current user has that that have effect over a given object. qualified_obj_name[in] String with the qualified name of the object. obj_type_str[in] String with the type of the object that we are working with, must be one of 'ROUTINE', 'TABLE' or 'DATABASE'. global_privs[in] If True, the wildcard'%' host privileges are also taken into account This method takes the MySQL privilege hierarchy into account, e.g, if the qualified object is a table, it returns all the grant statements for this user regarding that table, as well as the grant statements for this user regarding the db where the table is at and finally any global grants that the user might have. Returns a list of strings with the grant statements. 
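
        [Editor's note] Example (sketch; object name is hypothetical):
            stmts = usr.get_grants_for_object('`db1`.`t1`', 'TABLE')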
""" grant_stm_lst = self.get_grants(global_privs) m_objs = parse_object_name(qualified_obj_name, self.sql_mode) grants = [] if not m_objs: raise UtilError("Cannot parse the specified qualified name " "'{0}'".format(qualified_obj_name)) else: db_name, obj_name = m_objs # Quote database and object name if necessary if not is_quoted_with_backticks(db_name, self.sql_mode): db_name = quote_with_backticks(db_name, self.sql_mode) if obj_name and obj_name != '*': if not is_quoted_with_backticks(obj_name, self.sql_mode): obj_name = quote_with_backticks(obj_name, self.sql_mode) # For each grant statement look for the ones that apply to this # user and object for grant_stm in grant_stm_lst: grant_tpl = self._parse_grant_statement(grant_stm[0], self.sql_mode) if grant_tpl: # Check if any of the privileges applies to this object # and if it does then check if it inherited from this # statement if filter_grants(grant_tpl.privileges, obj_type_str): # Add global grants if grant_tpl.db == '*': grants.append(grant_stm[0]) continue # Add database level grants if grant_tpl.db == db_name and grant_tpl.object == '*': grants.append(grant_stm[0]) continue # If it is an object, add existing object level grants # as well. if obj_name: if (grant_tpl.db == db_name and grant_tpl.object == obj_name): grants.append(grant_stm[0]) return grants def has_privilege(self, db, obj, access, allow_skip_grant_tables=True): """Check to see user has a specific access to a db.object. db[in] Name of database obj[in] Name of object access[in] MySQL privilege to check (e.g. SELECT, SUPER, DROP) allow_skip_grant_tables[in] If True, allow silent failure for cases where the server is started with --skip-grant-tables. Default=True Returns True if user has access, False if not """ grants_enabled = self.server1.grant_tables_enabled() # If grants are disabled and it is Ok to allow skipped grant tables, # return True - privileges disabled so user can do anything. if allow_skip_grant_tables and not grants_enabled: return True # Convert privilege to upper cases. access = access.upper() # Get grant dictionary grant_dict = self.get_grants(globals_privs=True, as_dict=True) # If self has all privileges for all databases, no need to check, # simply return True if ("ALL PRIVILEGES" in grant_dict['*']['*'] and "GRANT OPTION" in grant_dict['*']['*']): return True # Quote db and obj with backticks if necessary if not is_quoted_with_backticks(db, self.sql_mode) and db != '*': db = quote_with_backticks(db, self.sql_mode) if not is_quoted_with_backticks(obj, self.sql_mode) and obj != '*': obj = quote_with_backticks(obj, self.sql_mode) # USAGE privilege is the same as no privileges, # so everyone has it. if access == "USAGE": return True # Even if we have ALL PRIVILEGES grant, we might not have WITH GRANT # OPTION privilege. # Check server wide grants. elif (access in grant_dict['*']['*'] or "ALL PRIVILEGES" in grant_dict['*']['*'] and access != "GRANT OPTION"): return True # Check database level grants. elif (access in grant_dict[db]['*'] or "ALL PRIVILEGES" in grant_dict[db]['*'] and access != "GRANT OPTION"): return True # Check object level grants. 
elif (access in grant_dict[db][obj] or "ALL PRIVILEGES" in grant_dict[db][obj] and access != "GRANT OPTION"): return True else: return False def contains_user_privileges(self, user, plus_grant_option=False): """Checks if privileges of given user are a subset of self's privileges user[in] instance of the user class plus_grant_option[in] if True, checks if besides the all the other privileges, self has also the GRANT OPTION in all of the bd, tables in which the user passed as argument has privileges. Required for instance if we will be using self to clone the user. return_missing[in] if True, return a set with the missing grants instead of simply a boolean value. Returns True if the grants of the user passed as argument are a subset of the grants of self, otherwise returns False. """ user_grants = user.get_grants(as_dict=True) # If we are cloning User1, using User2, then User2 needs # the GRANT OPTION privilege in each of the db,table where # User1 has privileges. if plus_grant_option: for db in user_grants: for table in user_grants[db]: priv_set = user_grants[db][table] # Ignore empty grant sets that might exist as a # consequence of consulting the defaultdict. if priv_set: # Ignore USAGE grant as it means no privileges. if (len(priv_set) == 1 and "USAGE" in priv_set): continue else: priv_set.add('GRANT OPTION') for db in user_grants: for table in user_grants[db]: priv_set = user_grants[db][table] for priv in priv_set: if self.has_privilege(db, table, priv): continue else: return False return True def missing_user_privileges(self, user, plus_grant_option=False): """Checks if privileges of given user are a subset of self's privileges user[in] instance of the user class plus_grant_option[in] if True, checks if besides the all the other privileges, self has also the GRANT OPTION in all of the bd, tables in which the user passed as argument has privileges. Required for instance if we will be using self to clone the user. return_missing[in] if True, return a set with the missing grants instead of simply a boolean value. Returns empty set if the grants of the user passed as argument are a subset of the grants of self, otherwise a set with the missing privileges from self. """ user_grants = user.get_grants(as_dict=True) missing_grants = set() # If we are cloning User1, using User2, then User2 needs # the GRANT OPTION privilege in each of the db,table where # User1 has privileges. if plus_grant_option: for db in user_grants: for table in user_grants[db]: priv_set = user_grants[db][table] # Ignore empty grant sets that might exist as a # consequence of consulting the defaultdict. if priv_set: # Ignore USAGE grant as it means no privileges. 
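                        # [Editor's note] Subset-check sketch (accounts are
                        # hypothetical; ``admin`` will be used to clone
                        # ``joe``):
                        #
                        #     admin = User(srv, 'admin@localhost')
                        #     joe = User(srv, 'joe@localhost')
                        #     admin.missing_user_privileges(
                        #         joe, plus_grant_option=True)
                        #     # -> set() when admin can safely clone joe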
if (len(priv_set) == 1 and "USAGE" in priv_set): continue else: priv_set.add('GRANT OPTION') for db in user_grants: for table in user_grants[db]: priv_set = user_grants[db][table] for priv in priv_set: if self.has_privilege(db, table, priv): continue else: missing_grants.add((priv, db, table)) return missing_grants def print_grants(self): """Display grants for the current user""" res = self.get_grants(True) for grant_tuple in res: print grant_tuple[0] def _get_authentication(self): res = self.server1.exec_query("SELECT plugin FROM mysql.user " "WHERE user='{0}' and host='{1}'" "".format(self.user, self.host)) if res == [] or res[0][0] == 'mysql_native_password': return None return res[0][0] def clone(self, new_user, destination=None, globals_privs=False): """Clone the current user to the new user Operation will create the new user account copying all of the grants for the current user to the new user. If operation fails, an error message is generated and the process halts. new_name[in] MySQL user string (user@host:passwd) destination[in] A connection to a new server to clone the user (default is None) globals_privs[in] Include global privileges in clone (i.e. user@%) Note: Caller must ensure the new user account does not exist. """ res = self.get_grants(globals_privs) server = self.server1 if destination is not None: server = destination for row in res: # Create an instance of the user class. user = User(server, new_user, self.verbosity) if not user.exists(): # Get authentication plugin if different from native plugin auth = self._get_authentication() # Add authentication if available user.create(authentication=auth) if globals_privs and '%' in row[0]: base_user_ticks = "'" + self.user + "'@'" + '%' + "'" else: base_user_ticks = "'" + self.user + "'@'" + self.host + "'" user, _, host = parse_user_host(new_user) new_user_ticks = "'" + user + "'@'" + host + "'" grant = row[0].replace(base_user_ticks, new_user_ticks, 1) # Need to remove the IDENTIFIED BY clause for the base user. search_str = "IDENTIFIED BY PASSWORD" try: start = grant.index(search_str) except: start = 0 if start > 0: end = grant.index("'", start + len(search_str) + 2) + 2 grant = grant[0:start] + grant[end:] if self.verbosity > 0: print grant res = server.exec_query(grant, self.query_options) @staticmethod def _parse_grant_statement(statement, sql_mode=''): """ Returns a namedtuple with the parsed GRANT information. statement[in] Grant string in the sql format returned by the server. Returns named tuple with GRANT information or None. """ grant_parse_re = re.compile(r""" GRANT\s(.+)?\sON\s # grant or list of grants (?:(?:PROCEDURE\s)|(?:FUNCTION\s))? # optional for routines only (?:(?:(\*|`?[^']+`?)\.(\*|`?[^']+`?)) # object where grant applies | ('[^']*'@'[^']*')) # For proxy grants user/host \sTO\s([^@]+@[\S]+) # grantee (?:\sIDENTIFIED\sBY\sPASSWORD (?:(?:\s)|(?:\s\'[^\']+\')?))? # optional pwd (?:\sREQUIRE\sSSL)? # optional SSL (\sWITH\sGRANT\sOPTION)? 
                                                # optional grant option
            $                                   # End of grant statement
            """, re.VERBOSE)
        grant_tpl_factory = namedtuple("grant_info", "privileges proxy_user "
                                                     "db object user")
        match = re.match(grant_parse_re, statement)

        if match:
            # quote database name and object name with backticks
            if match.group(1).upper() != 'PROXY':
                db = match.group(2)
                if not is_quoted_with_backticks(db, sql_mode) and db != '*':
                    db = quote_with_backticks(db, sql_mode)
                obj = match.group(3)
                if not is_quoted_with_backticks(obj, sql_mode) and obj != '*':
                    obj = quote_with_backticks(obj, sql_mode)
            else:
                # it is a proxy grant, so database and object do not apply
                db = obj = None
            grants = grant_tpl_factory(
                # privileges
                set([priv.strip() for priv in match.group(1).split(",")]),
                match.group(4),  # proxied user
                db,              # database
                obj,             # object
                match.group(5),  # user
            )
            # If user has grant option, add it to the list of privileges
            if match.group(6) is not None:
                grants.privileges.add("GRANT OPTION")
        else:
            raise UtilError("Unable to parse grant statement "
                            "{0}".format(statement))

        return grants
mysql-utilities-1.6.4/mysql/utilities/common/variables.py0000644001577100752670000000777712747670311023386 0ustar pb2usercommon#
# Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; version 2 of the License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
#
"""
This module contains classes and functions used to manage user-defined
variables.
"""

import re

from mysql.utilities.common.format import print_dictionary_list


class Variables(dict):
    """
    The Variables class contains user-defined variables for replacement in
    custom commands.
    """

    def __init__(self, options=None, data=None):
        """Constructor

        options[in]        Options dictionary ('width' sets display width)
        data[in]           Data to initialize class
        """
        self.options = options or {}
        self.width = self.options.get('width', 80)
        super(Variables, self).__init__(data or {})

    def find_variable(self, name):
        """Find a variable

        This method searches for a variable in the list and returns it
        if found.

        name[in]           Name of variable

        Returns dict - variable if found, None if not found.
        """
        if name in self:
            return {name: self[name]}
        return None

    def add_variable(self, name, value):
        """Add variable to the list

        name[in]           Name of variable
        value[in]          Value to store
        """
        self[name] = value

    def get_matches(self, prefix):
        """Get a list of variables that match a prefix

        This method returns a list of the variables whose names start with
        the given prefix.

        prefix[in]         Prefix for search

        Returns list - matches or [] for no matches
        """
        result = []
        for key, value in self.iteritems():
            if key.startswith(prefix):
                result.append({key: value})
        return result

    def show_variables(self, variables=None):
        """Display variables

        This method displays the variables included in the list passed, or
        all variables if the list passed is empty.

        variables[in]      List of variables
        """
        if self.options.get("quiet", False):
            return
        var_list = [{'name': key, 'value': value}
                    for key, value in self.iteritems()]
        print "\n"
        if not self:
            print "There are no variables defined.\n"
            return
        print_dictionary_list(['Variable', 'Value'], ['name', 'value'],
                              var_list, self.width)
        print

    def replace_variables(self, cmd_string):
        """Replace all instances of variables with their values.

        This method will search a string for all variables designated by
        the '$' prefix and replace each one with its value from the list.

        cmd_string[in]     String to search

        Returns string - string with variables replaced
        """
        new_cmd = cmd_string
        finds = re.findall(r'\$(\w+)', cmd_string)
        for variable in finds:
            try:
                new_cmd = new_cmd.replace('$' + variable,
                                          str(self[variable]))
            except KeyError:
                # Unknown variables are left in place in the command string.
                pass
        return new_cmd

    def search_by_key(self, pattern):
        """Find values whose keys match a pattern

        pattern[in]        regex pattern

        Yields tuples - (key, value)
        """
        regex = re.compile(pattern)
        for key, value in self.iteritems():
            if regex.match(key):
                yield key, value
mysql-utilities-1.6.4/mysql/utilities/common/ip_parser.py0000644001577100752670000007361612747670311023405 0ustar pb2usercommon#
# Copyright (c) 2010, 2015, Oracle and/or its affiliates. All rights reserved.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; version 2 of the License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
#
"""
This module contains methods designed to support common operations on IP
addresses and hostnames shared among the multiple utilities.

Methods:
  parse_connection()     Parse connection parameters
"""

import re
import os
import logging

from mysql.utilities.exception import UtilError, FormatError
from mysql.utilities.common.my_print_defaults import (MyDefaultsReader,
                                                      my_login_config_exists,
                                                      my_login_config_path)
from mysql.utilities.common.options_parser import MySQLOptionsParser

log = logging.getLogger('ip_parser')

_BAD_CONN_FORMAT = (u"Connection '{0}' cannot be parsed. Please review the "
                    u"used connection string (accepted formats: "
                    u"<user>[:<passwd>]@<host>[:<port>][:<socket>] or "
                    u"<login-path>[:<port>][:<socket>])")

_BAD_QUOTED_HOST = u"Connection '{0}' has a malformed quoted host"

_UNPARSED_CONN_FORMAT = ("Connection '{0}' not parsed completely. Parsed "
                         "elements '{1}', unparsed elements '{2}'")

_CONN_USERPASS = re.compile(
    r"(?P<fquote>[\'\"]?)"    # First quote
    r"(?P<user>.+?)"          # User name
    r"(?:(?P=fquote))"        # First quote match
    r"(?:\:"                  # Optional :
    r"(?P<squote>[\'\"]?)"    # Second quote
    r"(?P<passwd>.+)"         # Password
    r"(?P=squote))"           # Second quote match
    r"|(?P<sfquote>[\'\"]?)"  # Quote on single user name
    r"(?P<suser>.+)"          # Single user name
    r"(?:(?P=sfquote))"       # Quote match on single user name
)

_CONN_QUOTEDHOST = re.compile(
    r"((?:^[\'].*[\'])|(?:^[\"].*[\"]))"  # quoted host name
    r"(?:\:(\d+))?"                       # Optional port number
    r"(?:\:([\/\\w+.\w+.\-]+))?"          # Optional path to socket
)

_CONN_LOGINPATH = re.compile(
    r"((?:\\\"|[^:])+|(?:\\\'|[^:])+)"    # login-path
    r"(?:\:(\d+))?"                       # Optional port number
    r"(?:\:([\/\\w+.\w+.\-]+))?"
# Optional path to socket ) _CONN_CONFIGPATH = re.compile( r"([\w\:]+(?:\\\"|[^[])+|(?:\\\'|[^[])+)" # config-path r"(?:\[([^]]+))?", # group re.U ) _CONN_ANY_HOST = re.compile( r"""([\w\.]*%) (?:\:{0,1}(.*)) # capture all the rest """, re.VERBOSE) _CONN_HOST_NAME = re.compile( r"""( (?: (?: (?: (?!-) # must not start with hyphen '-' (?:[\w\d-])* # must not end with the hyphen [A-Za-z] # starts with a character from the alphabet (?:[\w\d-])* (?: (? 1: comma_idx = message.rfind(",") message = "{0} and {1}".format(message[:comma_idx], message[comma_idx + 1:]) pluralize = "s" if len(missing_options) > 1 else "" raise UtilError("Missing connection value{0} for " "{1} option{0}".format(pluralize, message)) # optional options, available only on config_path_data if config_path_data: ssl_ca = config_path_data.get('ssl-ca', None) ssl_cert = config_path_data.get('ssl-cert', None) ssl_key = config_path_data.get('ssl-key', None) ssl = config_path_data.get('ssl', None) else: if login_path and not config_path: raise UtilError("No login credentials found for login-path: " "{0}. Please review the used connection string" ": {1}".format(login_path, connection_values)) elif not login_path and config_path: raise UtilError("No login credentials found for config-path: " "{0}. Please review the used connection string" ": {1}".format(login_path, connection_values)) elif login_path and config_path: raise UtilError("No login credentials found for either " "login-path: '{0}' nor config-path: '{1}'. " "Please review the used connection string: {2}" "".format(login_path, config_path, connection_values)) elif len(conn_format) == 2: # Handle as in the format: user[:password]@host[:port][:socket] userpass, hostportsock = conn_format # Get user, password match = _CONN_USERPASS.match(userpass) if not match: raise FormatError(_BAD_CONN_FORMAT.format(connection_values)) user = match.group('user') if user is None: # No password provided user = match.group('suser').rstrip(':') passwd = match.group('passwd') # Handle host, port and socket if len(hostportsock) <= 0: raise FormatError(_BAD_CONN_FORMAT.format(connection_values)) if hostportsock[0] in ['"', "'"]: # need to strip the quotes host, port, socket = _match(_CONN_QUOTEDHOST, hostportsock) if host[0] == '"': host = host.strip('"') if host[0] == "'": host = host.strip("'") else: host, port, socket, _ = parse_server_address(hostportsock) else: # Unrecognized format raise FormatError(_BAD_CONN_FORMAT.format(connection_values)) # Get character-set from options if isinstance(options, dict): charset = options.get("charset", None) # If one SSL option was found before, not mix with those in options. if not ssl_cert and not ssl_ca and not ssl_key and not ssl: ssl_cert = options.get("ssl_cert", None) ssl_ca = options.get("ssl_ca", None) ssl_key = options.get("ssl_key", None) ssl = options.get("ssl", None) else: # options is an instance of optparse.Values try: charset = options.charset # pylint: disable=E1103 except AttributeError: charset = None # If one SSL option was found before, not mix with those in options. 
if not ssl_cert and not ssl_ca and not ssl_key and not ssl: try: ssl_cert = options.ssl_cert # pylint: disable=E1103 except AttributeError: ssl_cert = None try: ssl_ca = options.ssl_ca # pylint: disable=E1103 except AttributeError: ssl_ca = None try: ssl_key = options.ssl_key # pylint: disable=E1103 except AttributeError: ssl_key = None try: ssl = options.ssl # pylint: disable=E1103 except AttributeError: ssl = None # Set parsed connection values connection = { "user": user, "host": host, "port": int(port) if port else 3306, "passwd": passwd if passwd else '' } if charset: connection["charset"] = charset if ssl_cert: connection["ssl_cert"] = ssl_cert if ssl_ca: connection["ssl_ca"] = ssl_ca if ssl_key: connection["ssl_key"] = ssl_key if ssl: connection["ssl"] = ssl # Handle optional parameters. They are only stored in the dict if # they were provided in the specifier. if socket is not None and os.name == "posix": connection['unix_socket'] = socket return connection def parse_server_address(connection_str): """Parses host, port and socket from the given connection string. Returns a tuple of (host, port, socket, add_type) where add_type is the name of the parser that successfully parsed the hostname from the connection string. """ # Default values to return. host = None port = None socket = None address_type = None unparsed = None # From the matchers look the one that match a host. for IP_matcher in IP_matchers_list: try: group = _match(IP_matchers[IP_matcher], connection_str) if group: host = group[0] if IP_matcher == ipv6: host = "[%s]" % host if group[1]: part2_port_socket = _match(_CONN_port_ONLY, group[1], trow_error=False) if not part2_port_socket: unparsed = group[1] else: port = part2_port_socket[0] if part2_port_socket[1]: part4 = _match(_CONN_socket_ONLY, part2_port_socket[1], trow_error=False) if not part4: unparsed = part2_port_socket[1] else: socket = part4[0] unparsed = part4[1] # If host is match we stop looking as is the most significant. if host: address_type = IP_matcher break # ignore the error trying to match. except FormatError: pass # we must alert, that the connection could not be parsed. if host is None: raise FormatError(_BAD_CONN_FORMAT.format(connection_str)) _verify_parsing(connection_str, host, port, socket, address_type, unparsed) return host, port, socket, address_type def _verify_parsing(connection_str, host, port, socket, address_type, unparsed): """Verify that the connection string was totally parsed and not parts of it where not matched, otherwise raise an error. 
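    connection_str[in]   Original connection string, used for comparison.
    host[in]             Host parsed from the connection string.
    port[in]             Port parsed from the connection string.
    socket[in]           Socket path parsed from the connection string.
    address_type[in]     Name of the matcher that recognized the host.
    unparsed[in]         Leftover text that no matcher consumed.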
""" exp_connection_str = connection_str log.debug("exp_connection_str {0}".format(exp_connection_str)) parsed_connection_list = [] if host: log.debug("host {0}".format(host)) if address_type == ipv6 and "[" not in connection_str: host = host.replace("[", "") host = host.replace("]", "") parsed_connection_list.append(host) if port: log.debug("port {0}".format(port)) parsed_connection_list.append(port) if socket: log.debug("socket {0}".format(socket)) parsed_connection_list.append(socket) parsed_connection = ":".join(parsed_connection_list) log.debug('parsed_connection {0}'.format(parsed_connection)) diff = None if not unparsed: log.debug('not unparsed found, creating diff') diff = connection_str.replace(host, "") if port: diff = diff.replace(port, "") if socket: diff = diff.replace(socket, "") log.debug("diff {0}".format(diff)) log.debug("unparsed {0}".format(unparsed)) if unparsed or (exp_connection_str != parsed_connection and (diff and diff != ":")): log.debug("raising exception") parsed_args = "host:%s, port:%s, socket:%s" % (host, port, socket) log.debug(_UNPARSED_CONN_FORMAT.format(connection_str, parsed_args, unparsed)) raise FormatError(_UNPARSED_CONN_FORMAT.format(connection_str, parsed_args, unparsed)) def _match(pattern, connection_str, trow_error=True): """Tries to match a pattern with the connection string and returns the groups. """ grp = pattern.match(connection_str) if not grp: if trow_error: raise FormatError(_BAD_CONN_FORMAT.format(connection_str)) return False return grp.groups() def clean_IPv6(host_address): """Clean IPv6 host address """ if host_address: host_address = host_address.replace("[", "") host_address = host_address.replace("]", "") return host_address def format_IPv6(host_address): """Format IPv6 host address """ if host_address: if "]" not in host_address: host_address = "[{0}]".format(host_address) return host_address def parse_login_values_config_path(login_values, quietly=True): """Parse the login values to retrieve the user and password from a configuration file. login_values[in] The login values to be parsed. quietly[in] Do not raise exceptions (Default True). returns parsed (user, password) tuple or (login_values, None) tuple. """ try: matches = _match(_CONN_CONFIGPATH, login_values, trow_error=False) if matches: path = matches[0] group = matches[1] data = handle_config_path(path, group, use_default=False) user = data.get('user', None) passwd = data.get('password', None) return user, passwd except (FormatError, UtilError): if not quietly: raise return login_values, None def find_password(value): """Search for password in a string value[in] String to search for password """ if not type(value) == str: return False # has to have an @ sign if '@' not in value: return False match = _CONN_USERPASS.match(value) if not match: return False if match.group('passwd'): return True return False mysql-utilities-1.6.4/mysql/utilities/common/replication_ms.py0000644001577100752670000006514512747670311024437 0ustar pb2usercommon# # Copyright (c) 2014, 2016 Oracle and/or its affiliates. All rights # reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This file contains the multi-source replication utility. It is used to setup replication among a slave and multiple masters. """ import os import sys import time import logging from mysql.utilities.exception import FormatError, UtilError, UtilRplError from mysql.utilities.common.daemon import Daemon from mysql.utilities.common.format import print_list from mysql.utilities.common.ip_parser import hostname_is_ip from mysql.utilities.common.messages import (ERROR_USER_WITHOUT_PRIVILEGES, ERROR_MIN_SERVER_VERSIONS, HOST_IP_WARNING) from mysql.utilities.common.options import parse_user_password from mysql.utilities.common.server import connect_servers, get_server_state from mysql.utilities.common.replication import Replication, Master, Slave from mysql.utilities.common.topology import Topology from mysql.utilities.common.user import User from mysql.utilities.common.messages import USER_PASSWORD_FORMAT _MIN_SERVER_VERSION = (5, 6, 9) _GTID_LISTS = ["Transactions executed on the servers:", "Transactions purged from the servers:", "Transactions owned by another server:"] _GEN_UUID_COLS = ["host", "port", "role", "uuid"] _GEN_GTID_COLS = ["host", "port", "role", "gtid"] class ReplicationMultiSource(Daemon): """Setup replication among a slave and multiple masters. This class implements a multi-source replication using a round-robin scheduling for setup replication among all masters and slave. This class also implements a POSIX daemon. """ def __init__(self, slave_vals, masters_vals, options): """Constructor. slave_vals[in] Slave server connection dictionary. master_vals[in] List of master server connection dictionaries. options[in] Options dictionary. 
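        Example (illustrative only; the connection dictionaries use the keys
        returned by parse_connection(), and the options dictionary must
        supply the utility's option values, e.g. 'report_values'):

            slave_vals = {'user': 'rpl', 'host': 'slave1', 'port': 3306}
            masters_vals = [
                {'user': 'rpl', 'host': 'master1', 'port': 3306},
                {'user': 'rpl', 'host': 'master2', 'port': 3306},
            ]
            rplms = ReplicationMultiSource(slave_vals, masters_vals, options)
            rplms.run()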
""" pidfile = options.get("pidfile", None) if pidfile is None: pidfile = "./rplms_daemon.pid" super(ReplicationMultiSource, self).__init__(pidfile) self.slave_vals = slave_vals self.masters_vals = masters_vals self.options = options self.quiet = self.options.get("quiet", False) self.logging = self.options.get("logging", False) self.rpl_user = self.options.get("rpl_user", None) self.verbosity = options.get("verbosity", 0) self.interval = options.get("interval", 15) self.switchover_interval = options.get("switchover_interval", 60) self.format = self.options.get("format", False) self.topology = None self.report_values = [ report.lower() for report in self.options["report_values"].split(",") ] # A sys.stdout copy, that can be used later to turn on/off stdout self.stdout_copy = sys.stdout self.stdout_devnull = open(os.devnull, "w") # Disable stdout when running --daemon with start, stop or restart self.daemon = options.get("daemon") if self.daemon: if self.daemon in ("start", "nodetach"): self._report("Starting multi-source replication daemon...", logging.INFO, False) elif self.daemon == "stop": self._report("Stopping multi-source replication daemon...", logging.INFO, False) else: self._report("Restarting multi-source replication daemon...", logging.INFO, False) # Disable stdout sys.stdout = self.stdout_devnull else: self._report("# Starting multi-source replication...", logging.INFO) print("# Press CTRL+C to quit.") # Check server versions try: self._check_server_versions() except UtilError as err: raise UtilRplError(err.errmsg) # Check user privileges try: self._check_privileges() except UtilError as err: msg = "Error checking user privileges: {0}".format(err.errmsg) self._report(msg, logging.CRITICAL, False) raise UtilRplError(err.errmsg) @staticmethod def _reconnect_server(server, pingtime=3): """Tries to reconnect to the server. This method tries to reconnect to the server and if connection fails after 3 attemps, returns False. server[in] Server instance. pingtime[in] Interval between connection attempts. """ if server and server.is_alive(): return True is_connected = False i = 0 while i < 3: try: server.connect() is_connected = True break except UtilError: pass time.sleep(pingtime) i += 1 return is_connected def _get_slave(self): """Get the slave server instance. Returns a Server instance of the slave from the replication topology. """ return self.topology.slaves[0]["instance"] def _get_master(self): """Get the current master server instance. Returns a Server instance of the current master from the replication topology. """ return self.topology.master def _check_server_versions(self): """Checks the server versions. 
""" if self.verbosity > 0: print("# Checking server versions.\n#") # Connection dictionary conn_dict = { "conn_info": None, "quiet": True, "verbose": self.verbosity > 0, } # Check masters version for master_vals in self.masters_vals: conn_dict["conn_info"] = master_vals master = Master(conn_dict) master.connect() if not master.check_version_compat(*_MIN_SERVER_VERSION): raise UtilRplError( ERROR_MIN_SERVER_VERSIONS.format( utility="mysqlrplms", min_version=".".join([str(val) for val in _MIN_SERVER_VERSION]), host=master.host, port=master.port ) ) master.disconnect() # Check slave version conn_dict["conn_info"] = self.slave_vals slave = Slave(conn_dict) slave.connect() if not slave.check_version_compat(*_MIN_SERVER_VERSION): raise UtilRplError( ERROR_MIN_SERVER_VERSIONS.format( utility="mysqlrplms", min_version=".".join([str(val) for val in _MIN_SERVER_VERSION]), host=slave.host, port=slave.port ) ) slave.disconnect() def _check_privileges(self): """Check required privileges to perform the multi-source replication. This method check if the used users for the slave and masters have the required privileges to perform the multi-source replication. The following privileges are required: - on slave: SUPER, SELECT, INSERT, UPDATE, REPLICATION SLAVE AND GRANT OPTION; - on the master: SUPER, SELECT, INSERT, UPDATE, REPLICATION SLAVE AND GRANT OPTION. An exception is thrown if users doesn't have enough privileges. """ if self.verbosity > 0: print("# Checking users privileges for replication.\n#") # Connection dictionary conn_dict = { "conn_info": None, "quiet": True, "verbose": self.verbosity > 0, } # Check privileges for master. master_priv = [('SUPER',), ('SELECT',), ('INSERT',), ('UPDATE',), ('REPLICATION SLAVE',), ('GRANT OPTION',)] master_priv_str = ("SUPER, SELECT, INSERT, UPDATE, REPLICATION SLAVE " "AND GRANT OPTION") for master_vals in self.masters_vals: conn_dict["conn_info"] = master_vals master = Master(conn_dict) master.connect() user_obj = User(master, "{0}@{1}".format(master.user, master.host)) for any_priv_tuple in master_priv: has_privilege = any( [user_obj.has_privilege('*', '*', priv) for priv in any_priv_tuple] ) if not has_privilege: msg = ERROR_USER_WITHOUT_PRIVILEGES.format( user=master.user, host=master.host, port=master.port, operation='perform replication', req_privileges=master_priv_str ) self._report(msg, logging.CRITICAL, False) raise UtilRplError(msg) master.disconnect() # Check privileges for slave slave_priv = [('SUPER',), ('SELECT',), ('INSERT',), ('UPDATE',), ('REPLICATION SLAVE',), ('GRANT OPTION',)] slave_priv_str = ("SUPER, SELECT, INSERT, UPDATE, REPLICATION SLAVE " "AND GRANT OPTION") conn_dict["conn_info"] = self.slave_vals slave = Slave(conn_dict) slave.connect() user_obj = User(slave, "{0}@{1}".format(slave.user, slave.host)) for any_priv_tuple in slave_priv: has_privilege = any( [user_obj.has_privilege('*', '*', priv) for priv in any_priv_tuple] ) if not has_privilege: msg = ("User '{0}' on '{1}@{2}' does not have sufficient " "privileges to perform replication (required: {3})." "".format(slave.user, slave.host, slave.port, slave_priv_str)) self._report(msg, logging.CRITICAL, False) raise UtilRplError(msg) slave.disconnect() def _check_host_references(self): """Check to see if using all host or all IP addresses. Returns bool - True = all references are consistent. 
""" uses_ip = hostname_is_ip(self.topology.master.host) slave = self._get_slave() host_port = slave.get_master_host_port() host = None if host_port: host = host_port[0] if (not host or uses_ip != hostname_is_ip(slave.host) or uses_ip != hostname_is_ip(host)): return False return True def _setup_replication(self, master_vals, use_rpl_setup=True): """Setup replication among a master and a slave. master_vals[in] Master server connection dictionary. use_rpl_setup[in] Use Replication.setup() if True otherwise use switch_master() on the slave. This is used to control the first pass in the masters round-robin scheduling. """ conn_options = { "src_name": "master", "dest_name": "slave", "version": "5.0.0", "unique": True, } (master, slave,) = connect_servers(master_vals, self.slave_vals, conn_options) rpl_options = self.options.copy() rpl_options["verbosity"] = self.verbosity > 0 # Start from beginning only on the first pass if rpl_options.get("from_beginning", False) and not use_rpl_setup: rpl_options["from_beginning"] = False # Create an instance of the replication object rpl = Replication(master, slave, rpl_options) if use_rpl_setup: # Check server ids errors = rpl.check_server_ids() for error in errors: self._report(error, logging.ERROR, True) # Check for server_id uniqueness errors = rpl.check_server_uuids() for error in errors: self._report(error, logging.ERROR, True) # Check InnoDB compatibility errors = rpl.check_innodb_compatibility(self.options) for error in errors: self._report(error, logging.ERROR, True) # Checking storage engines errors = rpl.check_storage_engines(self.options) for error in errors: self._report(error, logging.ERROR, True) # Check master for binary logging errors = rpl.check_master_binlog() if not errors == []: raise UtilRplError(errors[0]) # Setup replication if not rpl.setup(self.rpl_user, 10): msg = "Cannot setup replication." self._report(msg, logging.CRITICAL, False) raise UtilRplError(msg) else: # Parse user and password (support login-paths) try: (r_user, r_pass,) = parse_user_password(self.rpl_user) except FormatError: raise UtilError (USER_PASSWORD_FORMAT.format("--rpl-user")) # Switch master and start slave slave.switch_master(master, r_user, r_pass) slave.start({'fetch': False}) # Disconnect from servers master.disconnect() slave.disconnect() def _switch_master(self, master_vals, use_rpl_setup=True): """Switches replication to a new master. This method stops replication with the old master if exists and starts the replication with a new one. master_vals[in] Master server connection dictionary. use_rpl_setup[in] Used to control the first pass in the masters round-robin scheduling. """ if self.topology: # Stop slave master = self._get_master() if master.is_alive(): master.disconnect() slave = self._get_slave() if not slave.is_alive() and not self._reconnect_server(slave): msg = "Failed to connect to the slave." self._report(msg, logging.CRITICAL, False) raise UtilRplError(msg) slave.stop() slave.disconnect() self._report("# Switching to master '{0}:{1}'." 
"".format(master_vals["host"], master_vals["port"]), logging.INFO, True) try: # Setup replication on the new master self._setup_replication(master_vals, use_rpl_setup) # Create a Topology object self.topology = Topology(master_vals, [self.slave_vals], self.options) except UtilError as err: msg = "Error while switching master: {0}".format(err.errmsg) self._report(msg, logging.CRITICAL, False) raise UtilRplError(err.errmsg) # Only works for GTID_MODE=ON if not self.topology.gtid_enabled(): msg = ("Topology must support global transaction ids and have " "GTID_MODE=ON.") self._report(msg, logging.CRITICAL, False) raise UtilRplError(msg) # Check for mixing IP and hostnames if not self._check_host_references(): print("# WARNING: {0}".format(HOST_IP_WARNING)) self._report(HOST_IP_WARNING, logging.WARN, False) def _report(self, message, level=logging.INFO, print_msg=True): """Log message if logging is on. This method will log the message presented if the log is turned on. Specifically, if options['log_file'] is not None. It will also print the message to stdout. message[in] Message to be printed. level[in] Level of message to log. Default = INFO. print_msg[in] If True, print the message to stdout. Default = True. """ # First, print the message. if print_msg and not self.quiet: print(message) # Now log message if logging turned on if self.logging: logging.log(int(level), message.strip("#").strip(" ")) def _format_health_data(self): """Return health data from topology. Returns tuple - (columns, rows). """ if self.topology: try: health_data = self.topology.get_health() current_master = self._get_master() # Get data for the remaining masters for master_vals in self.masters_vals: # Discard the current master if master_vals["host"] == current_master.host and \ master_vals["port"] == current_master.port: continue # Connect to the master conn_dict = { "conn_info": master_vals, "quiet": True, "verbose": self.verbosity > 0, } master = Master(conn_dict) master.connect() # Get master health rpl_health = master.check_rpl_health() master_data = [ master.host, master.port, "MASTER", get_server_state(master, master.host, 3, self.verbosity > 0), master.supports_gtid(), "OK" if rpl_health[0] else ", ".join(rpl_health[1]), ] # Get master status master_status = master.get_status() if len(master_status): master_log, master_log_pos = master_status[0][0:2] else: master_log = None master_log_pos = 0 # Show additional details if verbosity is turned on if self.verbosity > 0: master_data.extend([master.get_version(), master_log, master_log_pos, "", "", "", "", "", "", "", "", ""]) health_data[1].append(master_data) return health_data except UtilError as err: msg = "Cannot get health data: {0}".format(err) self._report(msg, logging.ERROR, False) raise UtilRplError(msg) return ([], []) def _format_uuid_data(self): """Return the server's uuids. Returns tuple - (columns, rows). """ if self.topology: try: return (_GEN_UUID_COLS, self.topology.get_server_uuids()) except UtilError as err: msg = "Cannot get UUID data: {0}".format(err) self._report(msg, logging.ERROR, False) raise UtilRplError(msg) return ([], []) def _format_gtid_data(self): """Return the GTID information from the topology. Returns tuple - (columns, rows). 
""" if self.topology: try: return (_GEN_GTID_COLS, self.topology.get_gtid_data()) except UtilError as err: msg = "Cannot get GTID data: {0}".format(err) self._report(msg, logging.ERROR, False) raise UtilRplError(msg) return ([], []) def _log_data(self, title, labels, data, print_format=True): """Helper method to log data. title[in] Title to log. labels[in] List of labels. data[in] List of data rows. """ self._report("# {0}".format(title), logging.INFO) for row in data: msg = ", ".join( ["{0}: {1}".format(*col) for col in zip(labels, row)] ) self._report("# {0}".format(msg), logging.INFO, False) if print_format: print_list(sys.stdout, self.format, labels, data) def _log_master_status(self, master): """Logs the master information. master[in] Master server instance. This method logs the master information from SHOW MASTER STATUS. """ # If no master present, don't print anything. if master is None: return print("#") self._report("# {0}:".format("Current Master Information"), logging.INFO) try: status = master.get_status()[0] except UtilError: msg = "Cannot get master status" self._report(msg, logging.ERROR, False) raise UtilRplError(msg) cols = ("Binary Log File", "Position", "Binlog_Do_DB", "Binlog_Ignore_DB") rows = (status[0] or "N/A", status[1] or "N/A", status[2] or "N/A", status[3] or "N/A") print_list(sys.stdout, self.format, cols, [rows]) self._report("# {0}".format( ", ".join(["{0}: {1}".format(*item) for item in zip(cols, rows)]), ), logging.INFO, False) # Display gtid executed set master_gtids = [] for gtid in status[4].split("\n"): if gtid: # Add each GTID to a tuple to match the required format to # print the full GRID list correctly. master_gtids.append((gtid.strip(","),)) try: if len(master_gtids) > 1: gtid_executed = "{0}[...]".format(master_gtids[0][0]) else: gtid_executed = master_gtids[0][0] except IndexError: gtid_executed = "None" self._report("# GTID Executed Set: {0}".format(gtid_executed), logging.INFO) def stop_replication(self): """Stops multi-source replication. Stop the slave if topology is available. """ if self.topology: # Get the slave instance slave = self._get_slave() # If slave is not connected, try to reconnect and stop replication if self._reconnect_server(slave): slave.stop() slave.disconnect() if self.daemon: self._report("Multi-source replication daemon stopped.", logging.INFO, False) else: print("") self._report("# Multi-source replication stopped.", logging.INFO, True) def stop(self): """Stops the daemon. Stop slave if topology is available and then stop the daemon. """ self.stop_replication() super(ReplicationMultiSource, self).stop() def run(self): """Run the multi-source replication using the round-robin scheduling. This method implements the multi-source replication by using time slices for each master. 
""" num_masters = len(self.masters_vals) use_rpl_setup = True while True: # Round-robin scheduling on the masters for idx in range(num_masters): # Get the new master values and switch for the next one try: master_vals = self.masters_vals[idx] self._switch_master(master_vals, use_rpl_setup) except UtilError as err: msg = ("Error while switching master: {0}" "".format(err.errmsg)) self._report(msg, logging.CRITICAL, False) raise UtilRplError(msg) # Get the new master and slave instances master = self._get_master() slave = self._get_slave() switchover_timeout = time.time() + self.switchover_interval while switchover_timeout > time.time(): # If servers not connected, try to reconnect if not self._reconnect_server(master): msg = ("Failed to connect to the master '{0}:{1}'." "".format(master_vals["host"], master_vals["port"])) self._report(msg, logging.CRITICAL, False) raise UtilRplError(msg) if not self._reconnect_server(slave): msg = "Failed to connect to the slave." self._report(msg, logging.CRITICAL, False) raise UtilRplError(msg) # Report self._log_master_status(master) if "health" in self.report_values: (health_labels, health_data,) = \ self._format_health_data() if health_data: print("#") self._log_data("Health Status:", health_labels, health_data) if "gtid" in self.report_values: (gtid_labels, gtid_data,) = self._format_gtid_data() for i, row in enumerate(gtid_data): if row: print("#") self._log_data("GTID Status - {0}" "".format(_GTID_LISTS[i]), gtid_labels, row) if "uuid" in self.report_values: (uuid_labels, uuid_data,) = self._format_uuid_data() if uuid_data: print("#") self._log_data("UUID Status:", uuid_labels, uuid_data) # Disconnect servers master.disconnect() slave.disconnect() # Wait for reporting interval time.sleep(self.interval) # Use Replication.setup() only for the first round use_rpl_setup = False mysql-utilities-1.6.4/mysql/utilities/common/audit_log_parser.py0000644001577100752670000002727312747670311024752 0ustar pb2usercommon# # Copyright (c) 2012, 2015, Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This file contains features to parse an audit log file, including searching and displaying the results. """ import re from mysql.utilities.common.audit_log_reader import AuditLogReader from mysql.utilities.exception import UtilError class AuditLogParser(AuditLogReader): """The AuditLogParser class is used to parse the audit log file, applying search criterion and filtering the logged data. """ def __init__(self, options): """Constructor options[in] dictionary of options (e.g. 
log_name and verbosity) """ self.options = options AuditLogReader.__init__(self, options) self.header_rows = [] self.connects = [] self.rows = [] self.connection_ids = [] # Compile regexp pattern self.regexp_pattern = None if self.options['pattern']: try: self.regexp_pattern = re.compile(self.options['pattern']) except: raise UtilError("Invalid Pattern: " + self.options['pattern']) # Add a space after the query type to reduce false positives. # Note: Although not perfect, this simple trick considerably reduce # false positives, avoiding the use of complex regex (with lower # performance). self.match_qtypes = [] # list of matching SQL statement/command types. self.regexp_comment = None self.regexp_quoted = None self.regexp_backtick = None if self.options['query_type']: # Generate strings to match query types for qt in self.options['query_type']: if qt == "commit": # COMMIT is an exception (can appear alone without spaces) self.match_qtypes.append(qt) else: self.match_qtypes.append("{0} ".format(qt)) # Compile regexp to match comments (/*...*/) to be ignored/removed. self.regexp_comment = re.compile(r'/\*.*?\*/', re.DOTALL) # Compile regexp to match single quoted text ('...') to be ignored. self.regexp_quoted = re.compile(r"'.*?'", re.DOTALL) # Compile regexp to match text between backticks (`) to be ignored. self.regexp_backtick = re.compile(r'`.*?`', re.DOTALL) def parse_log(self): """Parse audit log records, apply search criteria and store results. """ # Find and store records matching search criteria for record, line in self.get_next_record(): name = record.get("NAME") name_case = name.upper() # The variable matching_record is used to avoid unnecessary # executions the match_* function of the remaining search criteria # to check, as it suffice that one match fails to not store the # records in the results. This implementation technique was applied # to avoid the use of too deep nested if-else statements that will # make the code more complex and difficult to read and understand, # trying to optimize the execution performance. 
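            # A single failed match below is enough to discard the record,
            # so the remaining (more expensive) checks are skipped as soon
            # as matching_record becomes False.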
matching_record = True if name_case == 'AUDIT': # Store audit start record self.header_rows.append(record) # Apply filters and search criteria if self.options['users']: self._track_new_users_connection_id(record, name_case) # Check if record matches users search criteria if not self.match_users(record): matching_record = False # Check if record matches event type criteria if (matching_record and self.options['event_type'] and not self.match_event_type(record, self.options['event_type'])): matching_record = False # Check if record matches status criteria if (matching_record and self.options['status'] and not self.match_status(record, self.options['status'])): matching_record = False # Check if record matches datetime range criteria if (matching_record and not self.match_datetime_range(record, self.options['start_date'], self.options['end_date'])): matching_record = False # Check if record matches query type criteria if (matching_record and self.options['query_type'] and not self.match_query_type(record)): matching_record = False # Search attributes values for matching pattern if (matching_record and self.regexp_pattern and not self.match_pattern(record)): matching_record = False # Store record into resulting rows (i.e., survived defined filters) if matching_record: if self.options['format'] == 'raw': self.rows.append(line) else: self.rows.append(record) def retrieve_rows(self): """Retrieve the resulting entries from the log parsing process """ return self.rows if self.rows != [] else None def _track_new_users_connection_id(self, record, name_upper): """Track CONNECT records and store information of users and associated connection IDs. """ user = record.get("USER", None) priv_user = record.get("PRIV_USER", None) # Register new connection_id (and corresponding user) if (name_upper.upper() == "CONNECT" and (user and (user in self.options['users'])) or (priv_user and (priv_user in self.options['users']))): self.connection_ids.append((user, priv_user, record.get("CONNECTION_ID"))) def match_users(self, record): """Match users. Check if the given record match the user search criteria. Returns True if the record matches one of the specified users. record[in] audit log record to check """ for con_id in self.connection_ids: if record.get('CONNECTION_ID', None) == con_id[2]: # Add user columns record['USER'] = con_id[0] record['PRIV_USER'] = con_id[1] # Add server_id column if self.header_rows: record['SERVER_ID'] = self.header_rows[0]['SERVER_ID'] return True return False @staticmethod def match_datetime_range(record, start_date, end_date): """Match date/time range. Check if the given record match the datetime range criteria. Returns True if the record matches the specified date range. record[in] audit log record to check; start_date[in] start date/time of the record (inclusive); end_date[in] end date/time of the record (inclusive); """ if (start_date and (record.get('TIMESTAMP', None) < start_date)) or \ (end_date and (end_date < record.get('TIMESTAMP', None))): # Not within datetime range return False else: return True def match_pattern(self, record): """Match REGEXP pattern. Check if the given record matches the defined pattern. Returns True if one of the record values matches the pattern. record[in] audit log record to check; """ for val in record.values(): if val and self.regexp_pattern.match(val): return True return False def match_query_type(self, record): """Match query types. Check if the given record matches one of the given query types. 
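        For example (illustrative), with query_type=['insert'] a record
        whose SQLTEXT is "INSERT INTO t1 VALUES (1)" matches, whereas
        "SET @a = 'insert into'" does not, because single-quoted text is
        stripped before the comparison.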
Returns True if the record possesses a SQL statement/command that matches one of the query types from the given list of query types. record[in] audit log record to check; """ sqltext = record.get('SQLTEXT', None) if sqltext: # Ignore (i.e., remove) comments in query. if self.regexp_comment: sqltext = re.sub(self.regexp_comment, '', sqltext) # Ignore (i.e., remove) quoted text in query. if self.regexp_quoted: sqltext = re.sub(self.regexp_quoted, '', sqltext) # Ignore (i.e., remove) names quoted with backticks in query. if self.regexp_backtick: sqltext = re.sub(self.regexp_backtick, '', sqltext) # Search query types strings inside text. sqltext = sqltext.lower() for qtype in self.match_qtypes: # Handle specific query-types to avoid false positives. if (qtype.startswith('set') and ('insert ' in sqltext or 'update ' in sqltext)): # Do not match SET in INSERT or UPDATE queries. continue if (qtype.startswith('prepare') and ('drop ' in sqltext or 'deallocate ' in sqltext)): # Do not match PREPARE in DROP or DEALLOCATE queries. continue # Check if query type is found. if qtype in sqltext: return True return False @staticmethod def match_event_type(record, event_types): """Match audit log event/record type. Check if the given record matches one of the given event types. Returns True if the record type (i.e., logged event) matches one of the types from the given list of event types. record[in] audit log record to check; event_types[in] list of matching record/event types; """ name = record.get('NAME').lower() if name in event_types: return True else: return False @staticmethod def match_status(record, status_list): """Match the record status. Check if the given record match the specified status criteria. record[in] audit log record to check; status_list[in] list of status values or intervals (representing MySQL error codes) to match; Returns True if the record status matches one of the specified values or intervals in the list. """ rec_status = record.get('STATUS', None) if rec_status: rec_status = int(rec_status) for status_val in status_list: # Check if the status value is an interval (tuple) or int if isinstance(status_val, tuple): # It is an interval; Check if it contains the record # status. if status_val[0] <= rec_status <= status_val[1]: return True else: # Directly check if the status match (is equal). if rec_status == status_val: return True return False mysql-utilities-1.6.4/mysql/utilities/common/__init__.py0000755001577100752670000000003512747670311023154 0ustar pb2usercommon"""mysql.utilities.common""" mysql-utilities-1.6.4/mysql/utilities/common/lock.py0000644001577100752670000001255312747670311022352 0ustar pb2usercommon# # Copyright (c) 2011, 2012, 2013, Oracle and/or its affiliates. All rights # reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This file contains the methods for checking consistency among two databases. 
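It provides the Lock class, which the comparison utilities use to lock and
unlock the tables being checked. Example (illustrative; ``server`` is a
connected Server instance):

    lock = Lock(server, [('db1.t1', 'READ'), ('db1.t2', 'WRITE')],
                {'locking': 'lock-all'})
    # ... read/write the locked tables ...
    lock.unlock()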
""" from mysql.utilities.exception import UtilError, UtilDBError # The following are the queries needed to perform table locking. LOCK_TYPES = ['READ', 'WRITE'] _SESSION_ISOLATION_LEVEL = \ "SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ" _START_TRANSACTION = "START TRANSACTION WITH CONSISTENT SNAPSHOT" _LOCK_WARNING = "WARNING: Lock in progress. You must call unlock() " + \ "to unlock your tables." _FLUSH_TABLES_READ_LOCK = "FLUSH TABLES WITH READ LOCK" class Lock(object): """Lock """ def __init__(self, server, table_list, options=None): """Constructor Lock a list of tables based on locking type. Locking types and their behavior is as follows: - (default) use consistent read with a single transaction - lock all tables without consistent read and no transaction - no locks, no transaction, no consistent read - flush (replication only) - issue a FTWRL command server[in] Server instance of server to run locks table_list[in] list of tuples (table_name, lock_type) options[in] dictionary of options locking = [snapshot|lock-all|no-locks|flush], verbosity int silent bool rpl_mode string """ if options is None: options = {} self.locked = False self.silent = options.get('silent', False) # Determine locking type self.locking = options.get('locking', 'snapshot') self.verbosity = options.get('verbosity', 0) if self.verbosity is None: self.verbosity = 0 else: self.verbosity = int(self.verbosity) self.server = server self.table_list = table_list self.query_opts = {'fetch': False, 'commit': False} # If no locking, we're done if self.locking == 'no-locks': return elif self.locking == 'lock-all': # Check lock requests for validity table_locks = [] for tablename, locktype in table_list: if locktype.upper() not in LOCK_TYPES: raise UtilDBError("Invalid lock type '%s' for table '%s'." % (locktype, tablename)) # Build LOCK TABLE command table_locks.append("%s %s" % (tablename, locktype)) lock_str = "LOCK TABLE " lock_str += ', '.join(table_locks) if self.verbosity >= 3 and not self.silent: print '# LOCK STRING:', lock_str # Execute the lock self.server.exec_query(lock_str, self.query_opts) self.locked = True elif self.locking == 'snapshot': self.server.exec_query(_SESSION_ISOLATION_LEVEL, self.query_opts) self.server.exec_query(_START_TRANSACTION, self.query_opts) # Execute a FLUSH TABLES WITH READ LOCK for replication uses only elif self.locking == 'flush' and options.get("rpl_mode", None): if self.verbosity >= 3 and not self.silent: print "# LOCK STRING: %s" % _FLUSH_TABLES_READ_LOCK self.server.exec_query(_FLUSH_TABLES_READ_LOCK, self.query_opts) self.locked = True else: raise UtilError("Invalid locking type: '%s'." % self.locking) def __del__(self): """Destructor Returns string - warning if the lock has not been disengaged. """ if self.locked: return _LOCK_WARNING return None def unlock(self, abort=False): """Release the table lock. 
""" if not self.locked: return if self.verbosity >= 3 and not self.silent and \ self.locking != 'no-locks': print "# UNLOCK STRING:", # Call unlock: if self.locking in ['lock-all', 'flush']: if self.verbosity >= 3 and not self.silent: print "UNLOCK TABLES" self.server.exec_query("UNLOCK TABLES", self.query_opts) self.locked = False # Stop transaction if locking == 0 elif self.locking == 'snapshot': if not abort: if self.verbosity >= 3 and not self.silent: print "COMMIT" self.server.exec_query("COMMIT", self.query_opts) else: self.server.exec_queery("ROLLBACK", self.query_opts) if self.verbosity >= 3 and not self.silent: print "ROLLBACK" mysql-utilities-1.6.4/mysql/utilities/common/tools.py0000644001577100752670000005236212747670311022564 0ustar pb2usercommon# # Copyright (c) 2010, 2016, Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This module contains methods for working with mysql server tools. """ import inspect import os import re import sys import shlex import shutil import socket import subprocess import time try: import ctypes except ImportError: pass from mysql.utilities import (PYTHON_MIN_VERSION, PYTHON_MAX_VERSION, CONNECTOR_MIN_VERSION) from mysql.utilities.exception import UtilError def _add_basedir(search_paths, path_str): """Add a basedir and all known sub directories This method builds a list of possible paths for a basedir for locating special MySQL files like mysqld (mysqld.exe), etc. search_paths[inout] List of paths to append path_str[in] The basedir path to append """ search_paths.append(path_str) search_paths.append(os.path.join(path_str, "sql")) # for source trees search_paths.append(os.path.join(path_str, "client")) # for source trees search_paths.append(os.path.join(path_str, "share")) search_paths.append(os.path.join(path_str, "scripts")) search_paths.append(os.path.join(path_str, "bin")) search_paths.append(os.path.join(path_str, "libexec")) search_paths.append(os.path.join(path_str, "mysql")) def get_tool_path(basedir, tool, fix_ext=True, required=True, defaults_paths=None, search_PATH=False, quote=False): """Search for a MySQL tool and return the full path basedir[in] The initial basedir to search (from mysql server) tool[in] The name of the tool to find fix_ext[in] If True (default is True), add .exe if running on Windows. required[in] If True (default is True), and error will be generated and the utility aborted if the tool is not found. defaults_paths[in] Default list of paths to search for the tool. By default an empty list is assumed, i.e. []. search_PATH[in] Boolean value that indicates if the paths specified by the PATH environment variable will be used to search for the tool. By default the PATH will not be searched, i.e. search_PATH=False. quote[in] If True, the result path is surrounded with the OS quotes. 
Returns (string) full path to tool """ if not defaults_paths: defaults_paths = [] search_paths = [] if quote: if os.name == "posix": quote_char = "'" else: quote_char = '"' else: quote_char = '' if basedir: # Add specified basedir path to search paths _add_basedir(search_paths, basedir) if defaults_paths and len(defaults_paths): # Add specified default paths to search paths for path in defaults_paths: search_paths.append(path) else: # Add default basedir paths to search paths _add_basedir(search_paths, "/usr/local/mysql/") _add_basedir(search_paths, "/usr/sbin/") _add_basedir(search_paths, "/usr/share/") # Search in path from the PATH environment variable if search_PATH: for path in os.environ['PATH'].split(os.pathsep): search_paths.append(path) if os.name == "nt" and fix_ext: tool = tool + ".exe" # Search for the tool for path in search_paths: norm_path = os.path.normpath(path) if os.path.isdir(norm_path): toolpath = os.path.join(norm_path, tool) if os.path.isfile(toolpath): return r"%s%s%s" % (quote_char, toolpath, quote_char) else: if tool == "mysqld.exe": toolpath = os.path.join(norm_path, "mysqld-nt.exe") if os.path.isfile(toolpath): return r"%s%s%s" % (quote_char, toolpath, quote_char) if required: raise UtilError("Cannot find location of %s." % tool) return None def delete_directory(path): """Remove a directory (folder) and its contents. path[in] target directory """ if os.path.exists(path): # It can take up to 10 seconds for Windows to 'release' a directory # once a process has terminated. We wait... if os.name == "nt": stop = 10 i = 1 while i < stop and os.path.exists(path): shutil.rmtree(path, True) time.sleep(1) i += 1 else: shutil.rmtree(path, True) def estimate_free_space(path, unit_multiple=2): """Estimated free space for the given path. Calculates free space for the given path, returning the value on the size given by the unit_multiple. path[in] the path to calculate the free space for. unit_multiple[in] the unit size given as a multiple. Accepts int values > to zero. Size unit_multiple bytes 0 Kilobytes 1 Megabytes 2 Gigabytes 3 and so on... Returns folder/drive free space (in bytes) """ unit_size = 1024 ** unit_multiple if os.name == 'nt': free_bytes = ctypes.c_ulonglong(0) ctypes.windll.kernel32.GetDiskFreeSpaceExW(ctypes.c_wchar_p(path), None, None, ctypes.pointer(free_bytes)) return free_bytes.value / unit_size else: st = os.statvfs(path) # pylint: disable=E1101 return st.f_bavail * st.f_frsize / unit_size def execute_script(run_cmd, filename=None, options=None, verbosity=False): """Execute a script. This method spawns a subprocess to execute a script. If a file is specified, it will direct output to that file else it will suppress all output from the script. run_cmd[in] command/script to execute filename[in] file path name to file, os.stdout, etc. 
Default is None (do not log/write output) options[in] arguments for script Default is no arguments ([]) verbosity[in] show result of script Default is False Returns int - result from process execution """ if options is None: options = [] if verbosity: f_out = sys.stdout else: if not filename: filename = os.devnull f_out = open(filename, 'w') is_posix = (os.name == "posix") command = shlex.split(run_cmd, posix=is_posix) if options: command.extend([str(opt) for opt in options]) if verbosity: print("# SCRIPT EXECUTED: {0}".format(command)) try: proc = subprocess.Popen(command, shell=False, stdout=f_out, stderr=f_out) except OSError: _, err, _ = sys.exc_info() raise UtilError(str(err)) ret_val = proc.wait() if not verbosity: f_out.close() return ret_val def ping_host(host, timeout): """Execute 'ping' against host to see if it is alive. host[in] hostname or IP to ping timeout[in] timeout in seconds to wait returns bool - True = host is reachable via ping """ if sys.platform == "darwin": run_cmd = "ping -o -t %s %s" % (timeout, host) elif os.name == "posix": run_cmd = "ping -w %s %s" % (timeout, host) else: # must be windows run_cmd = "ping -n %s %s" % (timeout, host) ret_val = execute_script(run_cmd) return (ret_val == 0) def parse_mysqld_version(vers_str): pattern = r"mysqld(?:\.exe)?\s+Ver\s+(\d+\.\d+\.\S+)\s" match = re.search(pattern, vers_str) if not match: return None version = match.group(1) num_dots = vers_str.count('.') try: # get the version digits. If more than 2, we get first 3 parts if num_dots == 2: maj_ver, min_ver, dev = version.split(".", 2) else: maj_ver, min_ver, dev, __ = version.split(".", 3) rel = dev.split("-", 1) return (maj_ver, min_ver, rel[0]) except: return None def get_mysqld_version(mysqld_path): """Return the version number for a mysqld executable. mysqld_path[in] location of the mysqld executable Returns tuple - (major, minor, release), or None if error """ out = open("version_check", 'w') proc = subprocess.Popen("%s --version" % mysqld_path, stdout=out, stderr=out, shell=True) proc.wait() out.close() out = open("version_check", 'r') line = None for line in out.readlines(): if "Ver" in line: break out.close() try: os.unlink('version_check') except: pass if line is None: return None return parse_mysqld_version(line) def show_file_statistics(file_name, wild=False, out_format="GRID"): """Show file statistics for file name specified file_name[in] target file name and path wild[in] if True, get file statistics for all files with prefix of file_name. Default is False out_format[in] output format to print file statistics. Default is GRID. """ def _get_file_stats(path, file_name): """Return file stats """ stats = os.stat(os.path.join(path, file_name)) return ((file_name, stats.st_size, time.ctime(stats.st_ctime), time.ctime(stats.st_mtime))) columns = ["File", "Size", "Created", "Last Modified"] rows = [] path, filename = os.path.split(file_name) if wild: for _, _, files in os.walk(path): for f in files: if f.startswith(filename): rows.append(_get_file_stats(path, f)) else: rows.append(_get_file_stats(path, filename)) # Local import is needed because of Python compability issues from mysql.utilities.common.format import print_list print_list(sys.stdout, out_format, columns, rows) def remote_copy(filepath, user, host, local_path, verbosity=0): """Copy a file from a remote machine to the localhost. 
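    The copy is performed with scp and is therefore only attempted on POSIX
    systems; on other platforms a message is printed instead.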
filepath[in] The full path and file name of the file on the remote machine user[in] Remote login local_path[in] The path to where the file is to be copie Returns bool - True = succes, False = failure or exception """ if os.name == "posix": # use scp run_cmd = "scp %s@%s:%s %s" % (user, host, filepath, local_path) if verbosity > 1: print("# Command =%s" % run_cmd) print("# Copying file from %s:%s to %s:" % (host, filepath, local_path)) proc = subprocess.Popen(run_cmd, shell=True) proc.wait() else: print("Remote copy not supported. Please use UNC paths and omit " "the --remote-login option to use a local copy operation.") return True def check_python_version(min_version=PYTHON_MIN_VERSION, max_version=PYTHON_MAX_VERSION, raise_exception_on_fail=False, name=None, print_on_fail=True, exit_on_fail=True, return_error_msg=False): """Check the Python version compatibility. By default this method uses constants to define the minimum and maximum Python versions required. It's possible to override this by passing new values on ``min_version`` and ``max_version`` parameters. It will run a ``sys.exit`` or raise a ``UtilError`` if the version of Python detected it not compatible. min_version[in] Tuple with the minimum Python version required (inclusive). max_version[in] Tuple with the maximum Python version required (exclusive). raise_exception_on_fail[in] Boolean, it will raise a ``UtilError`` if True and Python detected is not compatible. name[in] String for a custom name, if not provided will get the module name from where this function was called. print_on_fail[in] If True, print error else do not print error on failure. exit_on_fail[in] If True, issue exit() else do not exit() on failure. return_error_msg[in] If True, and is not compatible returns (result, error_msg) tuple. """ # Only use the fields: major, minor and micro sys_version = sys.version_info[:3] # Test min version compatibility is_compat = min_version <= sys_version # Test max version compatibility if it's defined if is_compat and max_version: is_compat = sys_version < max_version if not is_compat: if not name: # Get the utility name by finding the module # name from where this function was called frm = inspect.stack()[1] mod = inspect.getmodule(frm[0]) mod_name = os.path.splitext( os.path.basename(mod.__file__))[0] name = '%s utility' % mod_name # Build the error message if max_version: max_version_error_msg = 'or higher and lower than %s' % \ '.'.join([str(el) for el in max_version]) else: max_version_error_msg = 'or higher' error_msg = ( 'The %(name)s requires Python version %(min_version)s ' '%(max_version_error_msg)s. The version of Python detected was ' '%(sys_version)s. You may need to install or redirect the ' 'execution of this utility to an environment that includes a ' 'compatible Python version.' ) % { 'name': name, 'sys_version': '.'.join([str(el) for el in sys_version]), 'min_version': '.'.join([str(el) for el in min_version]), 'max_version_error_msg': max_version_error_msg } if raise_exception_on_fail: raise UtilError(error_msg) if print_on_fail: print('ERROR: %s' % error_msg) if exit_on_fail: sys.exit(1) if return_error_msg: return is_compat, error_msg return is_compat def check_port_in_use(host, port): """Check to see if port is in use. 
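    The check simply attempts a TCP connection to host:port; if the
    connection attempt fails, the port is assumed to be free.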
    host[in]            Hostname or IP to check
    port[in]            Port number to check

    Returns bool - True = port is available, False = port is in use
    """
    try:
        sock = socket.create_connection((host, port))
    except socket.error:
        return True
    sock.close()
    return False


def requires_encoding(orig_str):
    r"""Check to see if a string requires encoding

    This method will check to see if a string requires encoding to be
    used as a MySQL file name (r"[\w$]*").

    orig_str[in]       original string

    Returns bool - True = requires encoding, False = does not require encoding
    """
    ok_chars = re.compile(r"[\w$]*")
    parts = ok_chars.findall(orig_str)
    return len(parts) > 2 and parts[1].strip() == ''


def encode(orig_str):
    r"""Encode a string containing non-MySQL observed characters

    This method will take a string containing characters other than those
    recognized by MySQL (r"[\w$]*") and convert them to embedded ascii
    values. For example, "this.has.periods" becomes
    "this@002ehas@002eperiods"

    orig_str[in]       original string

    Returns string - encoded string or original string
    """
    # First, find the parts that match valid characters
    ok_chars = re.compile(r"[\w$]*")
    parts = ok_chars.findall(orig_str)

    # Now find each part that does not match the list of valid characters
    # Save the good parts
    i = 0
    encode_parts = []
    good_parts = []
    for part in parts:
        if not len(part):
            continue
        good_parts.append(part)
        if i == 0:
            i = len(part)
        else:
            j = orig_str[i:].find(part)
            encode_parts.append(orig_str[i:i + j])
            i += len(part) + j

    # Next, convert the non-valid parts to the form @NNNN (hex)
    encoded_parts = []
    for part in encode_parts:
        new_part = "".join(["@%04x" % ord(c) for c in part])
        encoded_parts.append(new_part)

    # Take the good parts and the encoded parts and reform the string
    i = 0
    new_parts = []
    for part in good_parts[:len(good_parts) - 1]:
        new_parts.append(part)
        new_parts.append(encoded_parts[i])
        i += 1
    new_parts.append(good_parts[len(good_parts) - 1])

    # Return the new string
    return "".join(new_parts)


def requires_decoding(orig_str):
    """Check to see if a string requires decoding

    This method will check to see if a string requires decoding to be used
    as a filename (has @NNNN entries)

    orig_str[in]       original string

    Returns bool - True = requires decoding, False = does not require decoding
    """
    return '@' in orig_str


def decode(orig_str):
    r"""Decode a string containing @NNNN entries

    This method will take a string containing @NNNN entries and convert
    them back to character values. For example,
    "this@002ehas@002eperiods" becomes "this.has.periods".

    orig_str[in]       original string

    Returns string - decoded string or original string
    """
    parts = orig_str.split('@')
    if len(parts) == 1:
        return orig_str
    new_parts = [parts[0]]
    for part in parts[1:]:
        # take first four positions and convert to ascii
        new_parts.append(chr(int(part[0:4], 16)))
        new_parts.append(part[4:])
    return "".join(new_parts)


def check_connector_python(print_error=True,
                           min_version=CONNECTOR_MIN_VERSION):
    """Check to see if Connector Python is installed and accessible and
    meets minimum required version.

    By default this method uses constants to define the minimum
    C/Python version required. It's possible to override this by passing
    a new value to ``min_version`` parameter.

    print_error[in]     If True, print error else do not print error on
                        failure.
    min_version[in]     Tuple with the minimum C/Python version required
                        (inclusive).
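    Returns bool - True if Connector/Python is installed and meets the
    minimum required version, False otherwise.

    Example (a minimal sketch):

        if not check_connector_python():
            sys.exit(1)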
""" is_compatible = True try: import mysql.connector # pylint: disable=W0612 except ImportError: if print_error: print("ERROR: The MySQL Connector/Python module was not found. " "MySQL Utilities requires the connector to be installed. " "Please check your paths or download and install the " "Connector/Python from http://dev.mysql.com.") return False else: try: sys_version = mysql.connector.version.VERSION[:3] except AttributeError: is_compatible = False if is_compatible and sys_version >= min_version: return True else: if print_error: print("ERROR: The MYSQL Connector/Python module was found " "but it is either not properly installed or it is an " "old version. MySQL Utilities requires Connector/Python " "version > '{0}'. Download and install Connector/Python " "from http://dev.mysql.com.".format(min_version)) return False def print_elapsed_time(start_time): """Print the elapsed time to stdout (screen) start_time[in] The starting time of the test """ stop_time = time.time() display_time = stop_time - start_time print("Time: {0:.2f} sec\n".format(display_time)) def join_and_build_str(list_of_strings, sep=', ', last_sep='and'): """Buils and returns a string from a list of elems. list_of_strings[in] the list of strings that will be joined into a single string. sep[in] the separator that will be used to group all strings except the last one. last_sep[in] the separator that is used in last place """ if list_of_strings: if len(list_of_strings) > 1: res_str = "{0} {1} {2}".format( sep.join(list_of_strings[:-1]), last_sep, list_of_strings[-1]) else: # list has a single elem res_str = list_of_strings[0] else: # if list_of_str is empty, return empty string res_str = "" return res_str mysql-utilities-1.6.4/mysql/utilities/common/topology.py0000644001577100752670000030433512747670311023300 0ustar pb2usercommon# # Copyright (c) 2010, 2016, Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This module contains abstractions of MySQL replication functionality. 
""" import sys import logging import time import operator from multiprocessing.pool import ThreadPool from mysql.utilities.exception import FormatError, UtilError, UtilRplError from mysql.utilities.common.lock import Lock from mysql.utilities.common.my_print_defaults import MyDefaultsReader from mysql.utilities.common.ip_parser import parse_connection from mysql.utilities.common.options import parse_user_password from mysql.utilities.common.replication import Master, Slave, Replication from mysql.utilities.common.tools import execute_script from mysql.utilities.common.format import print_list from mysql.utilities.common.user import User from mysql.utilities.common.server import (get_server_state, get_server, get_connection_dictionary, log_server_version) from mysql.utilities.common.messages import USER_PASSWORD_FORMAT _HEALTH_COLS = ["host", "port", "role", "state", "gtid_mode", "health"] _HEALTH_DETAIL_COLS = ["version", "master_log_file", "master_log_pos", "IO_Thread", "SQL_Thread", "Secs_Behind", "Remaining_Delay", "IO_Error_Num", "IO_Error", "SQL_Error_Num", "SQL_Error", "Trans_Behind"] _GTID_EXECUTED = "SELECT @@GLOBAL.GTID_EXECUTED" _GTID_WAIT = "SELECT WAIT_UNTIL_SQL_THREAD_AFTER_GTIDS('%s', %s)" _GTID_SUBTRACT_TO_EXECUTED = ("SELECT GTID_SUBTRACT('{0}', " "@@GLOBAL.GTID_EXECUTED)") # TODO: Remove the use of PASSWORD(), depercated from 5.7.6. _UPDATE_RPL_USER_QUERY = ("UPDATE mysql.user " "SET password = PASSWORD('{passwd}')" "where user ='{user}'") # Query for server versions >= 5.7.6. _UPDATE_RPL_USER_QUERY_5_7_6 = ( "UPDATE mysql.user SET authentication_string = PASSWORD('{passwd}') " "WHERE user = '{user}'") _SELECT_RPL_USER_PASS_QUERY = ("SELECT user, host, grant_priv, password, " "Repl_slave_priv FROM mysql.user " "WHERE user ='{user}' AND host ='{host}'") # Query for server versions >= 5.7.6. _SELECT_RPL_USER_PASS_QUERY_5_7_6 = ( "SELECT user, host, grant_priv, authentication_string, " "Repl_slave_priv FROM mysql.user WHERE user ='{user}' AND host ='{host}'") def parse_topology_connections(options, parse_candidates=True): """Parse the --master, --slaves, and --candidates options This method returns a tuple with server connection dictionaries for the master, slaves, and candidates lists. If no master, will return (None, ...) for master element. If no slaves, will return (..., [], ...) for slaves element. If no canidates, will return (..., ..., []) for canidates element. Will raise error if cannot parse connection options. options[in] options from parser Returns tuple - (master, slaves, candidates) dictionaries """ try: timeout = options.conn_timeout except: timeout = None if timeout and options.verbosity > 2: print("Note: running with --connection-timeout={0}".format(timeout)) # Create a basic configuration reader, without looking for the tool # my_print_defaults to avoid raising exceptions. This is used for # optimization purposes, to reuse data and avoid repeating the execution of # some methods in the parse_connection method (e.g. searching for # my_print_defaults). config_reader = MyDefaultsReader(options, False) if options.master: try: master_val = parse_connection(options.master, config_reader, options) # Add connection timeout if present in options if timeout: master_val['connection_timeout'] = timeout except FormatError as err: msg = ("Master connection values invalid or cannot be parsed: %s " "(%s)." 
                   % (options.master, err))
            raise UtilRplError(msg)
        except UtilError as err:
            msg = ("Master connection values invalid or cannot be parsed: "
                   "%s (using login-path authentication: %s)"
                   % (options.master, err.errmsg))
            raise UtilRplError(msg)
    else:
        master_val = None

    slaves_val = []
    if options.slaves:
        slaves = options.slaves.split(",")
        for slave in slaves:
            try:
                s_values = parse_connection(slave, config_reader, options)
                # Add connection timeout if present in options
                if timeout:
                    s_values['connection_timeout'] = timeout
                slaves_val.append(s_values)
            except FormatError as err:
                msg = ("Slave connection values invalid or cannot be "
                       "parsed: %s (%s)" % (slave, err))
                raise UtilRplError(msg)
            except UtilError as err:
                msg = ("Slave connection values invalid or cannot be "
                       "parsed: %s (%s)" % (slave, err.errmsg))
                raise UtilRplError(msg)

    candidates_val = []
    if parse_candidates and options.candidates:
        candidates = options.candidates.split(",")
        for slave in candidates:
            try:
                s_values = parse_connection(slave, config_reader, options)
                # Add connection timeout if present in options
                if timeout:
                    s_values['connection_timeout'] = timeout
                candidates_val.append(s_values)
            except FormatError as err:
                msg = ("Candidate connection values invalid or "
                       "cannot be parsed: %s (%s)" % (slave, err))
                raise UtilRplError(msg)
            except UtilError as err:
                msg = ("Candidate connection values invalid or cannot be "
                       "parsed: %s (%s)" % (slave, err.errmsg))
                raise UtilRplError(msg)

    return (master_val, slaves_val, candidates_val)


class Topology(Replication):
    """The Topology class supports administrative operations for an existing
    master-to-many slave topology. It has the following capabilities:

        - determine the health of the topology
        - discover slaves connected to the master provided they have
          --report-host and --report-port specified
        - switchover from master to a candidate slave
        - demote the master to a slave in the topology
        - perform best slave election
        - failover to a specific slave or best of slaves available

    Notes:

        - the switchover and demote methods work with versions prior to and
          after 5.6.5.
        - failover and best slave election require version 5.6.5 and later
          and GTID_MODE=ON.
    """

    def __init__(self, master_vals, slave_vals, options=None,
                 skip_conn_err=False):
        """Constructor

        The slaves parameter requires a dictionary in the form:

        master_vals[in]    master server connection dictionary
        slave_vals[in]     list of slave server connection dictionaries
        options[in]        options dictionary
          verbose          print extra data during operations (optional)
                           Default = False
          ping             maximum number of seconds to ping
                           Default = 3
          max_delay        maximum delay in seconds slave can be behind
                           master and still be 'Ok'. Default = 0
          max_position     maximum position slave can be behind master's
                           binlog and still be 'Ok'.
                           Default = 0
        skip_conn_err[in]  if True, do not fail on connection failure
                           Default = False
        """
        super(Topology, self).__init__(master_vals, slave_vals,
                                       options or {})
        # Get options needed
        self.options = options or {}
        self.verbosity = self.options.get("verbosity", 0)
        self.verbose = self.verbosity > 0
        self.quiet = self.options.get("quiet", False)
        self.pingtime = self.options.get("ping", 3)
        self.max_delay = self.options.get("max_delay", 0)
        self.max_pos = self.options.get("max_position", 0)
        self.force = self.options.get("force", False)
        self.pedantic = self.options.get("pedantic", False)
        self.before_script = self.options.get("before", None)
        self.after_script = self.options.get("after", None)
        self.timeout = int(self.options.get("timeout", 300))
        self.logging = self.options.get("logging", False)
        self.rpl_user = self.options.get("rpl_user", None)
        self.script_threshold = self.options.get("script_threshold", None)
        self.master_vals = None

        # Attempt to connect to all servers
        self.master, self.slaves = self._connect_to_servers(master_vals,
                                                            slave_vals,
                                                            self.options,
                                                            skip_conn_err)
        self.discover_slaves(output_log=True)

    def _report(self, message, level=logging.INFO, print_msg=True):
        """Log message if logging is on

        This method will log the message presented if the log is turned on.
        Specifically, if options['log_file'] is not None. It will also
        print the message to stdout.

        message[in]    message to be printed
        level[in]      level of message to log. Default = INFO
        print_msg[in]  if True, print the message to stdout. Default = True
        """
        # First, print the message.
        if print_msg and not self.quiet:
            print(message)
        # Now log message if logging turned on
        if self.logging:
            logging.log(int(level), message.strip("#").strip(' '))

    def _connect_to_servers(self, master_vals, slave_vals, options,
                            skip_conn_err=True):
        """Connect to the master and one or more slaves

        This method will attempt to connect to the master and slaves
        provided. For slaves, if the --force option is specified, it will
        skip slaves that cannot be reached, setting the slave dictionary to
        None in the list instead of a Slave class instance.

        The dictionary of the list of slaves returned is as follows.

        slave_dict = {
          'host'     : # host name for slave
          'port'     : # port for slave
          'instance' : Slave class instance or None if cannot connect
        }

        master_vals[in]    master server connection dictionary
        slave_vals[in]     list of slave server connection dictionaries
        options[in]        options dictionary
          verbose          print extra data during operations (optional)
                           Default = False
          ping             maximum number of seconds to ping
                           Default = 3
          max_delay        maximum delay in seconds slave can be behind
                           master and still be 'Ok'. Default = 0
          max_position     maximum position slave can be behind master's
                           binlog and still be 'Ok'. Default = 0
        skip_conn_err[in]  if True, do not fail on connection failure
                           Default = True

        Returns tuple - master instance, list of dictionary slave instances
        """
        master = None
        slaves = []

        # Set verbose value.
        verbose = self.options.get("verbosity", 0) > 0

        # attempt to connect to the master
        if master_vals:
            master = get_server('master', master_vals, True, verbose=verbose)
            if self.logging:
                log_server_version(master)

        for slave_val in slave_vals:
            host = slave_val['host']
            port = slave_val['port']
            try:
                slave = get_server('slave', slave_val, True, verbose=verbose)
                if self.logging:
                    log_server_version(slave)
            except:
                msg = "Cannot connect to slave %s:%s as user '%s'."
% \ (host, port, slave_val['user']) if skip_conn_err: if self.verbose: self._report("# ERROR: %s" % msg, logging.ERROR) slave = None else: raise UtilRplError(msg) slave_dict = { 'host': host, # host name for slave 'port': port, # port for slave 'instance': slave, # Slave class instance or None } slaves.append(slave_dict) return (master, slaves) def _is_connected(self): """Check to see if all servers are connected. Method will skip any slaves that do not have an instance (offline) but requires the master be instantiated and connected. The method will also skip the checks altogether if self.force is specified. Returns bool - True if all connected or self.force is specified. """ # Skip check if --force specified. if self.force: return True if self.master is None or not self.master.is_alive(): return False for slave_dict in self.slaves: slave = slave_dict['instance'] if slave is not None and not slave.is_alive(): return False return True def remove_discovered_slaves(self): """Reset the slaves list to the original list at instantiation This method is used in conjunction with discover_slaves to remove any discovered slave from the slaves list. Once this is done, a call to discover slaves will rediscover the slaves. This is helpful for when failover occurs and a discovered slave is used for the new master. """ new_list = [] for slave_dict in self.slaves: if not slave_dict.get("discovered", False): new_list.append(slave_dict) self.slaves = new_list def check_master_info_type(self, repo="TABLE"): """Check all slaves for master_info_repository=repo repo[in] value for master info = "TABLE" or "FILE" Default is "TABLE" Returns bool - True if master_info_repository == repo """ for slave_dict in self.slaves: slave = slave_dict['instance'] if slave is not None: res = slave.show_server_variable("master_info_repository") if not res or res[0][1].upper() != repo.upper(): return False return True def discover_slaves(self, skip_conn_err=True, output_log=False): """Discover slaves connected to the master skip_conn_err[in] Skip connection errors to the slaves (i.e. log the errors but do not raise an exception), by default True. output_log[in] Output the logged information (i.e. print the information of discovered slave to the output), by default False. Returns bool - True if new slaves found """ # See if the user wants us to discover slaves. discover = self.options.get("discover", None) if not discover or not self.master: return # Get user and password (support login-path) try: user, password = parse_user_password(discover, options=self.options) except FormatError: raise UtilError (USER_PASSWORD_FORMAT.format("--discover-slaves")) # Find discovered slaves new_slaves_found = False self._report("# Discovering slaves for master at " "{0}:{1}".format(self.master.host, self.master.port)) discovered_slaves = self.master.get_slaves(user, password) for slave in discovered_slaves: if "]" in slave: host, port = slave.split("]:") host = "{0}]".format(host) else: host, port = slave.split(":") msg = "Discovering slave at {0}:{1}".format(host, port) self._report(msg, logging.INFO, False) if output_log: print("# {0}".format(msg)) # Skip hosts that are not registered properly if host == 'unknown host': continue # Check to see if the slave is already in the list else: found = False # Eliminate if already a slave for slave_dict in self.slaves: if slave_dict['host'] == host and \ int(slave_dict['port']) == int(port): found = True break if not found: # Now we must attempt to connect to the slave. 
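                        # Note: the discovered slave is assumed to accept
                        # the same replication credentials and SSL settings
                        # as the master connection; those values are copied
                        # into the connection dictionary below.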
conn_dict = { 'conn_info': {'user': user, 'passwd': password, 'host': host, 'port': port, 'socket': None, 'ssl_ca': self.master.ssl_ca, 'ssl_cert': self.master.ssl_cert, 'ssl_key': self.master.ssl_key, 'ssl': self.master.ssl}, 'role': slave, 'verbose': self.options.get("verbosity", 0) > 0, } slave_conn = Slave(conn_dict) try: slave_conn.connect() # Skip discovered slaves that are not connected # to the master (i.e. IO thread is not running) if slave_conn.is_connected(): self.slaves.append({'host': host, 'port': port, 'instance': slave_conn, 'discovered': True}) msg = "Found slave: {0}:{1}".format(host, port) self._report(msg, logging.INFO, False) if output_log: print("# {0}".format(msg)) if self.logging: log_server_version(slave_conn) new_slaves_found = True else: msg = ("Slave skipped (IO not running): " "{0}:{1}").format(host, port) self._report(msg, logging.WARN, False) if output_log: print("# {0}".format(msg)) except UtilError, e: msg = ("Cannot connect to slave {0}:{1} as user " "'{2}'.").format(host, port, user) if skip_conn_err: msg = "{0} {1}".format(msg, e.errmsg) self._report(msg, logging.WARN, False) if output_log: print("# {0}".format(msg)) else: raise UtilRplError(msg) return new_slaves_found def _get_server_gtid_data(self, server, role): """Retrieve the GTID information from the server. This method builds a tuple of three lists corresponding to the three GTID lists (executed, purged, owned) retrievable via the global variables. It generates lists suitable for format and printing. role[in] role of the server (used for report generation) Returns tuple - (executed list, purged list, owned list) """ executed = [] purged = [] owned = [] if server.supports_gtid() == "NO": return (executed, purged, owned) try: gtids = server.get_gtid_status() except UtilError, e: self._report("# ERROR retrieving GTID information: %s" % e.errmsg, logging.ERROR) return None for gtid in gtids[0]: for row in gtid.split("\n"): if len(row): executed.append((server.host, server.port, role, row.strip(","))) for gtid in gtids[1]: for row in gtid.split("\n"): if len(row): purged.append((server.host, server.port, role, row.strip(","))) for gtid in gtids[2]: for row in gtid.split("\n"): if len(row): owned.append((server.host, server.port, role, row.strip(","))) return (executed, purged, owned) def _check_switchover_prerequisites(self, candidate=None): """Check prerequisites for performing switchover This method checks the prerequisites for performing a switch from a master to a candidate slave. candidate[in] if supplied, use this candidate instead of the candidate supplied by the user. Must be instance of Master class. Returns bool - True if success, raises error if not """ if candidate is None: candidate = self.options.get("candidate", None) assert (candidate is not None), "A candidate server is required." assert (type(candidate) == Master), \ "A candidate server must be a Master class instance." # If master has GTID=ON, ensure all servers have GTID=ON gtid_enabled = self.master.supports_gtid() == "ON" if gtid_enabled: gtid_ok = True for slave_dict in self.slaves: slave = slave_dict['instance'] # skip dead or zombie slaves if not slave or not slave.is_alive(): continue if slave.supports_gtid() != "ON": gtid_ok = False if not gtid_ok: msg = "GTIDs are enabled on the master but not " + \ "on all of the slaves." 
self._report(msg, logging.CRITICAL) raise UtilRplError(msg) elif self.verbose: self._report("# GTID_MODE=ON is set for all servers.") # Need Slave class instance to check master and replication user slave = self._change_role(candidate) # Check eligibility candidate_ok = self._check_candidate_eligibility(slave.host, slave.port, slave) if not candidate_ok[0]: # Create replication user if --force is specified. if self.force and candidate_ok[1] == "RPL_USER": user, passwd = slave.get_rpl_user() res = candidate.create_rpl_user(slave.host, slave.port, user, passwd, self.ssl) if not res[0]: print("# ERROR: {0}".format(res[1])) self._report(res[1], logging.CRITICAL, False) else: msg = candidate_ok[2] self._report(msg, logging.CRITICAL) raise UtilRplError(msg) return True def _get_rpl_user(self, server): """Get the replication user This method returns the user and password for the replication user as read from the Slave class. Returns tuple - user, password """ # Get replication user from server if rpl_user not specified if self.rpl_user is None: slave = self._change_role(server) user, passwd = slave.get_rpl_user() return (user, passwd) # Get user and password (support login-path) try: user, passwd = parse_user_password(self.rpl_user, options=self.options) except FormatError: raise UtilError (USER_PASSWORD_FORMAT.format("--rpl-user")) return (user, passwd) def run_script(self, script, quiet, options=None): """Run an external script This method executes an external script. Result is checked for success (res == 0). If the user specified a threshold and the threshold is exceeded, an error is raised. script[in] script to execute quiet[in] if True, do not print messages options[in] options for script Default is none (no options) Returns bool - True = success """ if options is None: options = [] if script is None: return self._report("# Spawning external script.") res = execute_script(script, None, options, self.verbose) if self.script_threshold and res >= int(self.script_threshold): raise UtilRplError("External script '{0}' failed. Result = {1}.\n" "Specified threshold exceeded. Operation abort" "ed.\nWARNING: The operation did not complete." " Depending on when the external script was " "called, you should check the topology " "for inconsistencies.".format(script, res)) if res == 0: self._report("# Script completed Ok.") elif not quiet: self._report("ERROR: %s Script failed. Result = %s" % (script, res), logging.ERROR) def _check_filters(self, master, slave): """Check filters to ensure they are compatible with the master. This method compares the binlog_do_db with the replicate_do_db and the binlog_ignore_db with the replicate_ignore_db on the master and slave to ensure the candidate slave is not filtering out different databases than the master. 
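        For example, a master running with binlog-do-db=db1 and a candidate
        slave running with replicate-do-db=db2 would fail this check, since
        promoting that slave would silently change what is replicated.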
master[in] the Master class instance of the master slave[in] the Slave class instance of the slave Returns bool - True = filters agree """ m_filter = master.get_binlog_exceptions() s_filter = slave.get_binlog_exceptions() failed = False if len(m_filter) != len(s_filter): failed = True elif len(m_filter) == 0: return True elif m_filter[0][1] != s_filter[0][1] or \ m_filter[0][2] != s_filter[0][2]: failed = True if failed: if self.verbose and not self.quiet: fmt = self.options.get("format", "GRID") rows = [] if len(m_filter) == 0: rows.append(('MASTER', '', '')) else: rows.append(m_filter[0]) if len(s_filter) == 0: rows.append(('SLAVE', '', '')) else: rows.append(s_filter[0]) cols = ["role", "*_do_db", "*_ignore_db"] self._report("# Filter Check Failed.", logging.ERROR) print_list(sys.stdout, fmt, cols, rows) return False return True def _check_candidate_eligibility(self, host, port, slave, check_master=True, quiet=False): """Perform sanity checks for slave promotion This method checks the slave candidate to ensure it meets the requirements as follows. Check Name Description ----------- -------------------------------------------------- CONNECTED slave is connected to the master GTID slave has GTID_MODE = ON if master has GTID = ON (GTID only) BEHIND slave is not behind master (non-GTID only) FILTER slave's filters match the master RPL_USER slave has rpl user defined BINLOG slave must have binary logging enabled host[in] host name for the slave (used for errors) port[in] port for the slave (used for errors) slave[in] Slave class instance of candidate check_master[in] if True, check that slave is connected to the master quiet[in] if True, do not print messages even if verbosity > 0 Returns tuple (bool, check_name, string) - (True, "", "") = candidate is viable, (False, check_name, error_message) = candidate is not viable """ assert (slave is not None), "No Slave instance for eligibility check." gtid_enabled = slave.supports_gtid() == "ON" # Is slave connected to master? if self.verbose and not quiet: self._report("# Checking eligibility of slave %s:%s for " "candidate." % (host, port)) if check_master: msg = "# Slave connected to master ... %s" if not slave.is_alive(): if self.verbose and not quiet: self._report(msg % "FAIL", logging.WARN) return (False, "CONNECTED", "Connection to slave server lost.") if not slave.is_configured_for_master(self.master): if self.verbose and not quiet: self._report(msg % "FAIL", logging.WARN) return (False, "CONNECTED", "Candidate is not connected to the correct master.") if self.verbose and not quiet: self._report(msg % "Ok") # If GTID is active on master, ensure slave is on too. if gtid_enabled: msg = "# GTID_MODE=ON ... %s" if slave.supports_gtid() != "ON": if self.verbose and not quiet: self._report(msg % "FAIL", logging.WARN) return (False, "GTID", "Slave does not have GTID support enabled.") if self.verbose and not quiet: self._report(msg % "Ok") # Check for slave behind master if not gtid_enabled and check_master: msg = "# Slave not behind master ... %s" rpl = Replication(self.master, slave, self.options) errors = rpl.check_slave_delay() if errors != []: if self.verbose and not quiet: self._report(msg % "FAIL", logging.WARN) return (False, "BEHIND", " ".join(errors)) if self.verbose and not quiet: self._report(msg % "Ok") # Check filters unless force is on. if not self.force and check_master: msg = "# Logging filters agree ... 
%s" if not self._check_filters(self.master, slave): if self.verbose and not quiet: self._report(msg % "FAIL", logging.WARN) return (False, "FILTERS", "Master and slave filters differ.") elif self.verbose and not quiet: self._report(msg % "Ok") # If no GTIDs, we need binary logging enabled on candidate. if not gtid_enabled: msg = "# Binary logging turned on ... %s" if not slave.binlog_enabled(): if self.verbose and not quiet: self._report(msg % "FAIL", logging.WARN) return (False, "BINLOG", "Binary logging is not enabled on the candidate.") if self.verbose and not quiet: self._report(msg % "Ok") # Check replication user - must exist with correct privileges try: user, _ = slave.get_rpl_user() except UtilError: if not self.rpl_user: raise # Get user and password (support login-path) try: user, _ = parse_user_password(self.rpl_user) except FormatError: raise UtilError (USER_PASSWORD_FORMAT.format("--rpl-user")) # Make new master forget was a slave using slave methods s_candidate = self._change_role(slave, slave=False) res = s_candidate.get_rpl_users() l = len(res) user, host, _ = res[l - 1] # raise msg = "# Replication user exists ... %s" if user is None or slave.check_rpl_user(user, slave.host) != []: if not self.force: if self.verbose and not quiet: self._report(msg % "FAIL", logging.WARN) return (False, "RPL_USER", "Candidate slave is missing replication user.") else: self._report("Replication user not found but --force used.", logging.WARN) elif self.verbose and not quiet: self._report(msg % "Ok") return (True, "", "") def read_all_retrieved_gtids(self, slave): """Ensure any GTIDS in relay log are read This method iterates over all slaves ensuring any events read from the master but not executed (read) from the relay log are read. This step is necessary for failover to ensure all transactions are applied to all slaves before the new master is selected. slave[in] Server instance of the slave """ # skip dead or zombie slaves if slave is None or not slave.is_alive(): return gtids = slave.get_retrieved_gtid_set() if gtids: if self.verbose and not self.quiet: self._report("# Reading events in relay log for slave " "%s:%s" % (slave.host, slave.port)) try: slave.exec_query(_GTID_WAIT % (gtids.strip(','), self.timeout)) except UtilRplError as err: raise UtilRplError("Error executing %s: %s" % ((_GTID_WAIT % (gtids.strip(','), self.timeout)), err.errmsg)) def _has_missing_transactions(self, candidate, slave): """Determine if there are transactions on the slave not on candidate This method uses the function gtid_subset() to determine if there are GTIDs (transactions) on the slave that are not on the candidate. Return code fopr query should be 0 when there are missing transactions, 1 if not, and -1 if there is a non-numeric result code generated. candidate[in] Server instance of candidate (new master) slave[in] Server instance of slave to check Returns boolean - True if there are transactions else False """ slave_exec_gtids = slave.get_executed_gtid_set() slave_retrieved_gtids = slave.get_retrieved_gtid_set() cand_slave = self._change_role(candidate) candidate_exec_gtids = cand_slave.get_executed_gtid_set() slave_gtids = ",".join([slave_exec_gtids.strip(","), slave_retrieved_gtids.strip(",")]) res = slave.exec_query("SELECT gtid_subset('%s', '%s')" % (slave_gtids, candidate_exec_gtids.strip(","))) if res and res[0][0].isdigit(): result_code = int(res[0][0]) else: result_code = -1 if self.verbose and not self.quiet: if result_code != 1: self._report("# Missing transactions found on %s:%s. 
" "SELECT gtid_subset() = %s" % (slave.host, slave.port, result_code)) else: self._report("# No missing transactions found on %s:%s. " "Skipping connection of candidate as slave." % (slave.host, slave.port)) return result_code != 1 def _prepare_candidate_for_failover(self, candidate, user, passwd=""): """Prepare candidate slave for slave promotion (in failover) This method uses the candidate slave specified and connects it to each slave in the topology performing a GTID_SUBSET query to wait for the candidate (acting as a slave) to catch up. This ensures the candidate is now the 'best' or 'most up-to-date' slave in the topology. Method works only for GTID-enabled candidate servers. candidate[in] Slave class instance of candidate user[in] replication user passwd[in] replication user password Returns bool - True if successful, raises exception if failure and forst is False """ assert (candidate is not None), "Candidate must be a Slave instance." if candidate.supports_gtid() != "ON": msg = "Candidate does not have GTID turned on or " + \ "does not support GTIDs." self._report(msg, logging.CRITICAL) raise UtilRplError(msg) lock_options = { 'locking': 'flush', 'verbosity': 3 if self.verbose else self.verbosity, 'silent': self.quiet, 'rpl_mode': "master", } hostport = "%s:%s" % (candidate.host, candidate.port) for slave_dict in self.slaves: s_host = slave_dict['host'] s_port = slave_dict['port'] temp_master = slave_dict['instance'] # skip dead or zombie slaves if temp_master is None or not temp_master.is_alive(): continue # Gather retrieved_gtid_set to execute all events on slaves still # in the slave's relay log self.read_all_retrieved_gtids(temp_master) # Sanity check: ensure candidate and slave are not the same. if candidate.is_alias(s_host) and \ int(s_port) == int(candidate.port): continue # Check for missing transactions. No need to connect to slave if # there are no transactions (GTIDs) to retrieve if not self._has_missing_transactions(candidate, temp_master): continue try: candidate.stop() except UtilError as err: if not self.quiet: self._report("Candidate {0} failed to stop. " "{1}".format(hostport, err.errmsg)) # Block writes to slave (temp_master) lock_ftwrl = Lock(temp_master, [], lock_options) temp_master.set_read_only(True) # Connect candidate to slave as its temp_master if self.verbose and not self.quiet: self._report("# Connecting candidate to %s:%s as a temporary " "slave to retrieve unprocessed GTIDs." % (s_host, s_port)) if not candidate.switch_master(temp_master, user, passwd, False, None, None, self.verbose and not self.quiet): msg = "Cannot switch candidate to slave for " + \ "slave promotion process." self._report(msg, logging.CRITICAL) raise UtilRplError(msg) # Unblock writes to slave (temp_master). temp_master.set_read_only(False) lock_ftwrl.unlock() try: candidate.start() candidate.exec_query("COMMIT") except UtilError as err: if not self.quiet: self._report("Candidate {0} failed to start. " "{1}".format(hostport, err.errmsg)) if self.verbose and not self.quiet: self._report("# Waiting for candidate to catch up to slave " "%s:%s." % (s_host, s_port)) temp_master_gtid = temp_master.exec_query(_GTID_EXECUTED) candidate.wait_for_slave_gtid(temp_master_gtid, self.timeout, self.verbose and not self.quiet) # Disconnect candidate from slave (temp_master) candidate.stop() return True def _check_slaves_status(self, stop_on_error=False): """Check all slaves for error before performing failover. 
        This method checks the status of all slaves (before the new master
        catches up with them) using SHOW SLAVE STATUS, reporting any errors
        found and warning the user if failover might result in an
        inconsistent replication topology. By default the process will not
        stop, but if the --pedantic option is used then failover will stop
        with an error.

        stop_on_error[in]    Define the default behavior of failover if
                             errors are found. By default: False (do not
                             stop on errors).
        """
        for slave_dict in self.slaves:
            s_host = slave_dict['host']
            s_port = slave_dict['port']
            slave = slave_dict['instance']

            # Verify if the slave is alive
            if not slave or not slave.is_alive():
                msg = "Slave '{host}@{port}' is not alive.".format(
                    host=s_host, port=s_port)
                # Print warning or raise an error according to the default
                # failover behavior and defined options.
                if ((stop_on_error and not self.force) or
                        (not stop_on_error and self.pedantic)):
                    print("# ERROR: {0}".format(msg))
                    self._report(msg, logging.CRITICAL, False)
                    if stop_on_error and not self.force:
                        ignore_opt = "with the --force"
                    else:
                        ignore_opt = "without the --pedantic"
                    ignore_tip = ("Note: To ignore this issue use the "
                                  "utility {0} option.").format(ignore_opt)
                    raise UtilRplError("{err} {note}".format(
                        err=msg, note=ignore_tip))
                else:
                    print("# WARNING: {0}".format(msg))
                    self._report(msg, logging.WARN, False)
                continue

            # Check SQL thread and errors (no need to check for IO errors)
            # Note: IO errors are expected as the master is down
            res = slave.get_sql_error()

            # First, check if server is acting as a slave
            if not res:
                msg = ("Server '{host}@{port}' is not acting as a "
                       "slave.").format(host=s_host, port=s_port)
                # Print warning or raise an error according to the default
                # failover behavior and defined options.
                if ((stop_on_error and not self.force) or
                        (not stop_on_error and self.pedantic)):
                    print("# ERROR: {0}".format(msg))
                    self._report(msg, logging.CRITICAL, False)
                    if stop_on_error and not self.force:
                        ignore_opt = "with the --force"
                    else:
                        ignore_opt = "without the --pedantic"
                    ignore_tip = ("Note: To ignore this issue use the "
                                  "utility {0} option.").format(ignore_opt)
                    raise UtilRplError("{err} {note}".format(
                        err=msg, note=ignore_tip))
                else:
                    print("# WARNING: {0}".format(msg))
                    self._report(msg, logging.WARN, False)
                continue

            # Now, check the SQL thread status
            sql_running = res[0]
            sql_errorno = res[1]
            sql_error = res[2]
            if sql_running == "No" or sql_errorno or sql_error:
                msg = ("Problem detected with SQL thread for slave "
                       "'{host}'@'{port}' that can result in an unstable "
                       "topology.").format(host=s_host, port=s_port)
                msg_thread = " - SQL thread running: {0}".format(sql_running)
                if not sql_errorno and not sql_error:
                    msg_error = " - SQL error: None"
                else:
                    msg_error = (" - SQL error: {errno} - "
                                 "{errmsg}").format(errno=sql_errorno,
                                                    errmsg=sql_error)
                msg_tip = ("Check the slave server log to identify "
                           "the problem and fix it. For more information, "
                           "see: http://dev.mysql.com/doc/refman/5.6/en/"
                           "replication-problems.html")
                # Print warning or raise an error according to the default
                # failover behavior and defined options.
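                # (The same rule applies to each check in this method: the
                # issue is fatal when stop_on_error is requested and --force
                # was not given, or when --pedantic is used; otherwise it is
                # only reported as a warning.)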
if ((stop_on_error and not self.force) or (not stop_on_error and self.pedantic)): print("# ERROR: {0}".format(msg)) self._report(msg, logging.CRITICAL, False) print("# {0}".format(msg_thread)) self._report(msg_thread, logging.CRITICAL, False) print("# {0}".format(msg_error)) self._report(msg_error, logging.CRITICAL, False) print("# Tip: {0}".format(msg_tip)) if stop_on_error and not self.force: ignore_opt = "with the --force" else: ignore_opt = "without the --pedantic" ignore_tip = ("Note: To ignore this issue use the " "utility {0} option.").format(ignore_opt) raise UtilRplError("{err} {note}".format(err=msg, note=ignore_tip)) else: print("# WARNING: {0}".format(msg)) self._report(msg, logging.WARN, False) print("# {0}".format(msg_thread)) self._report(msg_thread, logging.WARN, False) print("# {0}".format(msg_error)) self._report(msg_error, logging.WARN, False) print("# Tip: {0}".format(msg_tip)) def find_errant_transactions(self): """Check all slaves for the existence of errant transactions. In particular, for all slaves it search for executed transactions that are not found on the other slaves (only on one slave) and not from the current master. Returns a list of tuples, each tuple containing the slave host, port and set of corresponding errant transactions, i.e.: [(host1, port1, set1), ..., (hostn, portn, setn)]. If no errant transactions are found an empty list is returned. """ res = [] # Get master UUID (if master is available otherwise get it from slaves) use_master_uuid_from_slave = True if self.master: master_uuid = self.master.get_uuid() use_master_uuid_from_slave = False # Check all slaves for executed transactions not in other slaves for slave_dict in self.slaves: slave = slave_dict['instance'] # Skip not defined or dead slaves if not slave or not slave.is_alive(): continue tnx_set = slave.get_executed_gtid_set() # Get master UUID from slave if master is not available if use_master_uuid_from_slave: master_uuid = slave.get_master_uuid() slave_set = set() for others_slave_dic in self.slaves: if (slave_dict['host'] != others_slave_dic['host'] or slave_dict['port'] != others_slave_dic['port']): other_slave = others_slave_dic['instance'] # Skip not defined or dead slaves if not other_slave or not other_slave.is_alive(): continue errant_res = other_slave.exec_query( _GTID_SUBTRACT_TO_EXECUTED.format(tnx_set)) # Only consider the transaction as errant if not from the # current master. # Note: server UUID can appear with mixed cases (e.g. for # 5.6.9 servers the server_uuid is lower case and appears # in upper cases in the GTID_EXECUTED set. errant_set = set() for tnx in errant_res: if tnx[0] and not tnx[0].lower().startswith( master_uuid.lower()): errant_set.update(tnx[0].split(',\n')) # Errant transactions exist on only one slave, therefore if # the returned set is empty the loop can be break # (no need to check the remaining slaves). if not errant_set: break slave_set = slave_set.union(errant_set) # Store result if slave_set: res.append((slave_dict['host'], slave_dict['port'], slave_set)) return res def _check_all_slaves(self, new_master): """Check all slaves for errors. Check each slave's status for errors during replication. If errors are found, they are printed as warning statements to stdout. 
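        The connection of each slave is rechecked about once per second for
        up to self.pingtime seconds before the slave is reported as failed.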
        new_master[in]    the new master as a Master class instance
        """
        slave_errors = []
        for slave_dict in self.slaves:
            slave = slave_dict['instance']
            # skip dead or zombie slaves
            if slave is None or not slave.is_alive():
                continue
            rpl = Replication(new_master, slave, self.options)
            # Use pingtime to check slave status
            iteration = 0
            slave_ok = True
            while iteration < int(self.pingtime):
                res = rpl.check_slave_connection()
                if not res and iteration >= self.pingtime:
                    slave_error = None
                    if self.verbose:
                        res = slave.get_io_error()
                        slave_error = "%s:%s" % (res[1], res[2])
                    slave_errors.append((slave_dict['host'],
                                         slave_dict['port'],
                                         slave_error))
                    slave_ok = False
                    if self.verbose and not self.quiet:
                        self._report("# %s:%s status: FAIL " %
                                     (slave_dict['host'],
                                      slave_dict['port']), logging.WARN)
                elif res:
                    iteration = int(self.pingtime) + 1
                else:
                    time.sleep(1)
                    iteration += 1
            if slave_ok and self.verbose and not self.quiet:
                self._report("# %s:%s status: Ok " % (slave_dict['host'],
                                                      slave_dict['port']))

        if len(slave_errors) > 0:
            self._report("WARNING - The following slaves failed to connect "
                         "to the new master:", logging.WARN)
            for error in slave_errors:
                self._report(" - %s:%s" % (error[0], error[1]),
                             logging.WARN)
                if self.verbose and error[2] is not None:
                    self._report(error[2], logging.WARN)
                else:
                    print
            return False

        return True

    def remove_slave(self, slave):
        """Remove a slave from the slaves dictionary list

        slave[in]    the dictionary for the slave to remove
        """
        for i, slave_dict in enumerate(self.slaves):
            if (slave_dict['instance'] and
                    slave_dict['instance'].is_alias(slave['host']) and
                    int(slave_dict['port']) == int(slave['port'])):
                # Disconnect to satisfy new server restrictions on
                # termination
                self.slaves[i]['instance'].disconnect()
                self.slaves.pop(i)
                break

    def gtid_enabled(self):
        """Check if topology has GTID turned on.

        This method checks if GTID mode is turned ON for all servers in the
        replication topology, skipping the check for not available servers.

        Returns bool - True = GTID_MODE=ON for all available servers
                       (master and slaves) in the replication topology.
        """
        if self.master and self.master.supports_gtid() != "ON":
            return False  # GTID disabled or not supported.
        for slave_dict in self.slaves:
            slave = slave_dict['instance']
            # skip dead or zombie slaves
            if slave is None or not slave.is_alive():
                continue
            if slave.supports_gtid() != "ON":
                return False  # GTID disabled or not supported.
        # GTID enabled for all topology (excluding not available servers).
        return True

    def get_servers_with_gtid_not_on(self):
        """Get the list of servers from the topology with GTID turned off.

        Note: not connected slaves will be ignored

        Returns a list of tuples identifying the slaves (host, port,
        gtid_mode) with GTID_MODE=OFF or GTID_MODE=NO (i.e., not available).
        """
        res = []
        # Check master GTID_MODE
        if self.master:
            gtid_mode = self.master.supports_gtid()
            if gtid_mode != "ON":
                res.append((self.master.host, self.master.port, gtid_mode))
        # Check slaves GTID_MODE
        for slave_dict in self.slaves:
            slave = slave_dict['instance']
            # skip not available or not alive slaves
            if not slave or not slave.is_alive():
                continue
            gtid_mode = slave.supports_gtid()
            if gtid_mode != "ON":
                res.append((slave_dict['host'], slave_dict['port'],
                            gtid_mode))

        return res

    def get_health(self):
        """Retrieve the replication health for the master and slaves.

        This method will retrieve the replication health of the topology.
        This includes the following for each server.
        - host       : host name
        - port       : connection port
        - role       : "MASTER" or "SLAVE"
        - state      : UP = connected, WARN = cannot connect but can ping,
                       DOWN = cannot connect nor ping
        - gtid       : ON = gtid supported and turned on, OFF = supported
                       but not enabled, NO = not supported
        - rpl_health : (master) binlog enabled, (slave) IO thread is
                       running, SQL thread is running, no errors, slave
                       delay < max_delay, read log pos + max_position <
                       master's log position

        Note: Will show 'ERROR' if there are multiple errors encountered,
        otherwise will display the health check that failed.

        If verbosity is set, it will show the following additional
        information.

        (master)
          - server version, binary log file, position

        (slaves)
          - server version, master's binary log file, master's log position,
            IO_Thread, SQL_Thread, Secs_Behind, Remaining_Delay,
            IO_Error_Num, IO_Error

        Note: The method will return health for the master and slaves or
        just the slaves if no master is specified. In which case, the
        master status shall display "no master specified" instead of a
        status for the connection.

        Returns tuple - (columns, rows)
        """
        rows = []
        columns = []
        columns.extend(_HEALTH_COLS)
        if self.verbosity > 0:
            columns.extend(_HEALTH_DETAIL_COLS)
        if self.master:
            # Get master health
            rpl_health = self.master.check_rpl_health()
            self._report("# Getting health for master: %s:%s." %
                         (self.master.host, self.master.port),
                         logging.INFO, False)
            have_gtid = self.master.supports_gtid()
            master_data = [
                self.master.host,
                self.master.port,
                "MASTER",
                get_server_state(self.master, self.master.host,
                                 self.pingtime, self.verbosity > 0),
                have_gtid,
                "OK" if rpl_health[0] else ", ".join(rpl_health[1]),
            ]

            m_status = self.master.get_status()
            if len(m_status):
                master_log, master_log_pos = m_status[0][0:2]
            else:
                master_log = None
                master_log_pos = 0

            # Show additional details if verbosity turned on
            if self.verbosity > 0:
                master_data.extend([self.master.get_version(), master_log,
                                    master_log_pos, "", "", "", "", "", "",
                                    "", "", ""])
            rows.append(master_data)
            if have_gtid == "ON":
                master_gtids = self.master.exec_query(_GTID_EXECUTED)
        else:
            # No master makes these impossible to determine.
            have_gtid = "OFF"
            master_log = ""
            master_log_pos = ""

        # Get the health of the slaves
        slave_rows = []
        for slave_dict in self.slaves:
            host = slave_dict['host']
            port = slave_dict['port']
            slave = slave_dict['instance']
            if slave is None:
                rpl_health = (False, ["Cannot connect to slave."])
            elif not slave.is_alive():
                # Attempt to reconnect to the database server.
                try:
                    slave.connect()
                    # Connection succeeded.
                    if not slave.is_configured_for_master(self.master):
                        rpl_health = (False,
                                      ["Slave is not connected to master."])
                        slave = None
                except UtilError:
                    # Connection failed.
rpl_health = (False, ["Slave is not alive."]) slave = None elif not self.master: rpl_health = (False, ["No master specified."]) elif not slave.is_configured_for_master(self.master): rpl_health = (False, ["Slave is not connected to master."]) slave = None if self.master and slave is not None: rpl_health = slave.check_rpl_health(self.master, master_log, master_log_pos, self.max_delay, self.max_pos, self.verbosity) # Now, see if filters are in compliance if not self._check_filters(self.master, slave): if rpl_health[0]: errors = rpl_health[1] errors.append("Binary log and Relay log filters " "differ.") rpl_health = (False, errors) slave_data = [ host, port, "SLAVE", get_server_state(slave, host, self.pingtime, self.verbosity > 0), " " if slave is None else slave.supports_gtid(), "OK" if rpl_health[0] else ", ".join(rpl_health[1]), ] # Show additional details if verbosity turned on if self.verbosity > 0: if slave is None: slave_data.extend([""] * 13) else: slave_data.append(slave.get_version()) res = slave.get_rpl_details() if res is not None: slave_data.extend(res) if have_gtid == "ON": gtid_behind = slave.num_gtid_behind(master_gtids) slave_data.extend([gtid_behind]) else: slave_data.extend([""]) else: slave_data.extend([""] * 13) slave_rows.append(slave_data) # order the slaves slave_rows.sort(key=operator.itemgetter(0, 1)) rows.extend(slave_rows) return (columns, rows) def get_server_uuids(self): """Return a list of the server's uuids. Returns list of tuples = (host, port, role, uuid) """ # Get the master's uuid uuids = [] uuids.append((self.master.host, self.master.port, "MASTER", self.master.get_uuid())) for slave_dict in self.slaves: uuids.append((slave_dict['host'], slave_dict['port'], "SLAVE", slave_dict['instance'].get_uuid())) return uuids def get_gtid_data(self): """Get the GTID information from the topology This method retrieves the executed, purged, and owned GTID lists from the servers in the topology. It arranges them into three lists and includes the host name, port, and role of each server. Returns tuple - lists for GTID data """ executed = [] purged = [] owned = [] gtid_data = self._get_server_gtid_data(self.master, "MASTER") if gtid_data is not None: executed.extend(gtid_data[0]) purged.extend(gtid_data[1]) owned.extend(gtid_data[2]) for slave_dict in self.slaves: slave = slave_dict['instance'] if slave is not None: gtid_data = self._get_server_gtid_data(slave, "SLAVE") if gtid_data is not None: executed.extend(gtid_data[0]) purged.extend(gtid_data[1]) owned.extend(gtid_data[2]) return (executed, purged, owned) def get_slaves_dict(self, skip_not_connected=True): """Get a dictionary representation of the slaves in the topology. This function converts the list of slaves in the topology to a dictionary with all elements in the list, using 'host@port' as the key for each element. skip_not_connected[in] Boolean value indicating if not available or not connected slaves should be skipped. By default 'True' (not available slaves are skipped). Return a dictionary representation of the slaves in the topology. Each element has a key with the format 'host@port' and a dictionary value with the corresponding slave's data. 
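        For example (hypothetical host), with the default arguments the
        result takes the form:

            {'host1@3306': {'host': 'host1', 'port': 3306,
                            'instance': <Slave instance>}}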
""" res = {} for slave_dic in self.slaves: slave = slave_dic['instance'] if skip_not_connected: if slave and slave.is_alive(): key = '{0}@{1}'.format(slave_dic['host'], slave_dic['port']) res[key] = slave_dic else: key = '{0}@{1}'.format(slave_dic['host'], slave_dic['port']) res[key] = slave_dic return res def slaves_gtid_subtract_executed(self, gtid_set, multithreading=False): """Subtract GTID_EXECUTED from the given GTID set on all slaves. Compute the difference between the given GTID set and the GTID_EXECUTED set for each slave, providing the sets with the missing GTIDs from the GTID_EXECUTED set that belong to the input GTID set. gtid_set[in] Input GTID set to find the missing element from the GTID_EXECUTED for all slaves. multithreading[in] Flag indicating if multithreading will be used, meaning that the operation will be performed concurrently on all slaves. By default True (concurrent execution). Return a list of tuples with the result for each slave. Each tuple contains the identification of the server (host and port) and a string representing the set of GTIDs from the given set not in the GTID_EXECUTED set of the corresponding slave. """ if multithreading: # Create a pool of threads to execute the method for each slave. pool = ThreadPool(processes=len(self.slaves)) res_lst = [] for slave_dict in self.slaves: slave = slave_dict['instance'] if slave: # Skip non existing (not connected) slaves. thread_res = pool.apply_async(slave.gtid_subtract_executed, (gtid_set, )) res_lst.append((slave.host, slave.port, thread_res)) pool.close() # Wait for all threads to finish here to avoid RuntimeErrors when # waiting for the result of a thread that is already dead. pool.join() # Get the result from each slave and return the results. res = [] for host, port, thread_res in res_lst: res.append((host, port, thread_res.get())) return res else: res = [] # Subtract gtid set on all slaves. for slave_dict in self.slaves: slave = slave_dict['instance'] if slave: # Skip non existing (not connected) slaves. not_in_set = slave.gtid_subtract_executed(gtid_set) res.append((slave.host, slave.port, not_in_set)) return res def check_privileges(self, failover=False, skip_master=False): """Check privileges for the master and all known servers failover[in] if True, check permissions for switchover and failover commands. Default is False. skip_master[in] Skip the check for the master. Returns list - [(user, host)] if not enough permissions, [] if no errors """ servers = [] errors = [] # Collect all users first. if skip_master: for slave_conn in self.slaves: slave = slave_conn['instance'] # A slave instance is None if the connection failed during the # creation of the topology. In this case ignore the slave. if slave is not None: servers.append(slave) else: if self.master is not None: servers.append(self.master) for slave_conn in self.slaves: slave = slave_conn['instance'] # A slave instance is None if the connection failed during # the creation of the topology. In this case ignore the # slave. if slave is not None: servers.append(slave) # If candidates were specified, check those too. 
candidates = self.options.get("candidates", None) candidate_slaves = [] if candidates: self._report("# Checking privileges on candidates.") for candidate in candidates: slave_dict = self.connect_candidate(candidate, False) slave = slave_dict['instance'] if slave is not None: servers.append(slave) candidate_slaves.append(slave) for server in servers: user_inst = User(server, "{0}@{1}".format(server.user, server.host)) if not failover: if not user_inst.has_privilege("*", "*", "SUPER"): errors.append((server.user, server.host, server.port, 'SUPER')) else: if (not user_inst.has_privilege("*", "*", "SUPER") or not user_inst.has_privilege("*", "*", "GRANT OPTION") or not user_inst.has_privilege("*", "*", "SELECT") or not user_inst.has_privilege("*", "*", "RELOAD") or not user_inst.has_privilege("*", "*", "DROP") or not user_inst.has_privilege("*", "*", "CREATE") or not user_inst.has_privilege("*", "*", "INSERT") or not user_inst.has_privilege("*", "*", "REPLICATION SLAVE")): errors.append((server.user, server.host, server.port, 'SUPER, GRANT OPTION, REPLICATION SLAVE, ' 'SELECT, RELOAD, DROP, CREATE, INSERT')) # Disconnect if we connected to any candidates for slave in candidate_slaves: slave.disconnect() return errors def run_cmd_on_slaves(self, command, quiet=False): """Run a command on a list of slaves. This method will run one of the following slave commands. start - START SLAVE; stop - STOP SLAVE; reset - STOP SLAVE; RESET SLAVE; command[in] command to execute quiet[in] If True, do not print messages Default is False :param command: :param quiet: """ assert (self.slaves is not None), \ "No slaves specified or connections failed." self._report("# Performing %s on all slaves." % command.upper()) for slave_dict in self.slaves: hostport = "%s:%s" % (slave_dict['host'], slave_dict['port']) msg = "# Executing %s on slave %s " % (command, hostport) slave = slave_dict['instance'] # skip dead or zombie slaves if not slave or not slave.is_alive(): message = "{0}WARN - cannot connect to slave".format(msg) self._report(message, logging.WARN) elif command == 'reset': if (self.master and not slave.is_configured_for_master(self.master) and not quiet): message = ("{0}WARN - slave is not configured with this " "master").format(msg) self._report(message, logging.WARN) try: slave.reset() except UtilError: if not quiet: message = "{0}WARN - slave failed to reset".format(msg) self._report(message, logging.WARN) else: if not quiet: self._report("{0}Ok".format(msg)) elif command == 'start': if (self.master and not slave.is_configured_for_master(self.master) and not quiet): message = ("{0}WARN - slave is not configured with this " "master").format(msg) self._report(message, logging.WARN) try: slave.start() except UtilError: if not quiet: message = "{0}WARN - slave failed to start".format(msg) self._report(message, logging.WARN) else: if not quiet: self._report("{0}Ok".format(msg)) elif command == 'stop': if (self.master and not slave.is_configured_for_master(self.master) and not quiet): message = ("{0}WARN - slave is not configured with this " "master").format(msg) self._report(message, logging.WARN) elif not slave.is_connected() and not quiet: message = ("{0}WARN - slave is not connected to " "master").format(msg) self._report(message, logging.WARN) try: slave.stop() except UtilError: if not quiet: message = "{0}WARN - slave failed to stop".format(msg) self._report(message, logging.WARN) else: if not quiet: self._report("{0}Ok".format(msg)) def connect_candidate(self, candidate, master=True): """Parse and connect 
to the candidate This method parses the candidate string and returns a slave dictionary if master=False else returns a Master class instance. candidate[in] candidate connection string master[in] if True, make Master class instance Returns slave_dict or Master class instance """ # Need instance of Master class for operation conn_dict = { 'conn_info': candidate, 'quiet': True, 'verbose': self.verbose, } if master: m_candidate = Master(conn_dict) m_candidate.connect() return m_candidate else: s_candidate = Slave(conn_dict) s_candidate.connect() slave_dict = { 'host': s_candidate.host, 'port': s_candidate.port, 'instance': s_candidate, } return slave_dict def switchover(self, candidate): """Perform switchover from master to candidate slave. This method switches the role of master to a candidate slave. The candidate is checked for viability before the switch is made. If the user specified --demote-master, the method will make the old master a slave of the candidate. candidate[in] the connection information for the --candidate option Return bool - True = success, raises exception on error """ # Need instance of Master class for operation m_candidate = self.connect_candidate(candidate) # Switchover needs to succeed and prerequisites must be met else abort. self._report("# Checking candidate slave prerequisites.") try: self._check_switchover_prerequisites(m_candidate) except UtilError, e: self._report("ERROR: %s" % e.errmsg, logging.ERROR) if not self.force: return # Check if the slaves are configured for the specified master self._report("# Checking slaves configuration to master.") for slave_dict in self.slaves: slave = slave_dict['instance'] # Skip not defined or alive slaves (Warning displayed elsewhere) if not slave or not slave.is_alive(): continue if not slave.is_configured_for_master(self.master): # Slave not configured for master (i.e. not in topology) msg = ("Slave {0}:{1} is not configured with master {2}:{3}" ".").format(slave_dict['host'], slave_dict['port'], self.master.host, self.master.port) print("# ERROR: {0}".format(msg)) self._report(msg, logging.ERROR, False) if not self.force: raise UtilRplError("{0} Note: If you want to ignore this " "issue, please use the utility with " "the --force option.".format(msg)) # Check rpl-user definitions if self.verbose and self.rpl_user: if self.check_master_info_type("TABLE"): msg = ("# When the master_info_repository variable is set to" " TABLE, the --rpl-user option is ignored and the" " existing replication user values are retained.") self._report(msg, logging.INFO) self.rpl_user = None else: msg = ("# When the master_info_repository variable is set to" " FILE, the --rpl-user option may be used only if the" " user specified matches what is shown in the SLAVE" " STATUS output unless the --force option is used.") self._report(msg, logging.INFO) user, passwd = self._get_rpl_user(m_candidate) if not passwd: passwd = '' if not self.check_master_info_type("TABLE"): slave_candidate = self._change_role(m_candidate, slave=True) rpl_master_user = slave_candidate.get_rpl_master_user() if not self.force: if (user != rpl_master_user): msg = ("The replication user specified with --rpl-user " "does not match the existing replication user.\n" "Use the --force option to use the " "replication user specified with --rpl-user.") self._report("ERROR: %s" % msg, logging.ERROR) return # Can't get rpl pass from remote master_repo=file # but it can get the current used hashed to be compared. 
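# Illustrative aside: the hash check performed below reduces to the following standalone helper (the name and row layout are hypothetical, mirroring the SELECT against the user table; an empty result means the replication user does not exist on the candidate):
def _hashes_match(stored_rows, given_pass_hash):
    # stored_rows[0][3] holds the stored password hash when present.
    stored_hash = stored_rows[0][3] if stored_rows else ""
    return stored_hash == given_pass_hash
assert _hashes_match([("rpl", "%", "N", "*AB12")], "*AB12")
assert not _hashes_match([], "*AB12")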
slave_qry = slave_candidate.exec_query # Use the correct query for server version (changed for 5.7.6) if slave_candidate.check_version_compat(5, 7, 6): query = _SELECT_RPL_USER_PASS_QUERY_5_7_6 else: query = _SELECT_RPL_USER_PASS_QUERY passwd_hash = slave_qry(query.format(user=user, host=m_candidate.host)) # If the user does not exist, passwd_hash will be an empty result. if passwd_hash: passwd_hash = passwd_hash[0][3] else: passwd_hash = "" # now hash the given rpl password from --rpl-user. # TODO: Remove the use of PASSWORD(), deprecated as of 5.7.6. rpl_master_pass = slave_qry("SELECT PASSWORD('%s');" % passwd) rpl_master_pass = rpl_master_pass[0][0] if (rpl_master_pass != passwd_hash): if passwd == '': msg = ("The specified replication user is using a " "password (but none was specified).\n" "Use the --force option to force the use of " "the user specified with --rpl-user and no " "password.") else: msg = ("The specified replication user is using a " "different password than the one specified.\n" "Use the --force option to force the use of " "the user specified with --rpl-user and the new " "password.") self._report("ERROR: %s" % msg, logging.ERROR) return # Use the correct query for server (changed for 5.7.6). if self.master.check_version_compat(5, 7, 6): query = _UPDATE_RPL_USER_QUERY_5_7_6 else: query = _UPDATE_RPL_USER_QUERY self.master.exec_query(query.format(user=user, passwd=passwd)) if self.verbose: self._report("# Creating replication user if it does not exist.") res = m_candidate.create_rpl_user(m_candidate.host, m_candidate.port, user, passwd, ssl=self.ssl) if not res[0]: print("# ERROR: {0}".format(res[1])) self._report(res[1], logging.CRITICAL, False) # Call exec_before script - display output if verbose on self.run_script(self.before_script, False, [self.master.host, self.master.port, m_candidate.host, m_candidate.port]) if self.verbose: self._report("# Blocking writes on master.") lock_options = { 'locking': 'flush', 'verbosity': 3 if self.verbose else self.verbosity, 'silent': self.quiet, 'rpl_mode': "master", } lock_ftwrl = Lock(self.master, [], lock_options) self.master.set_read_only(True) # Wait for all slaves to catch up. gtid_enabled = self.master.supports_gtid() == "ON" if gtid_enabled: master_gtid = self.master.exec_query(_GTID_EXECUTED) self._report("# Waiting for slaves to catch up to old master.") for slave_dict in self.slaves: master_info = self.master.get_status()[0] slave = slave_dict['instance'] # skip dead or zombie slaves, and print warning if not slave or not slave.is_alive(): if self.verbose: msg = ("Slave {0}:{1} skipped (not " "reachable)").format(slave_dict['host'], slave_dict['port']) print("# WARNING: {0}".format(msg)) self._report(msg, logging.WARNING, False) continue if gtid_enabled: print_query = self.verbose and not self.quiet res = slave.wait_for_slave_gtid(master_gtid, self.timeout, print_query) else: res = slave.wait_for_slave(master_info[0], master_info[1], self.timeout) if not res: msg = "Slave %s:%s did not catch up to the master."
% \ (slave_dict['host'], slave_dict['port']) if not self.force: self._report(msg, logging.CRITICAL) raise UtilRplError(msg) else: self._report("# %s" % msg) # Stop all slaves self._report("# Stopping slaves.") self.run_cmd_on_slaves("stop", not self.verbose) # Unblock master self.master.set_read_only(False) lock_ftwrl.unlock() # Make master a slave (if specified) if self.options.get("demote", False): self._report("# Demoting old master to be a slave to the " "new master.") slave = self._change_role(self.master) slave.stop() slave_dict = { 'host': self.master.host, # host name for slave 'port': self.master.port, # port for slave 'instance': slave, # Slave class instance } self.slaves.append(slave_dict) # Move candidate slave to master position in lists self.master_vals = m_candidate.get_connection_values() self.master = m_candidate # Remove slave from list of slaves self.remove_slave({'host': m_candidate.host, 'port': m_candidate.port, 'instance': m_candidate}) # Make the new master forget it was a slave using slave methods s_candidate = self._change_role(m_candidate) s_candidate.reset_all() # Switch all slaves to new master self._report("# Switching slaves to new master.") new_master_info = m_candidate.get_status()[0] master_values = { 'Master_Host': m_candidate.host, 'Master_Port': m_candidate.port, 'Master_User': user, 'Master_Password': passwd, 'Master_Log_File': new_master_info[0], 'Read_Master_Log_Pos': new_master_info[1], } # Use the options SSL certificates if defined, # else use the master SSL certificates if defined. if self.ssl: master_values['Master_SSL_Allowed'] = 1 if self.ssl_ca: master_values['Master_SSL_CA_File'] = self.ssl_ca if self.ssl_cert: master_values['Master_SSL_Cert'] = self.ssl_cert if self.ssl_key: master_values['Master_SSL_Key'] = self.ssl_key elif m_candidate.has_ssl: master_values['Master_SSL_Allowed'] = 1 master_values['Master_SSL_CA_File'] = m_candidate.ssl_ca master_values['Master_SSL_Cert'] = m_candidate.ssl_cert master_values['Master_SSL_Key'] = m_candidate.ssl_key for slave_dict in self.slaves: slave = slave_dict['instance'] # skip dead or zombie slaves if slave is None or not slave.is_alive(): if self.verbose: self._report("# Skipping CHANGE MASTER for {0}:{1} (not " "connected).".format(slave_dict['host'], slave_dict['port'])) continue if self.verbose: self._report("# Executing CHANGE MASTER on {0}:{1}" ".".format(slave_dict['host'], slave_dict['port'])) change_master = slave.make_change_master(False, master_values) if self.verbose: self._report("# {0}".format(change_master)) slave.exec_query(change_master) # Start all slaves self._report("# Starting all slaves.") self.run_cmd_on_slaves("start", not self.verbose) # Call exec_after script - display output if verbose on self.run_script(self.after_script, False, [self.master.host, self.master.port]) # Check all slaves for status, errors self._report("# Checking slaves for errors.") if not self._check_all_slaves(self.master): return False self._report("# Switchover complete.") return True def _change_role(self, server, slave=True): """Reverse role of Master and Slave classes This method can be used to get a Slave instance from a Master instance or a Master instance from a Slave instance.
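For example (illustrative only; assumes an existing Topology instance named topology and a connected Master instance named master_inst):

    slave_view = topology._change_role(master_inst, slave=True)
    # slave_view is a Slave connected to the same server, exposing
    # slave-side methods such as stop(), reset() and get_status().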
server[in] Server class instance slave[in] if True, create Slave class instance Default is True Return Slave or Master instance """ conn_dict = { 'conn_info': get_connection_dictionary(server), 'verbose': self.verbose, } if slave and type(server) != Slave: slave_conn = Slave(conn_dict) slave_conn.connect() return slave_conn if not slave and type(server) != Master: master_conn = Master(conn_dict) master_conn.connect() return master_conn return server def find_best_slave(self, candidates=None, check_master=True, strict=False): """Find the best slave This method checks each slave in the topology to determine if it is a viable slave for promotion. It returns the first slave that is determined to be eligible for promotion. The method uses the order of the slaves in the topology as specified by the slaves list to search for a best slave. If a candidate slave is provided, it is checked first. candidates[in] list of candidate connection dictionaries check_master[in] if True, check that slave is connected to the master Default is True strict[in] if True, use only the candidate list for slave election and fail if no candidates are viable. Default = False Returns dictionary = (host, port, instance) for 'best' slave, None = no candidate slaves found """ msg = "None of the candidates was the best slave." # Guard against the default candidates=None. for candidate in candidates or []: slave_dict = self.connect_candidate(candidate, False) slave = slave_dict['instance'] # Ignore dead or offline slaves if slave is None or not slave.is_alive(): continue slave_ok = self._check_candidate_eligibility(slave.host, slave.port, slave, check_master) if slave_ok is not None and slave_ok[0]: return slave_dict else: self._report("# Candidate %s:%s does not meet the " "requirements." % (slave.host, slave.port), logging.WARN) # If strict is on and we have found no viable candidates, return None if strict: self._report("ERROR: %s" % msg, logging.ERROR) return None if candidates is not None and len(candidates) > 0: self._report("WARNING: %s" % msg, logging.WARN) for slave_dict in self.slaves: s_host = slave_dict['host'] s_port = slave_dict['port'] slave = slave_dict['instance'] # skip dead or zombie slaves if slave is None or not slave.is_alive(): continue # Check eligibility try: slave_ok = self._check_candidate_eligibility(s_host, s_port, slave, check_master) if slave_ok is not None and slave_ok[0]: return slave_dict except UtilError, e: self._report("# Slave eliminated due to error: %s" % e.errmsg, logging.WARN) # Slave gone away, skip it. return None def failover(self, candidates, strict=False, stop_on_error=False): """Perform failover to best slave in a GTID-enabled topology. This method performs a failover to one of the candidates specified. If no candidates are specified, the method will use the list of slaves to choose a candidate. In either case, priority is given to the server listed first that meets the prerequisites - a sanity check ensures that the candidate's GTID_MODE matches that of the other slaves. In the event the candidates list is exhausted, it will use the slaves list to find a candidate. If no servers are viable, the method aborts. If the strict parameter is True, the search is limited to the candidates list. Once a candidate is selected, the candidate is prepared to become the new master by collecting any missing GTIDs by being made a slave to each of the other slaves. Once prepared, the before script is run to trigger applications, then all slaves are connected to the new master.
Once complete, all slaves are started, the after script is run to trigger applications, and the slaves are checked for errors. candidates[in] list of candidate slave connection dictionaries strict[in] if True, use only the candidate list for slave election and fail if no candidates are viable. Default = False stop_on_error[in] If True, stop failover when errors are found. Default = False (do not stop on errors). Returns bool - True if successful, raises exception on failure if force is False """ # Get best slave from list of candidates new_master_dict = self.find_best_slave(candidates, False, strict) if new_master_dict is None: msg = "No candidate found for failover." self._report(msg, logging.CRITICAL) raise UtilRplError(msg) new_master = new_master_dict['instance'] # All servers' GTID support must match the candidate gtid_mode = new_master.supports_gtid() if gtid_mode != "ON": msg = "Failover requires all servers support " + \ "global transaction ids and have GTID_MODE=ON" self._report(msg, logging.CRITICAL) raise UtilRplError(msg) for slave_dict in self.slaves: # Ignore dead or offline slaves slave = slave_dict['instance'] # skip dead or zombie slaves if slave is None or not slave.is_alive(): continue if slave.supports_gtid() != gtid_mode: msg = "Cannot perform failover unless all " + \ "slaves support GTIDs and GTID_MODE=ON" self._report(msg, logging.CRITICAL) raise UtilRplError(msg) # We must also ensure the new master and all remaining slaves # have the latest GTID support. new_master.check_gtid_version() for slave_dict in self.slaves: # Ignore dead or offline slaves slave = slave_dict['instance'] # skip dead or zombie slaves if slave is None or not slave.is_alive(): continue slave.check_gtid_version() host = new_master_dict['host'] port = new_master_dict['port'] # Use try block in case master class has gone away. try: old_host = self.master.host old_port = self.master.port except: old_host = "UNKNOWN" old_port = "UNKNOWN" self._report("# Candidate slave %s:%s will become the new master." % (host, port)) user, passwd = self._get_rpl_user(self._change_role(new_master)) # Check slaves for errors that might result in an unstable topology self._report("# Checking slaves status (before failover).") self._check_slaves_status(stop_on_error) # Prepare candidate self._report("# Preparing candidate for failover.") self._prepare_candidate_for_failover(new_master, user, passwd) # Create replication user on candidate. self._report("# Creating replication user if it does not exist.") # Need Master class instance to check master and replication user self.master = self._change_role(new_master, False) res = self.master.create_rpl_user(host, port, user, passwd, ssl=self.ssl) if not res[0]: print("# ERROR: {0}".format(res[1])) self._report(res[1], logging.CRITICAL, False) # Call exec_before script - display output if verbose on self.run_script(self.before_script, False, [old_host, old_port, host, port]) # Stop all slaves self._report("# Stopping slaves.") self.run_cmd_on_slaves("stop", not self.verbose) # Take the new master out of the slaves list. self.remove_slave(new_master_dict) self._report("# Switching slaves to new master.") for slave_dict in self.slaves: slave = slave_dict['instance'] # skip dead or zombie slaves if slave is None or not slave.is_alive(): continue slave.switch_master(self.master, user, passwd, False, None, None, self.verbose and not self.quiet) # Clean previous replication settings on the new master.
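# Illustrative aside (hypothetical helper name): the cleanup performed
# next is equivalent to the following, using the same exec_query API:
def _forget_master(server):
    # Stop the slave threads, then discard the old master metadata.
    server.exec_query("STOP SLAVE")
    server.exec_query("RESET SLAVE ALL")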
self._report("# Disconnecting new master as slave.") # Make sure the new master is not acting as a slave (STOP SLAVE). self.master.exec_query("STOP SLAVE") # Execute RESET SLAVE ALL on the new master. if self.verbose and not self.quiet: self._report("# Execute on {0}:{1}: " "RESET SLAVE ALL".format(self.master.host, self.master.port)) self.master.exec_query("RESET SLAVE ALL") # Starting all slaves self._report("# Starting slaves.") self.run_cmd_on_slaves("start", not self.verbose) # Call exec_after script - display output if verbose on self.run_script(self.after_script, False, [old_host, old_port, host, port]) # Check slaves for errors self._report("# Checking slaves for errors.") if not self._check_all_slaves(self.master): return False self._report("# Failover complete.") return True def get_servers_with_different_sql_mode(self, look_for): """Returns a tuple of two list with all the server instances in the Topology. The first list is the group of server that have the sql_mode given in look_for, the second list is the group of server that does not have this sql_mode. look_for[in] The sql_mode to search for. Returns tuple of Lists - the group of servers instances that have the SQL mode given in look_for, and a group which sql_mode differs from the look_for or an empty list. """ # Fill a dict with keys from the SQL modes names and as items the # servers with the same sql_mode. look_for_list = [] inconsistent_list = [] # Get Master sql_mode if given and clasify it. if self.master is not None: master_sql_mode = self.master.select_variable("SQL_MODE") if look_for in master_sql_mode: look_for_list.append(self.master) else: inconsistent_list.append(self.master) # Fill the lists with the slaves deppending of his sql_mode. for slave_dict in self.slaves: slave = slave_dict['instance'] slave_sql_mode = slave.select_variable("SQL_MODE") if look_for in slave_sql_mode: look_for_list.append(slave) else: inconsistent_list.append(slave) return look_for_list, inconsistent_list mysql-utilities-1.6.4/mysql/utilities/common/table.py0000755001577100752670000016467112747670311022525 0ustar pb2usercommon# # Copyright (c) 2010, 2016, Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This module contains abstractions of a MySQL table and an index. 
""" import multiprocessing import sys from itertools import izip from mysql.utilities.exception import UtilError, UtilDBError from mysql.connector.conversion import MySQLConverter from mysql.utilities.common.format import print_list from mysql.utilities.common.database import Database from mysql.utilities.common.lock import Lock from mysql.utilities.common.pattern_matching import parse_object_name from mysql.utilities.common.server import Server from mysql.utilities.common.sql_transform import (convert_special_characters, quote_with_backticks, remove_backtick_quoting, is_quoted_with_backticks) # Constants _MAXPACKET_SIZE = 1024 * 1024 _MAXBULK_VALUES = 25000 _MAXTHREADS_INSERT = 6 _MAXROWS_PER_THREAD = 100000 _MAXAVERAGE_CALC = 100 _FOREIGN_KEY_QUERY = """ SELECT CONSTRAINT_NAME, COLUMN_NAME, REFERENCED_TABLE_SCHEMA, REFERENCED_TABLE_NAME, REFERENCED_COLUMN_NAME FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE WHERE TABLE_SCHEMA = '%s' AND TABLE_NAME = '%s' AND REFERENCED_TABLE_SCHEMA IS NOT NULL """ class Index(object): """ The Index class encapsulates an index for a given table as defined by the output of SHOW INDEXES FROM. The class has the following capabilities: - Check for duplicates - Create DROP statement for index - Print index CREATE statement """ def __init__(self, db, index_tuple, verbose=False, sql_mode=''): """Constructor db[in] Name of database index_tuple[in] A tuple from the get_tbl_indexes() result set verbose[in] print extra data during operations (optional) default value = False """ # Initialize and save values self.db = db self.sql_mode = sql_mode self.q_db = quote_with_backticks(db, self.sql_mode) self.verbose = verbose self.columns = [] self.table = index_tuple[0] self.q_table = quote_with_backticks(index_tuple[0], self.sql_mode) self.unique = not int(index_tuple[1]) self.name = index_tuple[2] self.q_name = quote_with_backticks(index_tuple[2], self.sql_mode) col = (index_tuple[4], index_tuple[7]) self.columns.append(col) self.accept_nulls = True if index_tuple[9] else False self.type = index_tuple[10] self.compared = False # mark as compared for speed self.duplicate_of = None # saves duplicate index if index_tuple[7] > 0: self.column_subparts = True # check subparts e.g. a(20) else: self.column_subparts = False @staticmethod def __cmp_columns(col_a, col_b): """Compare two columns on name and subpart lengths if present col_a[in] First column to compare col_b[in] Second column to compare Returns True if col_a has the same name as col_b and if the subparts are col_a.sub <= col_b.sub. """ sz_this = col_a[1] sz_that = col_b[1] # if column has the same name if col_a[0] == col_b[0]: # if they both have sub_parts, compare them if sz_this and sz_that: if sz_this <= sz_that: return True else: return False # if this index has a sub_part and the other does # not, it is potentially redundant elif sz_this and sz_that is None: return True # if neither have sub_parts, it is a match elif sz_this is None and sz_that is None: return True else: return False # no longer a duplicate def __check_column_list(self, index): """Compare the column list of this index with another index[in] Instance of Index to compare Returns True if column list is a subset of index. """ num_cols_this = len(self.columns) num_cols_that = len(index.columns) same_size = num_cols_this == num_cols_that if self.type == "BTREE": indexes = izip(self.columns, index.columns) for idx_pair in indexes: if not self.__cmp_columns(*idx_pair): return False # All index pairs are the same, so return index with smaller number # of columns. 
return num_cols_this <= num_cols_that else: # HASH, RTREE, FULLTEXT if self.type != "FULLTEXT": # For RTREE or HASH type indexes, an index is redundant if # it has the exact same columns in the exact same order. indexes = izip(self.columns, index.columns) return (same_size and all((self.__cmp_columns(*idx_pair) for idx_pair in indexes))) else: # FULLTEXT index # A FULLTEXT index A is redundant with respect to a FULLTEXT # index B if the columns of A are a subset of B's columns; the # order does not matter. return all(any(self.__cmp_columns(col, icol) for icol in index.columns) for col in self.columns) def is_duplicate(self, index): """Compare this index with another index[in] Instance of Index to compare Returns True if this index is a subset of the Index presented. """ # Don't compare the same index - no two indexes can have the same name if self.name == index.name: return False else: return self.__check_column_list(index) def contains_columns(self, col_names): """Check if the current index contains the columns of the given index. Returns True if it contains all the columns of the given index, otherwise False. """ if len(self.columns) < len(col_names): # If it has fewer columns than the given index, it does not contain all of them. return False else: this_col_names = [col[0] for col in self.columns] # Check if all index columns are included in the current one. for col_name in col_names: if col_name not in this_col_names: return False # found one column not included. # Passed previous verification; it contains all the columns of the given index. return True def add_column(self, column, sub_part, accept_null): """Add a column to the list of columns for this index column[in] Column to add sub_part[in] Sub part of column, e.g. a(20) accept_null[in] True to indicate the column accepts nulls """ col = (column, sub_part) if sub_part > 0: self.column_subparts = True if accept_null: self.accept_nulls = True self.columns.append(col) def get_drop_statement(self): """Get the drop statement for this index Note: Ignores PRIMARY key indexes. Returns the DROP statement for this index. """ if self.name == "PRIMARY": return None query_str = "ALTER TABLE {db}.{table} DROP INDEX {name}".format( db=self.q_db, table=self.q_table, name=self.q_name ) return query_str def get_remove_columns_statement(self, col_names): """Get the ALTER TABLE statement to remove columns for this index. col_names[in] list of column names to remove from the index. Returns the ALTER TABLE statement (DROP/ADD) to remove the given column names from the index. """ # Create the new columns list for the index. idx_cols = [col[0] for col in self.columns if col[0] not in col_names] if not idx_cols: # Return a DROP statement if no columns are left. query_str = "ALTER TABLE {db}.{table} DROP INDEX {name}".format( db=self.q_db, table=self.q_table, name=self.q_name ) else: # Otherwise, return a DROP/ADD statement with remaining columns. idx_cols_str = ', '.join(idx_cols) query_str = ("ALTER TABLE {db}.{table} DROP INDEX {name}, " "ADD INDEX {name} ({cols})".format(db=self.q_db, table=self.q_table, name=self.q_name, cols=idx_cols_str)) return query_str def __get_column_list(self, backtick_quoting=True): """Get the column list for an index This method is used to print the CREATE and DROP statements. backtick_quoting[in] Indicates if the column names are to be quoted with backticks or not. By default: True. Returns a string representing the list of columns for a column list. e.g.
'a, b(10), c' """ col_list = [] for col in self.columns: name, sub_part = (col[0], col[1]) if backtick_quoting: name = quote_with_backticks(name, self.sql_mode) if sub_part > 0: col_str = "{0}({1})".format(name, sub_part) else: col_str = name col_list.append(col_str) return ', '.join(col_list) def print_index_sql(self): """Print the CREATE INDEX for indexes and ALTER TABLE for a primary key """ if self.name == "PRIMARY": print("ALTER TABLE {db}.{table} ADD PRIMARY KEY ({cols})" "".format(db=self.q_db, table=self.q_table, cols=self.__get_column_list())) else: create_str = ("CREATE {unique}{fulltext}INDEX {name} ON " "{db}.{table} ({cols}) {using}") unique_str = 'UNIQUE ' if self.unique else '' fulltext_str = 'FULLTEXT ' if self.type == 'FULLTEXT' else '' if (self.type == "BTREE") or (self.type == "RTREE"): using_str = 'USING {0}'.format(self.type) else: using_str = '' print(create_str.format(unique=unique_str, fulltext=fulltext_str, name=self.q_name, db=self.q_db, table=self.q_table, cols=self.__get_column_list(), using=using_str)) def get_row(self, verbosity=0): """Return index information as a list of columns for tabular output. """ cols = self.__get_column_list(backtick_quoting=False) if verbosity > 0: return (self.db, self.table, self.name, self.type, self.unique, self.accept_nulls, cols) return (self.db, self.table, self.name, self.type, cols) class Table(object): """ The Table class encapsulates a table for a given database. The class has the following capabilities: - Check to see if the table exists - Check indexes for duplicates and redundancies - Print list of indexes for the table - Extract table data - Import table data - Copy table data """ def __init__(self, server1, name, options=None): """Constructor server[in] A Server object name[in] Name of table in the form (db.table) options[in] options for class: verbose, quiet, get_cols, quiet If True, do not print information messages verbose print extra data during operations (optional) (default is False) get_cols If True, get the column metadata on construction (default is False) """ if options is None: options = {} self.verbose = options.get('verbose', False) self.quiet = options.get('quiet', False) self.server = server1 # Get sql_mode set on server self.sql_mode = self.server.select_variable("SQL_MODE") # Keep table identifier considering backtick quotes if is_quoted_with_backticks(name, self.sql_mode): self.q_table = name self.q_db_name, self.q_tbl_name = parse_object_name(name, self.sql_mode) self.db_name = remove_backtick_quoting(self.q_db_name, self.sql_mode) self.tbl_name = remove_backtick_quoting(self.q_tbl_name, self.sql_mode) self.table = ".".join([self.db_name, self.tbl_name]) else: self.table = name self.db_name, self.tbl_name = parse_object_name(name, self.sql_mode) self.q_db_name = quote_with_backticks(self.db_name, self.sql_mode) self.q_tbl_name = quote_with_backticks(self.tbl_name, self.sql_mode) self.q_table = ".".join([self.q_db_name, self.q_tbl_name]) self.obj_type = "TABLE" self.pri_idx = None # We store each type of index in a separate list to make it easier # to manipulate self.btree_indexes = [] self.hash_indexes = [] self.rtree_indexes = [] self.fulltext_indexes = [] self.unique_not_null_indexes = None self.text_columns = [] self.blob_columns = [] self.bit_columns = [] self.column_format = None self.column_names = [] self.column_name_type = [] self.q_column_names = [] self.indexes_q_names = [] if options.get('get_cols', False): self.get_column_metadata() self.dest_vals = None self.storage_engine = None # Get 
max allowed packet res = self.server.exec_query("SELECT @@session.max_allowed_packet") if res: self.max_packet_size = res[0][0] else: self.max_packet_size = _MAXPACKET_SIZE # Watch for invalid values if self.max_packet_size > _MAXPACKET_SIZE: self.max_packet_size = _MAXPACKET_SIZE self._insert = "INSERT INTO %s.%s VALUES " self.query_options = { # Used for skipping fetch of rows 'fetch': False } def exists(self, tbl_name=None): """Check to see if the table exists tbl_name[in] table name (db.table) (optional) If omitted, operation is performed on the class instance table name. return True = table exists, False = table does not exist """ db, table = (None, None) if tbl_name: db, table = parse_object_name(tbl_name, self.sql_mode) else: db = self.db_name table = self.tbl_name res = self.server.exec_query("SELECT TABLE_NAME " + "FROM INFORMATION_SCHEMA.TABLES " + "WHERE TABLE_SCHEMA = '%s'" % db + " and TABLE_NAME = '%s'" % table) return (res is not None and len(res) >= 1) def get_column_metadata(self, columns=None): """Get information about the table for the bulk insert operation. This method builds lists that describe the metadata of the table. This includes lists for: column names column format for building VALUES clause blob fields - for use in generating INSERT/UPDATE for blobs text fields - for use in checking for single quotes columns[in] if None, use EXPLAIN else use column list. """ if columns is None: columns = self.server.exec_query("explain %s" % self.q_table) stop = len(columns) self.column_names = [] self.q_column_names = [] col_format_values = [''] * stop if columns is not None: for col in range(0, stop): if is_quoted_with_backticks(columns[col][0], self.sql_mode): self.column_names.append( remove_backtick_quoting(columns[col][0], self.sql_mode)) self.q_column_names.append(columns[col][0]) else: self.column_names.append(columns[col][0]) self.q_column_names.append( quote_with_backticks(columns[col][0], self.sql_mode)) col_type = columns[col][1].lower() if ('char' in col_type or 'enum' in col_type or 'set' in col_type or 'binary' in col_type): self.text_columns.append(col) col_format_values[col] = "'%s'" elif 'blob' in col_type or 'text'in col_type: self.blob_columns.append(col) col_format_values[col] = "%s" elif "date" in col_type or "time" in col_type: col_format_values[col] = "'%s'" elif "bit" in col_type: self.bit_columns.append(col) col_format_values[col] = "%d" else: col_format_values[col] = "%s" self.column_format = "%s%s%s" % \ (" (", ', '.join(col_format_values), ")") def get_col_names(self, quote_backticks=False): """Get column names for the export operation. quote_backticks[in] If True the column names will be quoted with backticks. Default is False. Return (list) column names """ if self.column_format is None: self.column_names = [] self.q_column_names = [] rows = self.server.exec_query("explain {0}".format(self.q_table)) for row in rows: self.column_names.append(row[0]) self.q_column_names.append(quote_with_backticks(row[0], self.sql_mode)) return self.q_column_names if quote_backticks else self.column_names def get_col_names_types(self, quote_backticks=False): """Get a list of tuples of column name and type. quote_backticks[in] If True the column name will be quoted with backticks. Default is False. 
Return (list) of tuples (column name, type) """ self.column_name_type = [] rows = self.server.exec_query("explain {0}".format(self.q_table)) for row in rows: if quote_backticks: self.column_name_type.append( [quote_with_backticks(row[0], self.sql_mode)] + list(row[1:]) ) else: self.column_name_type.append(row) return self.column_name_type def has_index(self, index_q_name): """A method to determine if this table has an index with a given name. index_q_name[in] the name of the index (must be quoted). Returns True if this Table has an index with the given name, otherwise False. """ if [idx_q_name for idx_q_name in self.indexes_q_names if idx_q_name == index_q_name]: return True return False def get_not_null_unique_indexes(self, refresh=False): """Get all the unique indexes whose columns do not accept null values. refresh[in] Boolean value used to force the method to read index information directly from the server, instead of using cached values. Returns list of indexes. """ # First check if the instance variable exists. if self.unique_not_null_indexes is None or refresh: # Get the indexes for the table. try: self.get_indexes() except UtilDBError: # Table may not exist yet. Happens on import operations. pass # Now for each of them, check if they are UNIQUE and NOT NULL. no_null_idxes = [] no_null_idxes.extend( [idx for idx in self.btree_indexes if not idx.accept_nulls and idx.unique] ) no_null_idxes.extend( [idx for idx in self.hash_indexes if not idx.accept_nulls and idx.unique] ) no_null_idxes.extend( [idx for idx in self.rtree_indexes if not idx.accept_nulls and idx.unique] ) no_null_idxes.extend( [idx for idx in self.fulltext_indexes if not idx.accept_nulls and idx.unique] ) self.unique_not_null_indexes = no_null_idxes return self.unique_not_null_indexes def _build_update_blob(self, row, new_db, name): """Build an UPDATE statement to update blob fields. row[in] a row to process new_db[in] new database name name[in] name of the table Returns UPDATE string """ if self.column_format is None: self.get_column_metadata() blob_insert = "UPDATE %s.%s SET " % (new_db, name) where_values = [] do_commas = False has_data = False stop = len(row) for col in range(0, stop): col_name = self.q_column_names[col] if col in self.blob_columns: if row[col] is not None and len(row[col]) > 0: if do_commas: blob_insert += ", " blob_insert += "%s = " % col_name + "%s" % \ MySQLConverter().quote( convert_special_characters(row[col])) has_data = True do_commas = True else: # Convert None values to NULL (not '' to NULL) if row[col] is None: value = 'NULL' else: value = "'{0}'".format(row[col]) where_values.append("{0} = {1}".format(col_name, value)) if has_data: return "{0} WHERE {1};".format(blob_insert, " AND ".join(where_values)) return None def _build_insert_blob(self, row, new_db, tbl_name): """Build an INSERT statement for the given row. row[in] a row to process new_db[in] new database name tbl_name[in] name of the table Returns INSERT string. """ if self.column_format is None: self.get_column_metadata() converter = MySQLConverter() row_vals = [] # Deal with blob, special characters and NULL values.
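# Illustrative aside: the None-to-NULL handling in the loop below works
# by first emitting the string 'NULL' and later stripping the quotes
# added by the column format; a standalone example (table name made up):
_vals = ('ab', 'NULL')  # None already mapped to the string "NULL"
_stmt = "INSERT INTO db.t VALUES ('%s', '%s');" % _vals
assert _stmt.replace("'NULL'", "NULL") == \
    "INSERT INTO db.t VALUES ('ab', NULL);"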
for index, column in enumerate(row): if index in self.blob_columns: row_vals.append(converter.quote( convert_special_characters(column))) elif index in self.text_columns: if column is None: row_vals.append("NULL") else: row_vals.append(convert_special_characters(column)) elif index in self.bit_columns: if column is None: row_vals.append("NULL") else: row_vals.append(converter._BIT_to_python(column)) else: if column is None: row_vals.append("NULL") else: row_vals.append(column) # Create the insert statement. insert_stm = ("INSERT INTO {0}.{1} VALUES {2};" "".format(new_db, tbl_name, self.column_format % tuple(row_vals))) # Replace 'NULL' occurrences with NULL values. insert_stm = insert_stm.replace("'NULL'", "NULL") return insert_stm def get_column_string(self, row, new_db, skip_blobs=False): """Return a formatted list of column data. row[in] a row to process new_db[in] new database name skip_blobs[in] boolean value, if True, blob columns are skipped Returns (string) column list """ if self.column_format is None: self.get_column_metadata() blob_inserts = [] values = list(row) is_blob_insert = False # find if we have some unique column indexes unique_indexes = len(self.get_not_null_unique_indexes()) # If all columns are blobs or there aren't any UNIQUE NOT NULL indexes # then rows won't be correctly copied using the update statement, # so we must use insert statements instead. if not skip_blobs and (len(self.blob_columns) == len(self.column_names) or self.blob_columns and not unique_indexes): blob_inserts.append(self._build_insert_blob(row, new_db, self.q_tbl_name)) is_blob_insert = True else: # Find blobs if self.blob_columns: # Save blob updates for later... blob = self._build_update_blob(row, new_db, self.q_tbl_name) if blob is not None: blob_inserts.append(blob) for col in self.blob_columns: values[col] = "NULL" if not is_blob_insert: # Replace single quotes located in the value for a text field with # the correct special character escape sequence. This fixes SQL # errors related to using single quotes in a string value that is # single quoted. For example, 'this' is it' is changed to # 'this\' is it'. for col in self.text_columns: # Check if the value is not None before replacing quotes if values[col]: # Apply escape sequences to special characters values[col] = convert_special_characters(values[col]) for col in self.bit_columns: if values[col] is not None: # Convert BIT to INTEGER for dump. values[col] = MySQLConverter()._BIT_to_python(values[col]) # Build string (add quotes to "string" like types) val_str = self.column_format % tuple(values) # Change 'None' occurrences with "NULL" val_str = val_str.replace(", None", ", NULL") val_str = val_str.replace("(None", "(NULL") val_str = val_str.replace(", 'None'", ", NULL") val_str = val_str.replace("('None'", "(NULL") else: val_str = None return val_str, blob_inserts def make_bulk_insert(self, rows, new_db, columns_names=None, skip_blobs=False): """Create bulk insert statements for the data Reads data from a table (rows) and builds group INSERT statements for bulk inserts. Note: This method does not print any information to stdout. 
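For illustration (hypothetical values), three rows for table db1.t1 may be grouped into a single statement such as:

    INSERT INTO db1.t1 VALUES (1, 'a'), (2, 'b'), (3, 'c')

A new statement is started once _MAXBULK_VALUES rows have been buffered or the next row would push the statement past max_allowed_packet minus a small safety margin (512 bytes).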
rows[in] a list of rows to process new_db[in] new database name skip_blobs[in] boolean value, if True, blob columns are skipped Returns (tuple) - (bulk insert statements, blob data inserts) """ if self.column_format is None: self.get_column_metadata() data_inserts = [] blob_inserts = [] row_count = 0 data_size = 0 val_str = None for row in rows: if row_count == 0: if columns_names: insert_str = "INSERT INTO {0}.{1} ({2}) VALUES ".format( new_db, self.q_tbl_name, ", ".join(columns_names) ) else: insert_str = self._insert % (new_db, self.q_tbl_name) if val_str: row_count += 1 insert_str += val_str data_size = len(insert_str) col_data = self.get_column_string(row, new_db, skip_blobs) if len(col_data[1]) > 0: blob_inserts.extend(col_data[1]) if col_data[0]: val_str = col_data[0] row_size = len(val_str) next_size = data_size + row_size + 3 if ((row_count >= _MAXBULK_VALUES) or (next_size > (int(self.max_packet_size) - 512))): # add to buffer data_inserts.append(insert_str) row_count = 0 else: row_count += 1 if row_count > 1: insert_str += ", " insert_str += val_str data_size += row_size + 3 if row_count > 0: data_inserts.append(insert_str) return data_inserts, blob_inserts def get_storage_engine(self): """Get the storage engine (in UPPERCASE) for the table. Returns the name in UPPERCASE of the storage engine use for the table or None if the information is not found. """ self.server.exec_query("USE {0}".format(self.q_db_name), self.query_options) res = self.server.exec_query( "SHOW TABLE STATUS WHERE name = '{0}'".format(self.tbl_name) ) try: # Return store engine converted to UPPER cases. return res[0][1].upper() if res[0][1] else None except IndexError: # Return None if table status information is not available. return None def get_segment_size(self, num_conn=1): """Get the segment size based on number of connections (threads). num_conn[in] Number of threads(connections) to use Default = 1 (one large segment) Returns (int) segment_size Note: if num_conn <= 1 - returns number of rows """ # Get number of rows num_rows = 0 try: res = self.server.exec_query("USE %s" % self.q_db_name, self.query_options) except: pass res = self.server.exec_query("SHOW TABLE STATUS LIKE '%s'" % self.tbl_name) if res: num_rows = int(res[0][4]) if num_conn <= 1: return num_rows # Calculate number of threads and segment size to fetch thread_limit = num_conn if thread_limit > _MAXTHREADS_INSERT: thread_limit = _MAXTHREADS_INSERT if num_rows > (_MAXROWS_PER_THREAD * thread_limit): max_threads = thread_limit else: max_threads = int(num_rows / _MAXROWS_PER_THREAD) if max_threads == 0: max_threads = 1 if max_threads > 1 and self.verbose: print "# Using multi-threaded insert option. Number of " \ "threads = %d." % max_threads return (num_rows / max_threads) + max_threads def _bulk_insert(self, rows, new_db, destination=None): """Import data using bulk insert Reads data from a table and builds group INSERT statements for writing to the destination server specified (new_db.name). This method is designed to be used in a thread for parallel inserts. As such, it requires its own connection to the destination server. Note: This method does not print any information to stdout. 
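For illustration (names hypothetical), insert_rows() drives this method and may wrap it in its own process:

    proc = multiprocessing.Process(target=tbl._bulk_insert,
                                   args=(rows, new_db, destination))
    proc.start()
    proc.join()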
rows[in] a list of rows to process new_db[in] new database name destination[in] the destination server """ if self.dest_vals is None: self.dest_vals = self.get_dest_values(destination) # Spawn a new connection server_options = { 'conn_info': self.dest_vals, 'role': "thread", } dest = Server(server_options) dest.connect() # Test if SQL_MODE is 'NO_BACKSLASH_ESCAPES' in the destination server if dest.select_variable("SQL_MODE") == "NO_BACKSLASH_ESCAPES": # Change temporarily the SQL_MODE in the destination server dest.exec_query("SET @@SESSION.SQL_MODE=''") # Issue the write lock lock_list = [("%s.%s" % (new_db, self.q_tbl_name), 'WRITE')] my_lock = Lock(dest, lock_list, {'locking': 'lock-all', }) # First, turn off foreign keys if turned on dest.disable_foreign_key_checks(True) if self.column_format is None: self.get_column_metadata() data_lists = self.make_bulk_insert(rows, new_db) insert_data = data_lists[0] blob_data = data_lists[1] # Insert the data first for data_insert in insert_data: try: dest.exec_query(data_insert, self.query_options) except UtilError, e: raise UtilError("Problem inserting data. " "Error = %s" % e.errmsg) # Now insert the blob data if there is any for blob_insert in blob_data: try: dest.exec_query(blob_insert, self.query_options) except UtilError, e: raise UtilError("Problem updating blob field. " "Error = %s" % e.errmsg) # Now, turn on foreign keys if they were on at the start dest.disable_foreign_key_checks(False) my_lock.unlock() del dest def insert_rows(self, rows, new_db, destination=None, spawn=False): """Insert rows in the table using bulk copy. This method opens a new connect to the destination server to insert the data with a bulk copy. If spawn is True, the method spawns a new process and returns it. This allows for using a multi-threaded insert which can be faster on some platforms. If spawn is False, the method will open a new connection to insert the data. num_conn[in] Number of threads(connections) to use for insert rows[in] List of rows to insert new_db[in] Rename the db to this name destination[in] Destination server Default = None (copy to same server) spawn[in] If True, spawn a new process for the insert Default = False Returns If spawn == True, process If spawn == False, None """ if self.column_format is None: self.get_column_metadata() if self.dest_vals is None: self.dest_vals = self.get_dest_values(destination) proc = None if spawn: proc = multiprocessing.Process(target=self._bulk_insert, args=(rows, new_db, destination)) else: self._bulk_insert(rows, new_db, destination) return proc def _clone_data(self, new_db): """Clone table data. This method will copy all of the data for a table from the old database to the new database on the same server. new_db[in] New database name for the table """ query_str = "INSERT INTO %s.%s SELECT * FROM %s.%s" % \ (new_db, self.q_tbl_name, self.q_db_name, self.q_tbl_name) if self.verbose and not self.quiet: print query_str # Disable foreign key checks to allow data to be copied without running # into foreign key referential integrity issues self.server.disable_foreign_key_checks(True) self.server.exec_query(query_str) self.server.disable_foreign_key_checks(False) def copy_data(self, destination, cloning=False, new_db=None, connections=1): """Retrieve data from a table and copy to another server and database. Reads data from a table and inserts the correct INSERT statements into the file provided. 
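For example (illustrative only; src and dst are assumed to be connected Server instances):

    tbl = Table(src, 'db1.t1')
    tbl.copy_data(dst, cloning=False, new_db='db1_copy',
                  connections=2)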
Note: if connections < 1 - retrieve the data one row at-a-time destination[in] Destination server cloning[in] If True, we are copying on the same server new_db[in] Rename the db to this name connections[in] Number of threads(connections) to use for insert """ # Get sql_mode from destination dest_sql_mode = destination.select_variable("SQL_MODE") if new_db is None: new_db = self.q_db_name else: # If need quote new_db identifier with backticks if not is_quoted_with_backticks(new_db, dest_sql_mode): new_db = quote_with_backticks(new_db, dest_sql_mode) num_conn = int(connections) if cloning: self._clone_data(new_db) else: # Read and copy the data pthreads = [] # Change the sql_mode if the mode is different on each server # and if "ANSI_QUOTES" is set in source, this is for # compatibility between the names. prev_sql_mode = '' if self.sql_mode != dest_sql_mode and \ "ANSI_QUOTES" in self.sql_mode: prev_sql_mode = self.server.select_variable("SQL_MODE") self.server.exec_query("SET @@SESSION.SQL_MODE=''") self.sql_mode = '' self.q_tbl_name = quote_with_backticks( self.tbl_name, self.sql_mode ) self.q_db_name = quote_with_backticks( self.db_name, self.sql_mode ) self.q_table = ".".join([self.q_db_name, self.q_tbl_name]) self.q_column_names = [] for column in self.column_names: self.q_column_names.append( quote_with_backticks(column, self.sql_mode) ) for rows in self.retrieve_rows(num_conn): p = self.insert_rows(rows, new_db, destination, num_conn > 1) if p is not None: p.start() pthreads.append(p) if num_conn > 1: # Wait for all threads to finish for p in pthreads: p.join() # restoring the previous sql_mode, changed if the sql_mode in both # servers is different and one is "ANSI_QUOTES" if prev_sql_mode: self.server.exec_query("SET @@SESSION.SQL_MODE={0}" "".format(prev_sql_mode)) self.sql_mode = prev_sql_mode self.q_tbl_name = quote_with_backticks( self.tbl_name, self.sql_mode ) self.q_db_name = quote_with_backticks( self.db_name, self.sql_mode ) self.q_table = ".".join([self.q_db_name, self.q_tbl_name]) for column in self.column_names: self.q_column_names.append( quote_with_backticks(column, self.sql_mode) ) def retrieve_rows(self, num_conn=1): """Retrieve the table data in rows. This method can be used to retrieve rows from a table as a generator specifying how many rows to retrieve at one time (segment_size is calculated based on number of rows / number of connections). Note: if num_conn < 1 - retrieve the data one row at-a-time num_conn[in] Number of threads(connections) to use Default = 1 (one large segment) Returns (yield) row data """ if num_conn > 1: # Only get the segment size when needed. segment_size = self.get_segment_size(num_conn) # Execute query to get all of the data cur = self.server.exec_query("SELECT * FROM {0}".format(self.q_table), self.query_options) while True: rows = None if num_conn < 1: rows = [] row = cur.fetchone() if row is None: raise StopIteration() rows.append(row) elif num_conn == 1: rows = cur.fetchall() yield rows raise StopIteration() else: rows = cur.fetchmany(segment_size) if not rows: raise StopIteration() if rows is None: raise StopIteration() yield rows cur.close() def get_dest_values(self, destination=None): """Get the destination connection values if not already set. 
destination[in] Connection values for destination server Returns connection values for destination if set or self.server """ # Get connection to database if destination is None: conn_val = { "host": self.server.host, "user": self.server.user, "passwd": self.server.passwd, "unix_socket": self.server.socket, "port": self.server.port } else: conn_val = { "host": destination.host, "user": destination.user, "passwd": destination.passwd, "unix_socket": destination.socket, "port": destination.port } return conn_val def get_tbl_indexes(self): """Return a result set containing all indexes for a given table Returns result set """ res = self.server.exec_query("SHOW INDEXES FROM %s" % self.q_table) return res def get_tbl_foreign_keys(self): """Return a result set containing all foreign keys for the table Returns result set """ res = self.server.exec_query(_FOREIGN_KEY_QUERY % (self.db_name, self.tbl_name)) return res @staticmethod def __append(indexes, index): """Encapsulated append() method to ensure the primary key index is placed at the front of the list. """ # Put the primary key first so that it can be compared to all indexes if index.name == "PRIMARY": indexes.insert(0, index) else: indexes.append(index) @staticmethod def __check_index(index, indexes, master_list): """Check a single index for duplicate or redundancy against a list of other Indexes. index[in] The Index to compare indexes[in] A list of Index instances to compare master_list[in] A list of know duplicate Index instances Returns a tuple of whether duplicates are found and if found the list of duplicate indexes for this table """ duplicates_found = False duplicate_list = [] if indexes and index: for idx in indexes: if index == idx: continue # Don't compare b == a when a == b has already occurred if not index.compared and idx.is_duplicate(index): # make sure we haven't already found this match if not idx.column_subparts: idx.compared = True if idx not in master_list: duplicates_found = True # PRIMARY key can be identified as redundant of an # unique index with more columns, in that case always # mark the other as the duplicate. if idx.name == "PRIMARY": index.duplicate_of = idx duplicate_list.append(index) else: idx.duplicate_of = index duplicate_list.append(idx) return (duplicates_found, duplicate_list) def __check_index_list(self, indexes): """Check a list of Index instances for duplicates. indexes[in] A list of Index instances to compare Returns a tuple of whether duplicates are found and if found the list of duplicate indexes for this table """ duplicates_found = False duplicate_list = [] # Caller must ensure there are at least 2 elements in the list. if len(indexes) < 2: return (False, None) for index in indexes: res = self.__check_index(index, indexes, duplicate_list) if res[0]: duplicates_found = True duplicate_list.extend(res[1]) return (duplicates_found, duplicate_list) def __check_clustered_index_list(self, indexes): """ Check for indexes containing the clustered index from the list. indexes[in] list of indexes instances to check. Returns the list of indexes that contain the clustered index or None (if none found). """ redundant_indexes = [] if not self.pri_idx: self.get_primary_index() pri_idx_cols = [col[0] for col in self.pri_idx] for index in indexes: if index.name == 'PRIMARY': # Skip primary key. continue elif index.contains_columns(pri_idx_cols): redundant_indexes.append(index) return redundant_indexes if redundant_indexes else [] def _get_index_list(self): """Get the list of indexes for a table. 
Returns list containing indexes. """ rows = self.get_tbl_indexes() return rows def get_primary_index(self): """Retrieve the primary index columns for this table. """ pri_idx = [] rows = self.server.exec_query("EXPLAIN {0}".format(self.q_table)) # Return False if no indexes found. if not rows: return pri_idx for row in rows: if row[3] == 'PRI': pri_idx.append(row) self.pri_idx = pri_idx return pri_idx def get_column_explanation(self, column_name): """Retrieve the explain description for the given column. """ column_exp = [] rows = self.server.exec_query("EXPLAIN {0}".format(self.q_table)) # Return False if no indexes found. if not rows: return column_exp for row in rows: if row[0] == column_name: column_exp.append(row) return column_exp def get_indexes(self): """Retrieve the indexes from the server and load them into lists based on type. Returns True - table has indexes, False - table has no indexes """ self.btree_indexes = [] self.hash_indexes = [] self.rtree_indexes = [] self.fulltext_indexes = [] self.indexes_q_names = [] if self.verbose: print "# Getting indexes for %s" % (self.table) rows = self._get_index_list() # Return False if no indexes found. if not rows: return False idx = None prev_name = "" for row in rows: if (row[2] != prev_name) or (prev_name == ""): prev_name = row[2] idx = Index(self.db_name, row, sql_mode=self.sql_mode) if idx.type == "BTREE": self.__append(self.btree_indexes, idx) elif idx.type == "HASH": self.__append(self.hash_indexes, idx) elif idx.type == "RTREE": self.__append(self.rtree_indexes, idx) else: self.__append(self.fulltext_indexes, idx) elif idx: idx.add_column(row[4], row[7], row[9]) self.indexes_q_names.append(quote_with_backticks(row[2], self.sql_mode)) return True def check_indexes(self, show_drops=False): """Check for duplicate or redundant indexes and display all matches show_drops[in] (optional) If True the DROP statements are printed Note: You must call get_indexes() prior to calling this method. If get_indexes() is not called, no duplicates will be found. """ dupes = [] res = self.__check_index_list(self.btree_indexes) # if there are duplicates, add them to the dupes list if res[0]: dupes.extend(res[1]) res = self.__check_index_list(self.hash_indexes) # if there are duplicates, add them to the dupes list if res[0]: dupes.extend(res[1]) res = self.__check_index_list(self.rtree_indexes) # if there are duplicates, add them to the dupes list if res[0]: dupes.extend(res[1]) res = self.__check_index_list(self.fulltext_indexes) # if there are duplicates, add them to the dupes list if res[0]: dupes.extend(res[1]) # Check if secondary keys contains the clustered index (i.e. Primary # key). In InnoDB, each record in a secondary index contains the # primary key columns. Therefore the use of keys that include the # primary key might be redundant. redundant_idxs = [] if not self.storage_engine: self.storage_engine = self.get_storage_engine() if self.storage_engine == 'INNODB': all_indexes = self.btree_indexes all_indexes.extend(self.hash_indexes) all_indexes.extend(self.rtree_indexes) all_indexes.extend(self.fulltext_indexes) redundant_idxs = self.__check_clustered_index_list(all_indexes) # Print duplicate and redundant keys on composite indexes. 
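# Illustrative usage of this check (server is assumed to be a
# connected Server instance):
#
#     tbl = Table(server, 'db1.t1')
#     if tbl.get_indexes():  # must be called before check_indexes()
#         tbl.check_indexes(show_drops=True)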
if len(dupes) > 0: plural_1, verb_conj, plural_2 = ( ('', 'is a', '') if len(dupes) == 1 else ('es', 'are', 's') ) print("# The following index{0} {1} duplicate{2} or redundant " "for table {3}:".format(plural_1, verb_conj, plural_2, self.table)) for index in dupes: print("#") index.print_index_sql() print("# may be redundant or duplicate of:") index.duplicate_of.print_index_sql() if show_drops: print("#\n# DROP statement{0}:\n#".format(plural_2)) for index in dupes: print("{0};".format(index.get_drop_statement())) print("#") # Print redundant indexes containing clustered key. if redundant_idxs: plural, verb_conj, plural_2 = ( ('', 's', '') if len(redundant_idxs) == 1 else ('es', '', 's') ) print("# The following index{0} for table {1} contain{2} the " "clustered index and might be redundant:".format(plural, self.table, verb_conj)) for index in redundant_idxs: print("#") index.print_index_sql() if show_drops: print("#\n# DROP/ADD statement{0}:\n#".format(plural_2)) # Get columns from primary key to be removed. pri_idx_cols = [col[0] for col in self.pri_idx] for index in redundant_idxs: print("{0};".format( index.get_remove_columns_statement(pri_idx_cols) )) print("#") if not self.quiet and not dupes and not redundant_idxs: print("# Table {0} has no duplicate nor redundant " "indexes.".format(self.table)) def show_special_indexes(self, fmt, limit, best=False): """Display a list of the best or worst queries for this table. This shows the best (first n) or worst (last n) performing queries for a given table. fmt[in] format out output = sql, table, tab, csv limit[in] number to limit the display best[in] (optional) if True, print best performing indexes if False, print worst performing indexes """ _QUERY = """ SELECT t.TABLE_SCHEMA AS `db`, t.TABLE_NAME AS `table`, s.INDEX_NAME AS `index name`, s.COLUMN_NAME AS `field name`, s.SEQ_IN_INDEX `seq in index`, s2.max_columns AS `# cols`, s.CARDINALITY AS `card`, t.TABLE_ROWS AS `est rows`, ROUND(((s.CARDINALITY / IFNULL( IF(t.TABLE_ROWS < s.CARDINALITY, s.CARDINALITY, t.TABLE_ROWS), 0.01)) * 100), 2) AS `sel_percent` FROM INFORMATION_SCHEMA.STATISTICS s INNER JOIN INFORMATION_SCHEMA.TABLES t ON s.TABLE_SCHEMA = t.TABLE_SCHEMA AND s.TABLE_NAME = t.TABLE_NAME INNER JOIN ( SELECT TABLE_SCHEMA, TABLE_NAME, INDEX_NAME, MAX(SEQ_IN_INDEX) AS max_columns FROM INFORMATION_SCHEMA.STATISTICS WHERE TABLE_SCHEMA = %s AND TABLE_NAME = %s AND INDEX_NAME != 'PRIMARY' GROUP BY TABLE_SCHEMA, TABLE_NAME, INDEX_NAME ) AS s2 ON s.TABLE_SCHEMA = s2.TABLE_SCHEMA AND s.TABLE_NAME = s2.TABLE_NAME AND s.INDEX_NAME = s2.INDEX_NAME WHERE t.TABLE_SCHEMA != 'mysql' AND t.TABLE_ROWS > 10 /* Only tables with some rows */ AND s.CARDINALITY IS NOT NULL AND (s.CARDINALITY / IFNULL( IF(t.TABLE_ROWS < s.CARDINALITY, s.CARDINALITY, t.TABLE_ROWS), 0.01)) <= 1.00 ORDER BY `sel_percent` """ query_options = { 'params': (self.db_name, self.tbl_name,) } rows = [] idx_type = "best" if not best: idx_type = "worst" if best: rows = self.server.exec_query(_QUERY + "DESC LIMIT %s" % limit, query_options) else: rows = self.server.exec_query(_QUERY + "LIMIT %s" % limit, query_options) if rows: print("#") if limit == 1: print("# Showing the {0} performing index from " "{1}:".format(idx_type, self.table)) else: print("# Showing the top {0} {1} performing indexes from " "{2}:".format(limit, idx_type, self.table)) print("#") cols = ("database", "table", "name", "column", "sequence", "num columns", "cardinality", "est. 
rows", "percent") print_list(sys.stdout, fmt, cols, rows) else: print("# WARNING: Not enough data to calculate " "best/worst indexes.") @staticmethod def __print_index_list(indexes, fmt, no_header=False, verbosity=0): """Print the list of indexes indexes[in] list of indexes to print fmt[in] format out output = sql, table, tab, csv no_header[in] (optional) if True, do not print the header """ if fmt == "sql": for index in indexes: index.print_index_sql() else: if verbosity > 0: cols = ("database", "table", "name", "type", "unique", "accepts nulls", "columns") else: cols = ("database", "table", "name", "type", "columns") rows = [] for index in indexes: rows.append(index.get_row(verbosity)) print_list(sys.stdout, fmt, cols, rows, no_header) def print_indexes(self, fmt, verbosity): """Print all indexes for this table fmt[in] format out output = sql, table, tab, csv """ print "# Showing indexes from %s:\n#" % (self.table) if fmt == "sql": self.__print_index_list(self.btree_indexes, fmt, verbosity=verbosity) self.__print_index_list(self.hash_indexes, fmt, False, verbosity=verbosity) self.__print_index_list(self.rtree_indexes, fmt, False, verbosity=verbosity) self.__print_index_list(self.fulltext_indexes, fmt, False, verbosity=verbosity) else: master_indexes = [] master_indexes.extend(self.btree_indexes) master_indexes.extend(self.hash_indexes) master_indexes.extend(self.rtree_indexes) master_indexes.extend(self.fulltext_indexes) self.__print_index_list(master_indexes, fmt, verbosity=verbosity) print "#" def has_primary_key(self): """Check to see if there is a primary key. Returns bool - True - a primary key was found, False - no primary key. """ primary_key = False rows = self._get_index_list() for row in rows: if row[2] == "PRIMARY": primary_key = True return primary_key def has_unique_key(self): """Check to see if there is a unique key. Returns bool - True - a unique key was found, False - no unique key. """ unique_key = False rows = self._get_index_list() for row in rows: if row[1] == '0': unique_key = True return unique_key mysql-utilities-1.6.4/mysql/utilities/common/charsets.py0000644001577100752670000001056112747670311023233 0ustar pb2usercommon# # Copyright (c) 2013, Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This module contains the charset_info class designed to read character set and collation information from /share/charsets/index.xml. """ import sys from mysql.utilities.common.format import print_list _CHARSET_INDEXES = ID, CHARACTER_SET_NAME, COLLATION_NAME, MAXLEN, IS_DEFAULT \ = range(0, 5) _CHARSET_QUERY = """ SELECT CL.ID,CL.CHARACTER_SET_NAME,CL.COLLATION_NAME,CS.MAXLEN, CL.IS_DEFAULT FROM INFORMATION_SCHEMA.CHARACTER_SETS CS, INFORMATION_SCHEMA.COLLATIONS CL WHERE CS.CHARACTER_SET_NAME=CL.CHARACTER_SET_NAME ORDER BY CHARACTER_SET_NAME """ class CharsetInfo(object): """ Read character set information for lookup. 
Methods include: - get_name(id) : get the character set name for a given id - get_collation(id) : get the collation name for a given id - get_name_by_collation(name) : given a collation, find the character set name - get_default_collation(id) : get the default collation for a character set - get_maxlen(id) : get the maximum byte length for a character set - print_charsets() : print the character set map """ def __init__(self, options=None): """Constructor options[in] dictionary of general options """ if options is None: options = {} self.verbosity = options.get("verbosity", 0) self.format = options.get("format", "grid") self.server = options.get("server", None) self.charset_map = None if self.server: self.charset_map = self.server.exec_query(_CHARSET_QUERY) def print_charsets(self): """Print the character set list """ print_list(sys.stdout, self.format, ["id", "character_set_name", "collation_name", "maxlen", "is_default"], self.charset_map) print len(self.charset_map), "rows in set." def get_name(self, chr_id): """Get the character set name for the given id chr_id[in] id for character set (as read from .frm file) Returns string - character set name or None if not found. """ for cs in self.charset_map: if int(chr_id) == int(cs[ID]): return cs[CHARACTER_SET_NAME] return None def get_collation(self, col_id): """Get the collation name for the given id col_id[in] id for collation (as read from .frm file) Returns string - collation name or None if not found. """ for cs in self.charset_map: if int(col_id) == int(cs[ID]): return cs[COLLATION_NAME] return None def get_name_by_collation(self, colname): """Get the character set name for the given collation colname[in] collation name Returns string - character set name or None if not found. """ for cs in self.charset_map: if cs[COLLATION_NAME] == colname: return cs[CHARACTER_SET_NAME] return None def get_default_collation(self, col_id): """Get the default collation for the character set col_id[in] id for collation (as read from .frm file) Returns string - default collation name or None if not found. """ # Exception for utf8 (collation id 83 is utf8_bin) if col_id == 83: return "utf8_bin" for cs in self.charset_map: if int(cs[ID]) == int(col_id) and cs[IS_DEFAULT].upper() == "YES": return cs[COLLATION_NAME] return None def get_maxlen(self, col_id): """Get the maximum length for the character set col_id[in] id for collation (as read from .frm file) Returns int - max length or 1 if not found. """ for cs in self.charset_map: if int(cs[ID]) == int(col_id): return int(cs[MAXLEN]) return int(1) mysql-utilities-1.6.4/mysql/utilities/common/parser.py0000644001577100752670000006655312747670311022717 0ustar pb2usercommon# # Copyright (c) 2011, 2015, Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """Module with parsers for General and Slow Query Log.
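Typical use (a sketch; the log path is hypothetical):

    log = GeneralQueryLog(open("/var/log/mysql/general.log"))
    for entry in log:
        print entry.session_id, entry.command, entry.argument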
""" import re import decimal import datetime from mysql.utilities.exception import LogParserError _DATE_PAT = r"\d{6}\s+\d{1,2}:\d{2}:\d{2}" _HEADER_VERSION_CRE = re.compile( r"(.+), Version: (\d+)\.(\d+)\.(\d+)(?:-(\S+))?") _HEADER_SERVER_CRE = re.compile(r"Tcp port:\s*(\d+)\s+Unix socket:\s+(.*)") _SLOW_TIMESTAMP_CRE = re.compile(r"#\s+Time:\s+(" + _DATE_PAT + r")") _SLOW_USERHOST_CRE = re.compile(r"#\s+User@Host:\s+" r"(?:([\w\d]+))?\s*" r"\[\s*([\w\d]+)\s*\]\s*" r"@\s*" r"([\w\d\.\-]*)\s*" r"\[\s*([\d.]*)\s*\]\s*" r"(?:Id\:\s*(\d+)?\s*)?") _SLOW_STATS_CRE = re.compile(r"#\sQuery_time:\s(\d*\.\d{1,6})\s*" r"Lock_time:\s(\d*\.\d{1,6})\s*" r"Rows_sent:\s(\d*)\s*" r"Rows_examined:\s(\d*)") _GENERAL_ENTRY_CRE = re.compile( r'(?:(' + _DATE_PAT + r'))?\s*' r'(\d+)\s([\w ]+)\t*(?:(.+))?$') class LogParserBase(object): """Base class for parsing MySQL log files LogParserBase should be inherited to create parsers for MySQL log files. This class has the following capabilities: - Take a stream and check whether it is a file type - Retrieve next line from stream - Parse header information from a log file (for General or Slow Query Log) - Implements the iterator protocol This class should not be used directly, but inhereted and extended to match the log file which needs to be parsed. """ def __init__(self, stream): """Constructor stream[in] A file type The stream argument must be a valid file type supporting for example the readline()-method. For example, the return of the buildin function open() can be used: LogParserBase(open("/path/to/mysql.log")) Raises LogParserError on errors. """ self._stream = None self._version = None self._program = None self._port = None self._socket = None self._start_datetime = None self._last_seen_datetime = None # Check if we got a file type line = None try: self._stream = stream line = self._get_next_line() except AttributeError: raise LogParserError("Need a file type") # Not every log file starts with a header if line is not None and line.endswith('started with:'): self._parse_header(line) else: self._stream.seek(0) def _get_next_line(self): """Get next line from the log file This method reads the next line from the stream. Trailing newline (\n) and carraige return (\r) are removed. Returns next line as string or None """ line = self._stream.readline() if not line: return None return line.rstrip('\r\n') def _parse_header(self, line): """Parse the header of a MySQL log file line[in] A string, usually result of self._get_next_line() This method parses the header of a MySQL log file, that is the header found in the General and Slow Query log files. It sets attributes _version, _program, _port and _socket. Note that headers can repeat in a log file, for example, after a restart of the MySQL server. Example header: /usr/sbin/mysqld, Version: 5.5.17-log (Source distribution). started with: Tcp port: 0 Unix socket: /tmp/mysql.sock Time Id Command Argument Raises LogParserError on errors. """ if line is None: return # Header line containing executable and version, example: # /raid0/mysql/mysql/bin/mysqld, # Version: 5.5.17-log (Source distribution). 
started with: info = _HEADER_VERSION_CRE.match(line) if not info: raise LogParserError("Could not read executable and version from " "header") program, major, minor, patch, extra = info.groups() # Header line with server information, example: # Tcp port: 3306 Unix socket: /tmp/mysql.sock line = self._get_next_line() info = _HEADER_SERVER_CRE.match(line) if not info: raise LogParserError("Malformed server header line: %s" % line) tcp_port, unix_socket = info.groups() # Throw away column header line, example: # Time Id Command Argument self._get_next_line() self._version = (int(major), int(minor), int(patch), extra) self._program = program self._port = int(tcp_port) self._socket = unix_socket @property def version(self): """Returns the MySQL server version This property returns a tuple descriving the version of the MySQL server producing the log file. The tuple looks like this: (major, minor, patch, extra) The extra part is optional and when not available will be None. Examples: (5,5,17,'log') (5,1,57,None) Note that the version can change in the same log file. Returns a tuple or None. """ return self._version @property def program(self): """Returns the executable which wrote the log file This property returns the full path to the executable which produced the log file. Note that the executable can change in the same log file. Returns a string or None. """ return self._program @property def port(self): """Returns the MySQL server TCP/IP port This property returns the TCP/IP port on which the MySQL server was listening. Note that the TCP/IP port can change in the same log file. Returns an integer or None. """ return self._port @property def socket(self): """Returns the MySQL server UNIX socket This property returns full path to UNIX socket used the MySQL server to accept incoming connections on UNIX-like servers. Note that the UNIX socket location can change in the same log file. Returns a string or None. """ return self._socket @property def start_datetime(self): """Returns timestamp of first read log entry This property returns the timestamp of the first read log entry. Returns datetime.datetime-object or None. """ return self._start_datetime @property def last_seen_datetime(self): """Returns timestamp of last read log entry This property returns the timestamp of the last read log entry. Returns datetime.datetime-object or None """ return self._last_seen_datetime def __iter__(self): """Class is iterable Returns a LogParserBase-object. """ return self def next(self): """Returns the next log entry Raises StopIteration when no more entries are available. Returns a LogEntryBase-object. """ entry = self._parse_entry() if entry is None: raise StopIteration return entry def _parse_entry(self): """Returns a parsed log entry """ pass def __str__(self): """String representation of LogParserBase """ return "<%(clsname)s, MySQL v%(version)s>" % dict( clsname=self.__class__.__name__, version='.'.join([str(v) for v in self._version[0:3]]) + (self._version[3] or '') ) class GeneralQueryLog(LogParserBase): """Class implementing a parser for the MySQL General Query Log The GeneralQueryLog-class implements a parse for the MySQL General Query Log and has the following capabilities: - Parse General Query Log entries - Possibility to handle special commands - Keep track of MySQL sessions and remove them - Process log headers found later in the log file """ def __init__(self, stream): """Constructor stream[in] file type Raises LogParserError on errors. 
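Note: entries are dispatched on their Command column. Only commands
that need special treatment (Connect, Init DB, Quit, and the
potentially multi-line Query, Prepare, Execute and Fetch commands)
have dedicated handlers; every other command is recorded generically.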
""" super(GeneralQueryLog, self).__init__(stream) self._sessions = {} self._cached_logentry = None self._commands = { # 'Sleep': None, 'Quit': self._handle_quit, 'Init DB': self._handle_init_db, 'Query': self._handle_multi_line, # 'Field List': None, # 'Create DB': None, # 'Drop DB': None, # 'Refresh': None, # 'Shutdown': None, # 'Statistics': None, # 'Processlist': None, 'Connect': self._handle_connect, # 'Kill': None, # 'Debug': None, # 'Ping': None, # 'Time': None, # 'Delayed insert': None, # 'Change user': None, # 'Binlog Dump': None, # 'Table Dump': None, # 'Connect Out': None, # 'Register Slave': None, 'Prepare': self._handle_multi_line, 'Execute': self._handle_multi_line, # 'Long Data': None, # 'Close stmt': None, # 'Reset stmt': None, # 'Set option': None, 'Fetch': self._handle_multi_line, # 'Daemon': None, # 'Error': None, } def _new_session(self, session_id): """Create a new session using the given session ID session_id[in] integer presenting a MySQL session Returns a dictionary. """ self._sessions[session_id] = dict( database=None, user=None, host=None, time_last_action=None, to_delete=False ) return self._sessions[session_id] @staticmethod def _handle_connect(entry, session, argument): """Handle a 'Connect'-command entry[in] a GeneralQueryLogEntry-instance session[in] a dictionary with current session information, element of self._sessions argument[in] a string, last part of a log entry This method reads user and database information from the argument of a 'Connect'-command. It sets the user, host and database for the current session and also sets the argument for the entry. """ # Argument can be as follows: # root@localhost on test # root@localhost on try: connection, _, database = argument.split(' ') except ValueError: connection = argument.replace(' on', '') database = None session['user'], session['host'] = connection.split('@') session['database'] = database entry['argument'] = argument @staticmethod def _handle_init_db(entry, session, argument): """Handle an 'Init DB'-command entry[in] a GeneralQueryLogEntry-instance session[in] a dictionary with current session information, element of self._sessions argument[in] a string, last part of a log entry The argument parameter is always the database name. """ # Example (of full line): # 3 Init DB mysql session['database'] = argument entry['argument'] = argument def _handle_multi_line(self, entry, session, argument): """Handle a command which can span multiple lines entry[in] a GeneralQueryLogEntry-instance session[in] a dictionary with current session information, element of self._sessions argument[in] a string, last part of a log entry The argument parameter passed to this function is the last part of a General Query Log entry and usually is already the full query. This function's main purpose is to read log entries which span multiple lines, such as the Query and Prepare-commands. """ # Examples: # 111205 10:01:14 6 Query SELECT Name FROM time_zone_name # WHERE Time_zone_id = 417 # 111205 10:03:28 6 Query SELECT Name FROM time_zone_name # WHERE Time_zone_id = 417 argument_parts = [argument, ] line = self._get_next_line() # Next line is None if the end of the file is reached. # Note: empty lines can appear and should be read (i.e., line == ''). while line is not None: # Stop if it is a header. if line.endswith('started with:'): self._cached_logentry = line break # Stop if a new log entry is found. 
info = _GENERAL_ENTRY_CRE.match(line) if info is not None: self._cached_logentry = info.groups() break # Otherwise, append line and read next. argument_parts.append(line) line = self._get_next_line() entry['argument'] = '\n'.join(argument_parts) @staticmethod def _handle_quit(entry, session, argument): """Handle the 'Quit'-command entry[in] a GeneralQueryLogEntry-instance session[in] a dictionary with current session information, element of self._sessions argument[in] a string, last part of a log entry This function sets a flag that the session can be removed from the session list. """ # Example (of full line): # 111205 10:06:53 6 Quit session['to_delete'] = True def _parse_command(self, logentry, entry): """Parse a log entry from the General Query Log logentry[in] a string or tuple entry[in] an instance of GeneralQueryLogEntry The logentry-parameter is either a line read from the log file or the result of a previous attempt to read a command. The entry argument should be an instance of GeneralQueryLogEntry. It returns the entry or None if nothing could be read. Raises LogParserError on errors. Returns the GeneralQueryLogEntry-instance or None """ if logentry is None: return None if isinstance(logentry, tuple): dt, session_id, command, argument = logentry elif logentry.endswith('started with:'): while logentry.endswith('started with:'): # We got a header self._parse_header(logentry) logentry = self._get_next_line() if logentry is None: return None return self._parse_command(logentry, entry) else: info = _GENERAL_ENTRY_CRE.match(logentry) if info is None: raise LogParserError("Failed parsing command line: %s" % logentry) dt, session_id, command, argument = info.groups() self._cached_logentry = None session_id = int(session_id) entry['session_id'] = session_id try: session = self._sessions[session_id] except KeyError: session = self._new_session(session_id) entry['command'] = command if dt is not None: entry['datetime'] = datetime.datetime.strptime(dt, "%y%m%d %H:%M:%S") session['time_last_action'] = entry['datetime'] else: entry['datetime'] = session['time_last_action'] try: self._commands[command](entry, session, argument) except KeyError: # Generic command entry['argument'] = argument for key in entry.keys(): if key in session: entry[key] = session[key] if session['to_delete'] is True: del self._sessions[session_id] del session return entry def _parse_entry(self): """Returns a parsed log entry The method _parse_entry() uses _parse_command() to parse a General Query Log entry. It is used by the iterator protocol methods. Returns a GeneralQueryLogEntry-instance or None. """ entry = GeneralQueryLogEntry() if self._cached_logentry is not None: self._parse_command(self._cached_logentry, entry) return entry else: line = self._get_next_line() if line is None: return None self._parse_command(line, entry) return entry class SlowQueryLog(LogParserBase): """Class implementing a parser for the MySQL Slow Query Log The SlowQueryLog-class implements a parser for the MySQL Slow Query Log and has the following capabilities: - Parse Slow Query Log entries - Process log headers found later in the log file - Parse connection and temporal information - Get statistics of the slow query """ def __init__(self, stream): """Constructor stream[in] A file type The stream argument must be a valid file type supporting for example the readline()-method. For example, the return of the build-in function open() can be used: SlowQueryLog(open("/path/to/mysql-slow.log")) Raises LogParserError on errors. 
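Example (a sketch; the one-second threshold is arbitrary):

    slowlog = SlowQueryLog(open("/path/to/mysql-slow.log"))
    for entry in slowlog:
        if entry.query_time and entry.query_time > 1:
            print entry.datetime, entry.query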
""" super(SlowQueryLog, self).__init__(stream) self._cached_line = None self._current_database = None @staticmethod def _parse_line(regex, line): """Parses a log line using given regular expression regex[in] a SRE_Match-object line[in] a string This function takes a log line and matches the regular expresion given with the regex argument. It returns the result of re.MatchObject.groups(), which is a tuple. Raises LogParserError on errors. Returns a tuple. """ info = regex.match(line) if info is None: raise LogParserError('Failed parsing Slow Query line: %s' % line[:30]) return info.groups() def _parse_connection_info(self, line, entry): """Parses connection info line[in] a string entry[in] a SlowQueryLog-instance The line paramater should be a string, a line read from the Slow Query Log. The entry argument should be an instance of SlowQueryLogEntry. Raises LogParserError on failure. """ # Example: # # User@Host: root[root] @ localhost [127.0.0.1] (priv_user, unpriv_user, host, ip, sid) = self._parse_line(_SLOW_USERHOST_CRE, line) entry['user'] = priv_user if priv_user else unpriv_user entry['host'] = host if host else ip entry['session_id'] = sid def _parse_timestamp(self, line, entry): """Parses a timestamp line[in] a string entry[in] a SlowQueryLog-instance The line paramater should be a string, a line read from the Slow Query Log. The entry argument should be an instance of SlowQueryLogEntry. Raises LogParserError on failure. """ # Example: # # Time: 111206 11:55:54 info = self._parse_line(_SLOW_TIMESTAMP_CRE, line) entry['datetime'] = datetime.datetime.strptime(info[0], "%y%m%d %H:%M:%S") if self._start_datetime is None: self._start_datetime = entry['datetime'] self._last_seen_datetime = entry['datetime'] def _parse_statistics(self, line, entry): """Parses statistics information line[in] a string entry[in] a SlowQueryLog-instance The line paramater should be a string, a line read from the Slow Query Log. The entry argument should be an instance of SlowQueryLogEntry. Raises LogParserError on errors. """ # Example statistic line: # Query_time: 0.101194 Lock_time: 0.000331 Rows_sent: 24 # Rows_examined: 11624 result = self._parse_line(_SLOW_STATS_CRE, line) entry['query_time'] = decimal.Decimal(result[0]) entry['lock_time'] = decimal.Decimal(result[1]) entry['rows_sent'] = int(result[2]) entry['rows_examined'] = int(result[3]) def _parse_query(self, line, entry): """Parses the query line[in] a string entry[in] a SlowQueryLog-instance The line paramater should be a string, a line read from the Slow Query Log. The entry argument should be an instance of SlowQueryLogEntry. Query entries in the Slow Query Log could span several lines. They can optionally start with a USE-command and have session variables, such as 'timestamp', set before the actual query. """ # Example: # SET timestamp=1323169459; # SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA # WHERE SCHEMA_NAME = 'mysql'; # # User@Host: root[root] @ localhost [127.0.0.1] query = [] while True: if line is None: break if line.startswith('use'): entry['database'] = self._current_database = line.split(' ')[1] elif line.startswith('SET timestamp='): entry['datetime'] = datetime.datetime.fromtimestamp( int(line[14:].strip(';'))) elif (line.startswith('# Time:') or line.startswith("# User@Host") or line.endswith('started with:')): break query.append(line) line = self._get_next_line() if 'database' in entry: # This is not always correct: connections without current database # will get the database name of the previous query. 
However, it's # more likely current database is set. Fix would be that the server # includes a USE-statement for every entry. if (entry['database'] is None and self._current_database is not None): entry['database'] = self._current_database entry['query'] = '\n'.join(query) self._cached_line = line def _parse_entry(self): """Parse and returns an entry of the Slow Query Log Each entry of the slow log consists of: 1. An optional time line 2. A connection information line with user, hostname and database 3. A line containing statistics for the query 4. An optional "use " line 5. A line setting the timestamp, insert_id, and last_insert_id session variables 6. An optional administartor command line "# administator command" 7. An optional SQL statement or the query Returns a SlowQueryLogEntry-instance or None """ if self._cached_line is not None: line = self._cached_line self._cached_line = None else: line = self._get_next_line() if line is None: return None while line.endswith('started with:'): # We got a header self._parse_header(line) line = self._get_next_line() if line is None: return None entry = SlowQueryLogEntry() if line.startswith('# Time:'): self._parse_timestamp(line, entry) line = self._get_next_line() if line.startswith('# User@Host:'): self._parse_connection_info(line, entry) line = self._get_next_line() if line.startswith('# Query_time:'): self._parse_statistics(line, entry) line = self._get_next_line() self._parse_query(line, entry) return entry class LogEntryBase(dict): """Class inherited by GeneralQueryEntryLog and SlowQueryEntryLog This class has the following capabilities: - Inherits from dict - Dictionary elements can be accessed using attributes. For example, logentry['database'] is accessible like logentry.database Should not be used directly. """ def __init__(self): super(LogEntryBase, self).__init__() self['datetime'] = None self['database'] = None self['user'] = None self['host'] = None self['session_id'] = None def __getattr__(self, name): if name in self: return self[name] else: raise AttributeError("%s has no attribute '%s'" % (self.__class__.__name__, name)) class GeneralQueryLogEntry(LogEntryBase): """Class representing an entry of the General Query Log """ def __init__(self): """Constructor GeneralQueryLogEntry inherits from LogEntryBase, which inherits from dict. Instances of GeneralQueryLogEntry can be used just like dictionaries. """ super(GeneralQueryLogEntry, self).__init__() self['session_id'] = None self['command'] = None self['argument'] = None def __str__(self): """String representation of GeneralQueryLogEntry """ param = self.copy() param['clsname'] = self.__class__.__name__ try: if len(param['argument']) > 30: param['argument'] = param['argument'][:28] + '..' except TypeError: pass # Nevermind when param['argument'] was not a string. try: param['datetime'] = param['datetime'].strftime("%Y-%m-%d %H:%M:%S") except AttributeError: param['datetime'] = '' return ("<%(clsname)s %(datetime)s [%(session_id)s]" " %(command)s: %(argument)s>" % param) class SlowQueryLogEntry(LogEntryBase): """Class representing an entry of the Slow Query Log SlowQueryLogEntry inherits from LogEntryBase, which inherits from dict. Instances of SlowQueryLogEntry can be used just like dictionaries. 
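For example (illustrative), entry['query_time'] and entry.query_time
refer to the same value, through LogEntryBase.__getattr__().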
""" def __init__(self): """Constructor """ super(SlowQueryLogEntry, self).__init__() self['query'] = None self['query_time'] = None self['lock_time'] = None self['rows_examined'] = None self['rows_sent'] = None def __str__(self): """String representation of SlowQueryLogEntry """ param = self.copy() param['clsname'] = self.__class__.__name__ try: param['datetime'] = param['datetime'].strftime("%Y-%m-%d %H:%M:%S") except AttributeError: param['datetime'] = '' return ("<%(clsname)s %(datetime)s [%(user)s@%(host)s] " "%(query_time)s/%(lock_time)s/%(rows_examined)s/%(rows_sent)s>" ) % param mysql-utilities-1.6.4/mysql/utilities/common/messages.py0000644001577100752670000002204412747670311023225 0ustar pb2usercommon# # Copyright (c) 2013, 2016 Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This file contains output string messages used by MySQL Utilities. """ EXTERNAL_SCRIPT_DOES_NOT_EXIST = ("'{path}' script cannot be found. Please " "check the path and filename for accuracy " "and try again.") ERROR_ANSI_QUOTES_MIX_SQL_MODE = ("One or more servers have SQL mode set to " "ANSI_QUOTES, the {utility} requires to all " "or none of the servers to be set with the " "SQL mode set to ANSI_QUOTES.") ERROR_USER_WITHOUT_PRIVILEGES = ("User '{user}' on '{host}@{port}' does not " "have sufficient privileges to " "{operation} (required: {req_privileges}).") PARSE_ERR_DB_PAIR = ("Cannot parse the specified database(s): '{db_pair}'. " "Please verify that the database(s) are specified in " "a valid format (i.e., {db1_label}[:{db2_label}]) and " "that backtick quotes are properly used when required.") PARSE_ERR_DB_PAIR_EXT = ("%s The use of backticks is required if non " "alphanumeric characters are used for database " "names. Parsing the specified database results " "in {db1_label} = '{db1_value}' and " "{db2_label} = '{db2_value}'." % PARSE_ERR_DB_PAIR) PARSE_ERR_DB_OBJ_PAIR = ("Cannot parse the specified database objects: " "'{db_obj_pair}'. Please verify that the objects " "are specified in a valid format (i.e., {db1_label}" "[.{obj1_label}]:{db2_label}[.{obj2_label}]) and " "that backtick quotes are properly used if " "required.") PARSE_ERR_DB_OBJ_PAIR_EXT = ("%s The use of backticks is required if non " "alphanumeric characters are used for identifier " "names. Parsing the specified objects results " "in: {db1_label} = '{db1_value}', " "{obj1_label} = '{obj1_value}', " "{db2_label} = '{db2_value}' and " "{obj2_label} = '{obj2_value}'." % PARSE_ERR_DB_OBJ_PAIR) PARSE_ERR_DB_OBJ_MISSING_MSG = ("Incorrect object compare argument, one " "specific object is missing. Please verify " "that both object are correctly specified. 
" "{detail} Format should be: " "{db1_label}[.{obj1_label}]" ":{db2_label}[.{obj2_label}].") PARSE_ERR_DB_OBJ_MISSING = ("No object has been specified for " "{db_no_obj_label} '{db_no_obj_value}', while " "object '{only_obj_value}' was specified for " "{db_obj_label} '{db_obj_value}'.") PARSE_ERR_DB_MISSING_CMP = ("You must specify at least one database to " "compare or use the --all option to compare all " "databases.") PARSE_ERR_OBJ_NAME_FORMAT = ("Cannot parse the specified qualified name " "'{obj_name}' for {option}. Please verify that a " "valid format is used (i.e., " "[.]) and that backtick quotes are " "properly used if required.") PARSE_ERR_SPAN_KEY_SIZE_TOO_HIGH = ( "The value {s_value} specified for option --span-key-size is too big. It " "must be smaller or equal than {max} (size of the key hash values for " "comparison).") PARSE_ERR_SPAN_KEY_SIZE_TOO_LOW = ( "The value {s_value} specified for option --span-key-size is too small " "and would cause inaccurate results, please retry with a bigger value " "or the default value of {default}.") PARSE_ERR_OPT_INVALID_CMD = "Invalid {opt} option for '{cmd}'." PARSE_ERR_OPT_INVALID_CMD_TIP = ("%s Use {opt_tip} instead." % PARSE_ERR_OPT_INVALID_CMD) PARSE_ERR_OPT_INVALID_DATE = "Invalid {0} date format (yyyy-mm-dd): {1}" PARSE_ERR_OPT_INVALID_DATE_TIME = ("Invalid {0} date/time format " "(yyyy-mm-ddThh:mm:ss): {1}") PARSE_ERR_OPT_INVALID_NUM_DAYS = ("Invalid number of days (must be an integer " "greater than zero) for {0} date: {1}") PARSE_ERR_OPT_INVALID_VALUE = ("The value for option {option} is not valid: " "'{value}'.") PARSE_ERR_OPT_REQ_NON_NEGATIVE_VALUE = ("Option '{opt}' requires a " "non-negative value.") PARSE_ERR_OPT_REQ_GREATER_VALUE = ("Option '{opt}' requires a value greater " "than {val}.") PARSE_ERR_OPT_REQ_VALUE = "Option '{opt}' requires a non-empty value." PARSE_ERR_OPT_REQ_OPT = ("Option {opt} requires the following option(s): " "{opts}.") PARSE_ERR_OPTS_EXCLD = ("Options {opt1} and {opt2} cannot be used " "together.") PARSE_ERR_OPTS_REQ = "Option '{opt}' is required." PARSE_ERR_OPTS_REQ_BY_CMD = ("'{cmd}' requires the following option(s): " "{opts}.") PARSE_ERR_SLAVE_DISCO_REQ = ("Option --discover-slaves-login or --slaves is " "required.") PARSE_ERR_OPTS_REQ_GREATER_OR_EQUAL = ("The {opt} option requires a value " "greater than or equal to {value}.") WARN_OPT_NOT_REQUIRED = ("WARNING: The {opt} option is not required for " "'{cmd}' (option ignored).") WARN_OPT_NOT_REQUIRED_ONLY_FOR = ("%s Only used with the {only_cmd} command." % WARN_OPT_NOT_REQUIRED) WARN_OPT_NOT_REQUIRED_FOR_TYPE = ( "# WARNING: The {opt} option is not required for the {type} type " "(option ignored).") WARN_OPT_ONLY_USED_WITH = ("# WARNING: The {opt} option is only used with " "{used_with} (option ignored).") WARN_OPT_USING_DEFAULT = ("WARNING: Using default value '{default}' for " "option {opt}.") ERROR_SAME_MASTER = ("The specified new master {n_master_host}:{n_master_port}" " is the same as the " "actual master {master_host}:{master_port}.") SLAVES = "slaves" CANDIDATES = "candidates" ERROR_MASTER_IN_SLAVES = ("The master {master_host}:{master_port} " "and one of the specified {slaves_candidates} " "are the same {slave_host}:{slave_port}.") SCRIPT_THRESHOLD_WARNING = ("WARNING: You have chosen to use external script " "return code checking. Depending on which script " "fails, this can leave the operation in an " "undefined state. 
Please check your results " "carefully if the operation aborts.") HOST_IP_WARNING = ("You may be mixing host names and IP addresses. This may " "result in negative status reporting if your DNS services " "do not support reverse name lookup.") ERROR_MIN_SERVER_VERSIONS = ("The {utility} requires server versions greater " "or equal than {min_version}. Server version for " "'{host}:{port}' is not supported.") PARSE_ERR_SSL_REQ_SERVER = ("Options --ssl-ca, --ssl-cert and --ssl-key " "requires use of --server.") WARN_OPT_SKIP_INNODB = ("The use of InnoDB is mandatory since MySQL 5.7. The " "former options like '--innodb=0/1/OFF/ON' or " "'--skip-innodb' are ignored.") FILE_DOES_NOT_EXIST = "The following path is invalid, '{path}'." INSUFFICIENT_FILE_PERMISSIONS = ("You do not have permission to {permissions} " "file '{path}'.") MSG_UTILITIES_VERSION = "MySQL Utilities {utility} version {version}." MSG_MYSQL_VERSION = "Server '{server}' is using MySQL version {version}." USER_PASSWORD_FORMAT = ("Format of {0} option is incorrect. Use userid:passwd " "or userid.") mysql-utilities-1.6.4/mysql/utilities/common/gtid.py0000644001577100752670000001741512747670311022353 0ustar pb2usercommon# # Copyright (c) 2014, Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This module contains function to manipulate GTIDs. """ def get_last_server_gtid(gtid_set, server_uuid): """Get the last GTID of the specified GTID set for the given server UUID. This function retrieves the last GTID from the specified set for the specified server UUID. In more detail, it returns the GTID with the greater sequence value that matches the specified UUID. Note: The method assumes that GTID sets are grouped by UUID (separated by comma ',') and intervals appear in ascending order (i.e., the last one is the greater one). gtid_set[in] GTID set to search and get last (greater) GTID value. server_uuid[in] Server UUID to match, as a GTID set might contain data for different servers (UUIDs). Returns a string with the last GTID value in the set for the given server UUID in the format 'uuid:n'. If no GTID are found in the set for the specified server UUID then None is returned. """ uuid_sets = gtid_set.split(',') for uuid_set in uuid_sets: uuid_set_elements = uuid_set.strip().split(':') # Note: UUID are case insensitive, but can appear with mixed cases for # some server versions (e.g., for 5.6.9, lower case in server_id # variable and upper case in GTID_EXECUTED set). if uuid_set_elements[0].lower() == server_uuid.lower(): last_interval = uuid_set_elements[-1] try: _, end_val = last_interval.split('-') return '{0}:{1}'.format(server_uuid, end_val) except ValueError: # Error raised for single values (not an interval). return '{0}:{1}'.format(server_uuid, last_interval) return None def gtid_set_cardinality(gtid_set): """Determine the cardinality of the specified GTID set. 
This function counts the number of elements in the specified GTID set. gtid_set[in] target set of GTIDs to determine the cardinality. Returns the number of elements of the specified GTID set. """ count = 0 uuid_sets = gtid_set.split(',') for uuid_set in uuid_sets: intervals = uuid_set.strip().split(':')[1:] for interval in intervals: try: start_val, end_val = interval.split('-') count = count + int(end_val) - int(start_val) + 1 except ValueError: # Error raised for single values (not an interval). count += 1 return count def gtid_set_union(gtid_set_a, gtid_set_b): """Perform the union of two GTID sets. This method computes the union of two GTID sets and returns the result of the operation. Note: This method support input GTID sets not in the normalized form, i.e., with unordered and repeated UUID sets and intervals, but with a valid syntax. gtid_set_a[in] First GTID set (set A). gtid_set_b[in] Second GTID set (set B). Returns a string with the result of the set union operation between the two given GTID sets. """ def get_gtid_dict(gtid_a, gtid_b): """Get a dict representation of the specified GTID sets. Combine the given GTID sets into a single dict structure, removing duplicated UUIDs and string intervals. Return a dictionary (not normalized) with the GTIDs contained in both input GTID sets. For example, for the given (not normalized) GTID sets 'uuid_a:2:5-7,uuid_b:4' and 'uuid_a:2:4-6:2,uuid_b:1-3' the follow dict will be returned: {'uuid_a': set(['2', '5-7', '4-6']), 'uuid_b': set(['4','1-3'])} """ res_dict = {} uuid_sets_a = gtid_a.split(',') uuid_sets_b = gtid_b.split(',') uuid_sets = uuid_sets_a + uuid_sets_b for uuid_set in uuid_sets: uuid_set_values = uuid_set.split(':') uuid_key = uuid_set_values[0] if uuid_key in res_dict: res_dict[uuid_key] = \ res_dict[uuid_key].union(uuid_set_values[1:]) else: res_dict[uuid_key] = set(uuid_set_values[1:]) return res_dict # Create auxiliary dict representation of both input GTID sets. gtid_dict = get_gtid_dict(gtid_set_a, gtid_set_b) # Perform the union between the GTID sets. union_gtid_list = [] for uuid in gtid_dict: intervals = gtid_dict[uuid] # Convert the set of string intervals into a single list of tuples # with integers, in order to be handled easily. intervals_list = [] for values in intervals: interval = values.split('-') intervals_list.append((int(interval[0]), int(interval[-1]))) # Compute the union of the tuples (intervals). union_set = [] for start, end in sorted(intervals_list): # Note: no interval start before the next one (ordered list). if union_set and start <= union_set[-1][1] + 1: # Current interval intersects or is consecutive to the last # one in the results. if union_set[-1][1] < end: # If the end of the interval is greater than the last one # then augment it (set the new end), otherwise do nothing # (meaning the interval is fully included in the last one). union_set[-1] = (union_set[-1][0], end) else: # No interval in the results or the interval does not intersect # nor is consecutive to the last one, then add it to the end of # the results list. union_set.append((start, end)) # Convert resulting union set to a valid string format. union_str = ":".join( ["{0}-{1}".format(vals[0], vals[1]) if vals[0] != vals[1] else str(vals[0]) for vals in union_set] ) # Concatenate UUID and add the to the result list. union_gtid_list.append("{0}:{1}".format(uuid, union_str)) # GTID sets are sorted alphabetically, return the result accordingly. 
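# Example (illustrative): the union of 'uuid_a:1-3:5' and 'uuid_a:2-6'
# is 'uuid_a:1-6', because the intervals (1,3), (2,6) and (5,5) merge
# into a single interval once sorted.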
return ','.join(sorted(union_gtid_list)) def gtid_set_itemize(gtid_set): """Itemize the given GTID set. Decompose the given GTID set into a list of individual GTID items grouped by UUID. gtid_set[in] GTID set to itemize. Return a list of tuples with the UUIDs and transactions number for all individual items in the GTID set. For example: 'uuid_a:1-3:5,uuid_b:4' is converted into [('uuid_a', [1, 2, 3, 5]), ('uuid_b', [4])]. """ gtid_list = [] uuid_sets = gtid_set.split(',') for uuid_set in uuid_sets: uuid_set_elements = uuid_set.split(':') trx_num_list = [] for interval in uuid_set_elements[1:]: try: start_val, end_val = interval.split('-') trx_num_list.extend(range(int(start_val), int(end_val) + 1)) except ValueError: # Error raised for single values (not an interval). trx_num_list.append(int(interval)) gtid_list.append((uuid_set_elements[0], trx_num_list)) return gtid_list mysql-utilities-1.6.4/mysql/utilities/common/audit_log_reader.py0000644001577100752670000001730012747670311024706 0ustar pb2usercommon# # Copyright (c) 2012, 2014, Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This file contains the methods for reading the audit log. """ import os import xml.etree.ElementTree as xml from mysql.utilities.exception import UtilError # Import appropriate XML exception to be compatible with python 2.6. try: # Exception only available from python 2.7 (i.e., ElementTree 1.3) # pylint: disable=E0611 from xml.etree.ElementTree import ParseError except ImportError: # Instead use ExpatError for earlier python versions. from xml.parsers.expat import ExpatError as ParseError # Fields for the old format. _MANDATORY_FIELDS = ['NAME', 'TIMESTAMP'] _OPTIONAL_FIELDS = ['CONNECTION_ID', 'DB', 'HOST', 'IP', 'MYSQL_VERSION', 'OS_LOGIN', 'OS_VERSION', 'PRIV_USER', 'PROXY_USER', 'SERVER_ID', 'SQLTEXT', 'STARTUP_OPTIONS', 'STATUS', 'USER', 'VERSION'] # Fields for the new format. _NEW_MANDATORY_FIELDS = _MANDATORY_FIELDS + ['RECORD_ID'] _NEW_OPTIONAL_FIELDS = _OPTIONAL_FIELDS + ['COMMAND_CLASS', 'STATUS_CODE'] class AuditLogReader(object): """The AuditLogReader class is used to read the data stored in the audit log file. This class provide methods to open the audit log, get the next record, and close the file. """ def __init__(self, options=None): """Constructor options[in] dictionary of options (e.g. log_name and verbosity) """ if options is None: options = {} self.verbosity = options.get('verbosity', 0) self.log_name = options.get('log_name', None) self.log = None self.tree = None self.root = None self.remote_file = False def __del__(self): """Destructor """ if self.remote_file: os.unlink(self.log_name) def open_log(self): """Open the audit log file. """ # Get the log from a remote server # TODO : check to see if the log is local. If not, attempt # to log into the server via rsh and copy the file locally. 
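# For now the log must be a local file. The remote_file flag only
# marks a temporary local copy, so that __del__() knows to remove it.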
self.remote_file = False if not self.log_name or not os.path.exists(self.log_name): raise UtilError("Cannot read log file '%s'." % self.log_name) self.log = open(self.log_name) def close_log(self): """Close the previously opened audit log. """ self.log.close() @staticmethod def _validXML(line): """Check if line is a valid XML element, apart from audit records. """ if ('<AUDIT>' in line) or ('</AUDIT>' in line): return True else: return False def get_next_record(self): """Get the next audit log record. Generator function that returns the next audit log record. More precisely, it returns a tuple with a formatted record dict and the original record. """ next_line = "" new_format = False multiline = False for line in self.log: if line.lstrip().startswith('<AUDIT_RECORD>'): # Found first record line in the new format. new_format = True multiline = True next_line = line continue elif line.lstrip().startswith('<AUDIT_RECORD'): # Found (first) record line in the old format. next_line = "{0} ".format(line.strip('\n')) if not line.endswith('/>\n'): multiline = True continue elif multiline: if ((new_format and line.strip().endswith('</AUDIT_RECORD>')) or (not new_format and line.endswith('/>\n'))): # Detect end of record in the old and new format and # append last record line. next_line += line else: if not line.strip().startswith('<'): # Handle SQL queries broken into multiple lines, # removing newline characters. next_line = '{0}{1}'.format(next_line.strip('\n'), line.strip('\n')) else: next_line += line continue else: next_line += line log_entry = next_line next_line = "" try: yield ( self._make_record(xml.fromstring(log_entry), new_format), log_entry ) except (ParseError, SyntaxError): # SyntaxError is also caught for compatibility reasons with # python 2.6, in case an ExpatError, which does not inherit # from SyntaxError, is used as ParseError. if not self._validXML(log_entry): raise UtilError("Malformed XML - Cannot parse log file: " "'{0}'\nInvalid XML element: " "{1!r}".format(self.log_name, log_entry)) @staticmethod def _do_replacements(old_str): """Replace XML-escaped (masked) special characters. """ new_str = old_str.replace("&lt;", "<") new_str = new_str.replace("&gt;", ">") new_str = new_str.replace("&quot;", '"') new_str = new_str.replace("&amp;", "&") return new_str def _make_record(self, node, new_format=False): """Make a dictionary record from the node element. The given node is converted to a dictionary record, reformatting as needed for the special characters. node[in] XML node holding a single audit log record. new_format[in] Flag indicating if the new XML format is used for the audit log record. By default False (old format used). Return a dictionary with the data in the given audit log record. """ if new_format: # Handle audit record in the new format. # Do mandatory fields. # Note: Use dict constructor for compatibility with Python 2.6. record = dict((field, node.find(field).text) for field in _NEW_MANDATORY_FIELDS) # Do optional fields. for field in _NEW_OPTIONAL_FIELDS: field_node = node.find(field) if field_node is not None and field_node.text: record[field] = self._do_replacements(field_node.text) else: # Handle audit record in the old format. # Do mandatory fields. # Note: Use dict constructor for compatibility with Python 2.6. record = dict((field, node.get(field)) for field in _MANDATORY_FIELDS) # Do optional fields.
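# The old format stores fields as XML attributes (read with node.get),
# whereas the new format above stores them as child elements (read
# with node.find).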
for field in _OPTIONAL_FIELDS: if node.get(field, None): record[field] = self._do_replacements(node.get(field)) return record mysql-utilities-1.6.4/mysql/utilities/common/database.py0000755001577100752670000024204412747670311023171 0ustar pb2usercommon# # Copyright (c) 2010, 2016, Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This module contains abstractions of a MySQL Database object used by multiple utilities. """ import multiprocessing import os import re import sys from collections import deque from mysql.utilities.exception import UtilError, UtilDBError from mysql.utilities.common.pattern_matching import parse_object_name from mysql.utilities.common.options import obj2sql from mysql.utilities.common.server import connect_servers, Server from mysql.utilities.common.user import User from mysql.utilities.common.sql_transform import (quote_with_backticks, remove_backtick_quoting, is_quoted_with_backticks) # List of database objects for enumeration _DATABASE, _TABLE, _VIEW, _TRIG, _PROC, _FUNC, _EVENT, _GRANT = "DATABASE", \ "TABLE", "VIEW", "TRIGGER", "PROCEDURE", "FUNCTION", "EVENT", "GRANT" _OBJTYPE_QUERY = """ ( SELECT TABLE_TYPE as object_type FROM INFORMATION_SCHEMA.TABLES WHERE TABLES.TABLE_SCHEMA = '%(db_name)s' AND TABLES.TABLE_NAME = '%(obj_name)s' ) UNION ( SELECT 'TRIGGER' as object_type FROM INFORMATION_SCHEMA.TRIGGERS WHERE TRIGGER_SCHEMA = '%(db_name)s' AND TRIGGER_NAME = '%(obj_name)s' ) UNION ( SELECT TYPE as object_type FROM mysql.proc WHERE DB = '%(db_name)s' AND NAME = '%(obj_name)s' ) UNION ( SELECT 'EVENT' as object_type FROM mysql.event WHERE DB = '%(db_name)s' AND NAME = '%(obj_name)s' ) """ _DEFINITION_QUERY = """ SELECT %(columns)s FROM INFORMATION_SCHEMA.%(table_name)s WHERE %(conditions)s """ _PARTITION_QUERY = """ SELECT PARTITION_NAME, SUBPARTITION_NAME, PARTITION_ORDINAL_POSITION, SUBPARTITION_ORDINAL_POSITION, PARTITION_METHOD, SUBPARTITION_METHOD, PARTITION_EXPRESSION, SUBPARTITION_EXPRESSION, PARTITION_DESCRIPTION FROM INFORMATION_SCHEMA.PARTITIONS WHERE TABLE_SCHEMA = '%(db)s' AND TABLE_NAME = '%(name)s' """ _COLUMN_QUERY = """ SELECT ORDINAL_POSITION, COLUMN_NAME, COLUMN_TYPE, IS_NULLABLE, COLUMN_DEFAULT, EXTRA, COLUMN_COMMENT, COLUMN_KEY FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = '%(db)s' AND TABLE_NAME = '%(name)s' """ _FK_CONSTRAINT_QUERY = """ SELECT TABLE_NAME, CONSTRAINT_NAME, COLUMN_NAME, REFERENCED_TABLE_SCHEMA, REFERENCED_TABLE_NAME, REFERENCED_COLUMN_NAME, UPDATE_RULE, DELETE_RULE FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE USING (CONSTRAINT_SCHEMA, CONSTRAINT_NAME, TABLE_NAME, REFERENCED_TABLE_NAME) WHERE CONSTRAINT_SCHEMA = '{DATABASE!s}' AND TABLE_NAME = '{TABLE!s}' """ _ALTER_TABLE_ADD_FK_CONSTRAINT = """ ALTER TABLE {DATABASE!s}.{TABLE!s} add CONSTRAINT `{CONSTRAINT_NAME!s}` FOREIGN KEY (`{COLUMN_NAMES}`) REFERENCES 
`{REFERENCED_DATABASE}`.`{REFERENCED_TABLE!s}` (`{REFERENCED_COLUMNS!s}`) ON UPDATE {UPDATE_RULE} ON DELETE {DELETE_RULE} """ def _multiprocess_tbl_copy_task(copy_tbl_task): """Multiprocess copy table data method. This method wraps the copy of the table's data to allow its concurrent execution by a pool of processes. copy_tbl_task[in] dictionary of values required by a process to perform the table copy task, namely: 'source_srv': <dict with source server connection values>, 'dest_srv': <dict with destination server connection values>, 'source_db': <name of the source database>, 'target_db': <name of the target database>, 'table': <name of the table to copy>, 'options': <dict of table options>, 'cloning': <cloning flag>, 'connections': <number of connections to use>, 'q_source_db': <quoted name of the source database>. """ # Get input to execute task. source_srv = copy_tbl_task.get('source_srv') dest_srv = copy_tbl_task.get('dest_srv') source_db = copy_tbl_task.get('source_db') target_db = copy_tbl_task.get('target_db') table = copy_tbl_task.get('table') options = copy_tbl_task.get('options') cloning = copy_tbl_task.get('cloning') # Execute copy table task. # NOTE: Must handle any exception here, because worker processes will not # propagate them to the main process. try: _copy_table_data(source_srv, dest_srv, source_db, target_db, table, options, cloning) except UtilError: _, err, _ = sys.exc_info() print("ERROR copying data for table '{0}': {1}".format(table, err.errmsg)) def _copy_table_data(source_srv, destination_srv, db_name, new_db_name, tbl_name, tbl_options, cloning, connections=1): """Copy the data of the specified table. This method copies/clones all the data from a table to another (new) database. source_srv[in] Source server (Server instance or dict. with the connection values). destination_srv[in] Destination server (Server instance or dict. with the connection values). db_name[in] Name of the database with the table to copy. new_db_name[in] Name of the destination database to copy the table. tbl_name[in] Name of the table to copy. tbl_options[in] Table options. cloning[in] Cloning flag, in order to use a different method to copy data on the same server. connections[in] Specify the use of multiple connections/processes to copy the table data (rows). By default, only 1 is used. Note: the multiprocessing option should be preferred. """ # Import table needed here to avoid circular import issues. from mysql.utilities.common.table import Table # Handle source and destination server instances or connection values. # Note: For multiprocessing the use of connection values instead of a # server instance is required to avoid internal errors. if isinstance(source_srv, Server): source = source_srv else: # Get source server instance from connection values. conn_options = { 'quiet': True, # Avoid repeating output for multiprocessing. 'version': "5.1.30", } servers = connect_servers(source_srv, None, conn_options) source = servers[0] if isinstance(destination_srv, Server): destination = destination_srv else: # Get destination server instance from connection values. conn_options = { 'quiet': True, # Avoid repeating output for multiprocessing. 'version': "5.1.30", } servers = connect_servers(destination_srv, None, conn_options) destination = servers[0] # Copy table data.
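# Identifiers are quoted below according to the source server's
# SQL_MODE so that database and table names with special characters
# remain valid in the generated statements.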
if not tbl_options.get("quiet", False): print("# Copying data for TABLE {0}.{1}".format(db_name, tbl_name)) source_sql_mode = source.select_variable("SQL_MODE") q_tbl_name = "{0}.{1}".format(quote_with_backticks(db_name, source_sql_mode), quote_with_backticks(tbl_name, source_sql_mode)) tbl = Table(source, q_tbl_name, tbl_options) if tbl is None: raise UtilDBError("Cannot create table object before copy.", -1, db_name) tbl.copy_data(destination, cloning, new_db_name, connections) class Database(object): """ The Database class encapsulates a database. The class has the following capabilities: - Check to see if the database exists - Drop the database - Create the database - Clone the database - Print CREATE statements for all objects """ obj_type = _DATABASE def __init__(self, source, name, options=None): """Constructor source[in] A Server object name[in] Name of database verbose[in] print extra data during operations (optional) default value = False options[in] Array of options for controlling what is included and how operations perform (e.g., verbose) """ if options is None: options = {} self.source = source # Get the SQL_MODE set on the source self.sql_mode = self.source.select_variable("SQL_MODE") # Keep database identifier considering backtick quotes if is_quoted_with_backticks(name, self.sql_mode): self.q_db_name = name self.db_name = remove_backtick_quoting(self.q_db_name, self.sql_mode) else: self.db_name = name self.q_db_name = quote_with_backticks(self.db_name, self.sql_mode) self.verbose = options.get("verbose", False) self.skip_tables = options.get("skip_tables", False) self.skip_views = options.get("skip_views", False) self.skip_triggers = options.get("skip_triggers", False) self.skip_procs = options.get("skip_procs", False) self.skip_funcs = options.get("skip_funcs", False) self.skip_events = options.get("skip_events", False) self.skip_grants = options.get("skip_grants", False) self.skip_create = options.get("skip_create", False) self.skip_data = options.get("skip_data", False) self.exclude_patterns = options.get("exclude_patterns", None) self.use_regexp = options.get("use_regexp", False) self.skip_table_opts = options.get("skip_table_opts", False) self.new_db = None self.q_new_db = None self.init_called = False self.destination = None # Used for copy mode self.cloning = False # Used for clone mode self.query_options = { # Used for skipping buffered fetch of rows 'fetch': False, 'commit': False, # No COMMIT needed for DDL operations (default). } # Used to store constraints to execute # after table creation, deque is # thread-safe self.constraints = deque() self.objects = [] self.new_objects = [] def exists(self, server=None, db_name=None): """Check to see if the database exists server[in] A Server object (optional) If omitted, operation is performed using the source server connection. db_name[in] database name (optional) If omitted, operation is performed on the class instance table name. return True = database exists, False = database does not exist """ if not server: server = self.source db = None if db_name: db = db_name else: db = self.db_name _QUERY = """ SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA WHERE SCHEMA_NAME = '%s' """ res = server.exec_query(_QUERY % db) return (res is not None and len(res) >= 1) def drop(self, server, quiet, db_name=None): """Drop the database server[in] A Server object quiet[in] ignore error on drop db_name[in] database name (optional) If omitted, operation is performed on the class instance table name. 
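Note: with quiet=True any error raised by the DROP statement (e.g.,
the database does not exist) is silently ignored, which makes the
call safe for cleanup prior to a fresh create().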
return True = database successfully dropped, False = error """ db = None # Get the SQL_MODE set on the server sql_mode = server.select_variable("SQL_MODE") if db_name: db = db_name if is_quoted_with_backticks(db_name, sql_mode) \ else quote_with_backticks(db_name, sql_mode) else: db = self.q_db_name op_ok = False if quiet: try: server.exec_query("DROP DATABASE %s" % (db), self.query_options) op_ok = True except: pass else: server.exec_query("DROP DATABASE %s" % (db), self.query_options) op_ok = True return op_ok def create(self, server, db_name=None, charset_name=None, collation_name=None): """Create the database server[in] A Server object db_name[in] database name (optional) If omitted, operation is performed on the database name of the class instance. charset_name[in] default character set for the database (optional) collation_name[in] default collation for the database (optional) return True = database successfully created, False = error """ # Get the SQL_MODE set on the server sql_mode = server.select_variable("SQL_MODE") if db_name: db = db_name if is_quoted_with_backticks(db_name, sql_mode) \ else quote_with_backticks(db_name, sql_mode) else: db = self.q_db_name specification = "" if charset_name: specification = " DEFAULT CHARACTER SET {0}".format(charset_name) if collation_name: specification = "{0} DEFAULT COLLATE {1}".format(specification, collation_name) query_create_db = "CREATE DATABASE {0} {1}".format(db, specification) server.exec_query(query_create_db, self.query_options) return True def __make_create_statement(self, obj_type, obj): """Construct a CREATE statement for a database object. This method will get the CREATE statement from the method get_create_statement() and also replace all occurrences of the old database name with the new. obj_type[in] Object type (string) e.g. DATABASE obj[in] A row from the get_db_objects() method that contains the elements of the object Note: This does not work for tables. Returns the CREATE string """ if not self.new_db: self.new_db = self.db_name self.q_new_db = self.q_db_name create_str = None # Tables are not supported here when cloning. if obj_type == _TABLE and self.cloning: return None # Grants are a different animal! if obj_type == _GRANT: if obj[3]: create_str = "GRANT %s ON %s.%s TO %s" % \ (obj[1], self.q_new_db, obj[3], obj[0]) else: create_str = "GRANT %s ON %s.* TO %s" % \ (obj[1], self.q_new_db, obj[0]) else: create_str = self.get_create_statement(self.db_name, obj[0], obj_type) if self.new_db != self.db_name: # Replace the occurrences of the old database name (quoted with # backticks) with the new one when preceded by: a whitespace # character, comma or optionally a left parenthesis. create_str = re.sub( r"(\s|,)(\(?){0}\.".format(self.q_db_name), r"\1\2{0}.".format(self.q_new_db), create_str ) # Replace the occurrences of the old database name (without # backticks) with the new one when preceded by: a whitespace # character, comma or optionally a left parenthesis and # surrounded by single or double quotes. create_str = re.sub( r"(\s|,)(\(?)(\"|\'?){0}(\"|\'?)\.".format(self.db_name), r"\1\2\3{0}\4.".format(self.new_db), create_str ) return create_str def _get_views_sorted_by_dependencies(self, views, columns, need_backtick=True): """Get a list of views sorted by their dependencies.
views[in] List of view objects columns[in] Column mode - names (default), brief, or full need_backtick[in] True if view names need backtick quoting Returns the list of views sorted by their dependencies """ if columns == "names": name_idx = 0 elif columns == "full": name_idx = 2 else: name_idx = 1 def _get_dependent_views(view, v_name_dict): """Get a list with all the dependent views for a given view view [in] current view being analyzed v_name_dict [in] mapping from (possibly quoted) view names to their view entries """ # Get view name and use backticks if necessary v_name = view[name_idx] if need_backtick: v_name = quote_with_backticks(v_name, self.sql_mode) # Get the view CREATE statement and, for each view in v_name_dict, # see if it is mentioned in the statement stmt = self.get_create_statement(self.db_name, v_name, _VIEW) base_views = [] for v in v_name_dict: # Do not match the view against itself if v != v_name: index = stmt.find(v) if index >= 0: base_views.append(v_name_dict[v]) return base_views def build_view_deps(view_lst): """Get a list of views sorted by their dependencies. view_lst [in] list with views yet to be ordered Returns the list of views sorted by their dependencies """ # Mapping from view names to views (brief, name or full) v_name_dict = {} for view in view_lst: key = quote_with_backticks(view[name_idx], self.sql_mode) if \ need_backtick else view[name_idx] v_name_dict[key] = view # Initialize the list of sorted views sorted_views = [] # Set of views whose dependencies were/are being analyzed. visited_views = set() # Set of views that have already been processed # (subset of visited_views). Contains the same elements as # sorted_views. processed_views = set() # Init stack view_stack = view_lst[:] while view_stack: curr_view = view_stack[-1] # look at top of the stack if curr_view in visited_views: view_stack.pop() if curr_view not in processed_views: sorted_views.append(curr_view) processed_views.add(curr_view) else: visited_views.add(curr_view) children_views = _get_dependent_views(curr_view, v_name_dict) if children_views: for child in children_views: # Push base views that have not yet been processed # onto the stack if child not in processed_views: view_stack.append(child) # No more views on the stack, return list of sorted views return sorted_views # Result without column names if isinstance(views[0], tuple): return build_view_deps(views) # Return the tuple reconstructed with the views sorted return (views[0], build_view_deps(views[1]),) def __add_db_objects(self, obj_type): """Get a list of objects from a database based on type. This method retrieves the list of objects for a specific object type and adds it to the class' master object list. obj_type[in] Object type (string) e.g. DATABASE """ rows = self.get_db_objects(obj_type) if rows: for row in rows: tup = (obj_type, row) self.objects.append(tup) def init(self): """Get all objects for the database based on options set. This method initializes the database object with a list of all objects except those object types that are excluded. It calls the helper method self.__add_db_objects() for each type of object. NOTE: This method must be called before the copy method. A guard is in place to ensure this.
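A minimal, illustrative calling sequence (the server objects and
database names here are hypothetical):

    db = Database(server, 'employees', {'skip_events': True})
    db.init()
    db.copy_objects('employees_copy', {'do_drop': True}, new_server)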
""" self.init_called = True # Get tables if not self.skip_tables: self.__add_db_objects(_TABLE) # Get functions if not self.skip_funcs: self.__add_db_objects(_FUNC) # Get stored procedures if not self.skip_procs: self.__add_db_objects(_PROC) # Get views if not self.skip_views: self.__add_db_objects(_VIEW) # Get triggers if not self.skip_triggers: self.__add_db_objects(_TRIG) # Get events if not self.skip_events: self.__add_db_objects(_EVENT) # Get grants if not self.skip_grants: self.__add_db_objects(_GRANT) def __drop_object(self, obj_type, name): """Drop a database object. Attempts a quiet drop of a database object (no errors are printed). obj_type[in] Object type (string) e.g. DATABASE name[in] Name of the object """ if self.verbose: print "# Dropping new object %s %s.%s" % \ (obj_type, self.new_db, name) drop_str = "DROP %s %s.%s" % \ (obj_type, self.q_new_db, name) # Suppress the error on drop if self.cloning: try: self.source.exec_query(drop_str, self.query_options) except UtilError: if self.verbose: print("# WARNING: Unable to drop {0} from {1} database " "(object may not exist): {2}".format(name, "source", drop_str)) else: try: self.destination.exec_query(drop_str, self.query_options) except UtilError: if self.verbose: print("# WARNING: Unable to drop {0} from {1} database " "(object may not exist): {2}".format(name, "destination", drop_str)) def __create_object(self, obj_type, obj, show_grant_msg, quiet=True, new_engine=None, def_engine=None): """Create a database object. obj_type[in] Object type (string) e.g. DATABASE obj[in] A row from the get_db_object_names() method that contains the elements of the object show_grant_msg[in] If true, display diagnostic information quiet[in] do not print informational messages new_engine[in] Use this engine if not None for object def_engine[in] If target storage engine doesn't exist, use this engine. Note: will handle exception and print error if query fails """ # Use the sql_mode set on destination server dest_sql_mode = self.destination.select_variable("SQL_MODE") q_new_db = quote_with_backticks(self.new_db, dest_sql_mode) q_db_name = quote_with_backticks(self.db_name, dest_sql_mode) if obj_type == _TABLE and self.cloning: obj_name = quote_with_backticks(obj[0], dest_sql_mode) create_list = ["CREATE TABLE {0!s}.{1!s} LIKE {2!s}.{1!s}".format( q_new_db, obj_name, q_db_name) ] else: create_list = [self.__make_create_statement(obj_type, obj)] if obj_type == _TABLE: may_skip_fk = False # Check possible issues with FK Constraints obj_name = quote_with_backticks(obj[0], dest_sql_mode) tbl_name = "%s.%s" % (self.q_new_db, obj_name) create_list = self.destination.substitute_engine(tbl_name, create_list[0], new_engine, def_engine, quiet) # Get storage engines from the source table and destination table # If the source table's engine is INNODB and the destination is # not we will loose any FK constraints that may exist src_eng = self.get_object_definition(self.q_db_name, obj[0], obj_type)[0][0][2] dest_eng = None # Information about the engine is always in the last statement of # the list, be it a regular create table statement or a create # table; alter table statement. 
i = create_list[-1].find("ENGINE=") if i > 0: j = create_list[-1].find(" ", i) dest_eng = create_list[-1][i + 7:j] dest_eng = dest_eng or src_eng if src_eng.upper() == 'INNODB' and dest_eng.upper() != 'INNODB': may_skip_fk = True string = "# Copying" if not quiet: if obj_type == _GRANT: if show_grant_msg: print("%s GRANTS from %s" % (string, self.db_name)) else: print("%s %s %s.%s" % (string, obj_type, self.db_name, obj[0])) if self.verbose: print("; ".join(create_list)) try: self.destination.exec_query("USE %s" % self.q_new_db, self.query_options) except: pass for stm in create_list: try: if obj_type == _GRANT: user = User(self.destination, obj[0]) if not user.exists(): user.create() self.destination.exec_query(stm, self.query_options) except Exception as e: raise UtilDBError("Cannot operate on {0} object." " Error: {1}".format(obj_type, e.errmsg), -1, self.db_name) # Look for foreign key constraints if obj_type == _TABLE: params = { 'DATABASE': self.db_name, 'TABLE': obj[0], } try: query = _FK_CONSTRAINT_QUERY.format(**params) fkey_constr = self.source.exec_query(query) except Exception as e: raise UtilDBError("Unable to obtain Foreign Key constraint " "information for table {0}.{1}. " "Error: {2}".format(self.db_name, obj[0], e.errmsg), -1, self.db_name) # Get information about the foreign keys of the table being # copied/cloned. if fkey_constr and not may_skip_fk: # Create a constraint dictionary with the constraint # name as key constr_dict = {} # This list is used to ensure the same constraints are applied # in the same order, because iterating the dictionary doesn't # offer any guarantees regarding order, and Python 2.6 has # no OrderedDict constr_lst = [] for fkey in fkey_constr: params = constr_dict.get(fkey[1]) # If the constraint entry already exists, it means it # is composite; just update the column names and # referenced columns fields if params: params['COLUMN_NAMES'].append(fkey[2]) params['REFERENCED_COLUMNS'].append(fkey[5]) else: # else create a new entry constr_lst.append(fkey[1]) constr_dict[fkey[1]] = { 'DATABASE': self.new_db, 'TABLE': fkey[0], 'CONSTRAINT_NAME': fkey[1], 'COLUMN_NAMES': [fkey[2]], 'REFERENCED_DATABASE': fkey[3], 'REFERENCED_TABLE': fkey[4], 'REFERENCED_COLUMNS': [fkey[5]], 'UPDATE_RULE': fkey[6], 'DELETE_RULE': fkey[7], } # Iterate all the constraints and get the necessary parameters # to create the query for constr in constr_lst: params = constr_dict[constr] if self.cloning: # if it is a cloning table operation # In case the foreign key is composite we need to join # the columns to use in the ALTER TABLE query. Only # useful when cloning params['COLUMN_NAMES'] = '`,`'.join( params['COLUMN_NAMES']) params['REFERENCED_COLUMNS'] = '`,`'.join( params['REFERENCED_COLUMNS']) # If the foreign key points to a table under the # database being cloned, change the referenced database # name to the new cloned database if params['REFERENCED_DATABASE'] == self.db_name: params['REFERENCED_DATABASE'] = self.new_db else: print("# WARNING: The database being cloned has " "external Foreign Key constraint " "dependencies, {0}.{1} depends on {2}."
"{3}".format(params['DATABASE'], params['TABLE'], params['REFERENCED_DATABASE'], params['REFERENCED_TABLE']) ) query = _ALTER_TABLE_ADD_FK_CONSTRAINT.format(**params) # Store constraint query for later execution self.constraints.append(query) if self.verbose: print(query) else: # if we are copying if params['REFERENCED_DATABASE'] != self.db_name: # if the table being copied has dependencies # to external databases print("# WARNING: The database being copied has " "external Foreign Key constraint " "dependencies, {0}.{1} depends on {2}." "{3}".format(params['DATABASE'], params['TABLE'], params['REFERENCED_DATABASE'], params['REFERENCED_TABLE']) ) elif fkey_constr and may_skip_fk: print("# WARNING: FOREIGN KEY constraints for table {0}.{1} " "are missing because the new storage engine for " "the table is not InnoDB".format(self.new_db, obj[0])) def __apply_constraints(self): """This method applies to the database the constraints stored in the self.constraints instance variable """ # Enable Foreign Key Checks to prevent the swapping of # RESTRICT referential actions with NO ACTION query_opts = {'fetch': False, 'commit': False} self.destination.exec_query("SET FOREIGN_KEY_CHECKS=1", query_opts) # while constraint queue is not empty while self.constraints: try: query = self.constraints.pop() except IndexError: # queue is empty, exit while statement break if self.verbose: print(query) try: self.destination.exec_query(query, query_opts) except Exception as err: raise UtilDBError("Unable to execute constraint query " "{0}. Error: {1}".format(query, err.errmsg), -1, self.new_db) # Turn Foreign Key Checks off again self.destination.exec_query("SET FOREIGN_KEY_CHECKS=0", query_opts) def copy_objects(self, new_db, options, new_server=None, connections=1, check_exists=True): """Copy the database objects. This method will copy a database and all of its objects and data to another, new database. Options set at instantiation will determine if there are objects that are excluded from the copy. Likewise, the method will also skip data if that option was set and process an input file with INSERT statements if that option was set. The method can also be used to copy a database to another server by providing the new server object (new_server). Copy to the same name by setting new_db = old_db or as a new database. new_db[in] Name of the new database options[in] Options for copy e.g. do_drop, etc. new_server[in] Connection to another server for copying the db Default is None (copy to same server - clone) connections[in] Number of threads(connections) to use for insert check_exists[in] If True, check for database existence before copy Default is True """ # Must call init() first! # Guard for init() prerequisite assert self.init_called, "You must call db.init() before " + \ "db.copy_objects()." grant_msg_displayed = False # Get sql_mode in new_server sql_mode = new_server.select_variable("SQL_MODE") if new_db: # Assign new database identifier considering backtick quotes. if is_quoted_with_backticks(new_db, sql_mode): self.q_new_db = new_db self.new_db = remove_backtick_quoting(new_db, sql_mode) else: self.new_db = new_db self.q_new_db = quote_with_backticks(new_db, sql_mode) else: # If new_db is not defined use the same as source database. self.new_db = self.db_name self.q_new_db = self.q_db_name self.destination = new_server # We know we're cloning if there is no new connection. 
self.cloning = (new_server == self.source) if self.cloning: self.destination = self.source # Check to see if database exists if check_exists: if self.cloning: exists = self.exists(self.source, new_db) drop_server = self.source else: exists = self.exists(self.destination, new_db) drop_server = self.destination if exists: if options.get("do_drop", False): self.drop(drop_server, True, new_db) elif not self.skip_create: raise UtilDBError("Destination database exists. Use " "--drop-first to overwrite existing " "database.", -1, new_db) db_name = self.db_name definition = self.get_object_definition(db_name, db_name, _DATABASE) _, character_set, collation, _ = definition[0] # Create new database first if not self.skip_create: if self.cloning: self.create(self.source, new_db, character_set, collation) else: self.create(self.destination, new_db, character_set, collation) # Get sql_mode set on destination server dest_sql_mode = self.destination.select_variable("SQL_MODE") # Create the objects in the new database for obj in self.objects: # Drop object if --drop-first specified and database not dropped # Grants do not need to be dropped for overwriting if options.get("do_drop", False) and obj[0] != _GRANT: obj_name = quote_with_backticks(obj[1][0], dest_sql_mode) self.__drop_object(obj[0], obj_name) # Create the object self.__create_object(obj[0], obj[1], not grant_msg_displayed, options.get("quiet", False), options.get("new_engine", None), options.get("def_engine", None)) if obj[0] == _GRANT and not grant_msg_displayed: grant_msg_displayed = True # After object creation, add the constraints if self.constraints: self.__apply_constraints() def copy_data(self, new_db, options, new_server=None, connections=1, src_con_val=None, dest_con_val=None): """Copy the data for the tables. This method will copy the data for all of the tables to another, new database. The method will process an input file with INSERT statements if the option was selected by the caller. new_db[in] Name of the new database options[in] Options for copy, e.g. do_drop, etc. new_server[in] Connection to another server for copying the db Default is None (copy to same server - clone) connections[in] Number of threads (connections) to use for inserts src_con_val[in] Dict. with the connection values of the source server (required for multiprocessing). dest_con_val[in] Dict. with the connection values of the destination server (required for multiprocessing). """ # Must call init() first! # Guard for init() prerequisite assert self.init_called, "You must call db.init() before " + \ "db.copy_data()." if self.skip_data: return self.destination = new_server # We know we're cloning if there is no new connection. self.cloning = (new_server == self.source) if self.cloning: self.destination = self.source quiet = options.get("quiet", False) tbl_options = { 'verbose': self.verbose, 'get_cols': True, 'quiet': quiet } copy_tbl_tasks = [] table_names = [obj[0] for obj in self.get_db_objects(_TABLE)] for tblname in table_names: # Check multiprocess table copy (only on POSIX systems). if options['multiprocess'] > 1 and os.name == 'posix': # Create copy task. copy_task = { 'source_srv': src_con_val, 'dest_srv': dest_con_val, 'source_db': self.db_name, 'target_db': new_db, 'table': tblname, 'options': tbl_options, 'cloning': self.cloning, } copy_tbl_tasks.append(copy_task) else: # Copy data from a table (no multiprocessing). _copy_table_data(self.source, self.destination, self.db_name, new_db, tblname, tbl_options, self.cloning) # Copy tables concurrently.
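# An illustrative task dict handed to the worker pool below (connection
# values are hypothetical):
#     {'source_srv': {'user': 'root', 'host': 'src_host', 'port': 3306},
#      'dest_srv': {'user': 'root', 'host': 'dst_host', 'port': 3306},
#      'source_db': 'db1', 'target_db': 'db1_copy', 'table': 't1',
#      'options': {'verbose': False, 'get_cols': True, 'quiet': True},
#      'cloning': False}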
if copy_tbl_tasks: # Create process pool. workers_pool = multiprocessing.Pool( processes=options['multiprocess'] ) # Concurrently copy tables. workers_pool.map_async(_multiprocess_tbl_copy_task, copy_tbl_tasks) workers_pool.close() # Wait for all tasks to be completed by the workers. workers_pool.join() def get_create_statement(self, db, name, obj_type): """Return the create statement for the object db[in] Database name name[in] Name of the object obj_type[in] Object type (string) e.g. DATABASE Note: this is used to form the correct SHOW command Returns create statement """ # Save current sql_mode and switch it to '' momentarily as this # prevents issues when copying blobs and destination server is # set with SQL_MODE='NO_BACKSLASH_ESCAPES' prev_sql_mode = '' if (self.destination is not None and 'ANSI_QUOTES' in self.sql_mode and 'ANSI_QUOTES' not in self.destination.select_variable("SQL_MODE")): prev_sql_mode = self.source.select_variable("SQL_MODE") self.source.exec_query("SET @@SESSION.SQL_MODE=''") self.sql_mode = "" # Quote with current sql_mode name = (name if not is_quoted_with_backticks(name, prev_sql_mode) else remove_backtick_quoting(name, prev_sql_mode)) db = (db if not is_quoted_with_backticks(db, prev_sql_mode) else remove_backtick_quoting(db, prev_sql_mode)) # Quote database and object name with backticks. q_name = (name if is_quoted_with_backticks(name, self.sql_mode) else quote_with_backticks(name, self.sql_mode)) if obj_type == _DATABASE: name_str = q_name else: q_db = (db if is_quoted_with_backticks(db, self.sql_mode) else quote_with_backticks(db, self.sql_mode)) # Switch the default database to execute the # SHOW CREATE statement without needing to specify the database. # This is for 5.1 compatibility reasons: try: self.source.exec_query("USE {0}".format(q_db), self.query_options) except UtilError as err: raise UtilDBError("ERROR: Couldn't change " "default database: {0}".format(err.errmsg)) name_str = q_name # Retrieve the CREATE statement. row = self.source.exec_query( "SHOW CREATE {0} {1}".format(obj_type, name_str) ) # Restore previous sql_mode if prev_sql_mode: self.source.exec_query("SET @@SESSION.SQL_MODE={0}" "".format(prev_sql_mode)) self.sql_mode = prev_sql_mode create_statement = None if row: if obj_type == _TABLE or obj_type == _VIEW or \ obj_type == _DATABASE: create_statement = row[0][1] elif obj_type == _EVENT: create_statement = row[0][3] else: create_statement = row[0][2] # Remove all table options from the CREATE statement (if requested). if self.skip_table_opts and obj_type == _TABLE: # First, get partition options. create_tbl, sep, part_opts = create_statement.rpartition('\n/*') # Handle situation where no partition options are found. if not create_tbl: create_tbl = part_opts part_opts = '' else: part_opts = "{0}{1}".format(sep, part_opts) # Then, separate table definitions from table options. create_tbl, sep, _ = create_tbl.rpartition(') ') # Reconstruct CREATE statement without table options. create_statement = "{0}{1}{2}".format(create_tbl, sep, part_opts) return create_statement def get_create_table(self, db, table): """Return the create table statement for the given table. This method returns the CREATE TABLE statement for the given table with or without the table options, according to the Database object property 'skip_table_opts'. db[in] Database name. table[in] Table name. Returns a tuple with the CREATE TABLE statement and table options (or None).
If skip_table_opts=True, the CREATE statement does not include the table options, which are returned separately; otherwise, the table options are included in the CREATE statement and None is returned as the second tuple element. """ # Quote database and table name with backticks. q_table = (table if is_quoted_with_backticks(table, self.sql_mode) else quote_with_backticks(table, self.sql_mode)) q_db = db if is_quoted_with_backticks(db, self.sql_mode) else \ quote_with_backticks(db, self.sql_mode) # Retrieve CREATE TABLE. try: row = self.source.exec_query( "SHOW CREATE TABLE {0}.{1}".format(q_db, q_table) ) create_tbl = row[0][1] except UtilError as err: raise UtilDBError("Error retrieving CREATE TABLE for {0}.{1}: " "{2}".format(q_db, q_table, err.errmsg)) # Separate table options from table definition. tbl_opts = None if self.skip_table_opts: # First, get partition options. create_tbl, sep, part_opts = create_tbl.rpartition('\n/*') # Handle situation where no partition options are found. if not create_tbl: create_tbl = part_opts part_opts = '' else: part_opts = "{0}{1}".format(sep, part_opts) # Then, separate table definitions from table options. create_tbl, sep, tbl_opts = create_tbl.rpartition(') ') # Reconstruct CREATE TABLE without table options. create_tbl = "{0}{1}{2}".format(create_tbl, sep, part_opts) return create_tbl, tbl_opts def get_table_options(self, db, table): """Return the table options. This method returns the list of used table options (from the CREATE TABLE statement). db[in] Database name. table[in] Table name. Returns a list of table options. For example: ['AUTO_INCREMENT=5','ENGINE=InnoDB'] """ # Quote database and table name with backticks. q_table = (table if is_quoted_with_backticks(table, self.sql_mode) else quote_with_backticks(table, self.sql_mode)) q_db = db if is_quoted_with_backticks(db, self.sql_mode) else \ quote_with_backticks(db, self.sql_mode) # Retrieve CREATE TABLE statement. try: row = self.source.exec_query( "SHOW CREATE TABLE {0}.{1}".format(q_db, q_table) ) create_tbl = row[0][1] except UtilError as err: raise UtilDBError("Error retrieving CREATE TABLE for {0}.{1}: " "{2}".format(q_db, q_table, err.errmsg)) # First, separate partition options. create_tbl, _, part_opts = create_tbl.rpartition('\n/*') # Handle situation where no partition options are found. create_tbl = part_opts if not create_tbl else create_tbl # Then, separate table options from table definition. create_tbl, _, tbl_opts = create_tbl.rpartition(') ') table_options = tbl_opts.split() return table_options def get_object_definition(self, db, name, obj_type): """Return a list of the object's creation metadata. This method queries the INFORMATION_SCHEMA or MYSQL database for the row-based (list) description of the object. This is similar to the output of EXPLAIN. db[in] Database name name[in] Name of the object obj_type[in] Object type (string) e.g.
DATABASE Note: this is used to form the correct SHOW command Returns list - object definition, or an empty list if the object does not exist """ definition = [] from_name = None condition = None # Remove object backticks if needed db = remove_backtick_quoting(db, self.sql_mode) \ if is_quoted_with_backticks(db, self.sql_mode) else db name = remove_backtick_quoting(name, self.sql_mode) \ if is_quoted_with_backticks(name, self.sql_mode) else name if obj_type == _DATABASE: columns = 'SCHEMA_NAME, DEFAULT_CHARACTER_SET_NAME, ' + \ 'DEFAULT_COLLATION_NAME, SQL_PATH' from_name = 'SCHEMATA' condition = "SCHEMA_NAME = '%s'" % name elif obj_type == _TABLE: columns = 'TABLE_SCHEMA, TABLE_NAME, ENGINE, AUTO_INCREMENT, ' + \ 'AVG_ROW_LENGTH, CHECKSUM, TABLE_COLLATION, ' + \ 'TABLE_COMMENT, ROW_FORMAT, CREATE_OPTIONS' from_name = 'TABLES' condition = "TABLE_SCHEMA = '%s' AND TABLE_NAME = '%s'" % \ (db, name) elif obj_type == _VIEW: columns = 'TABLE_SCHEMA, TABLE_NAME, VIEW_DEFINITION, ' + \ 'CHECK_OPTION, DEFINER, SECURITY_TYPE' from_name = 'VIEWS' condition = "TABLE_SCHEMA = '%s' AND TABLE_NAME = '%s'" % \ (db, name) elif obj_type == _TRIG: columns = 'TRIGGER_SCHEMA, TRIGGER_NAME, EVENT_MANIPULATION, ' + \ 'EVENT_OBJECT_TABLE, ACTION_STATEMENT, ' + \ 'ACTION_TIMING, DEFINER' from_name = 'TRIGGERS' condition = "TRIGGER_SCHEMA = '%s' AND TRIGGER_NAME = '%s'" % \ (db, name) elif obj_type == _PROC or obj_type == _FUNC: columns = 'ROUTINE_SCHEMA, ROUTINE_NAME, ROUTINE_DEFINITION, ' + \ 'ROUTINES.SQL_DATA_ACCESS, ROUTINES.SECURITY_TYPE, ' + \ 'ROUTINE_COMMENT, ROUTINES.DEFINER, param_list, ' + \ 'DTD_IDENTIFIER, ROUTINES.IS_DETERMINISTIC' from_name = 'ROUTINES JOIN mysql.proc ON ' + \ 'ROUTINES.ROUTINE_SCHEMA = proc.db AND ' + \ 'ROUTINES.ROUTINE_NAME = proc.name AND ' + \ 'ROUTINES.ROUTINE_TYPE = proc.type ' condition = "ROUTINE_SCHEMA = '%s' AND ROUTINE_NAME = '%s'" % \ (db, name) if obj_type == _PROC: typ = 'PROCEDURE' else: typ = 'FUNCTION' condition += " AND ROUTINE_TYPE = '%s'" % typ elif obj_type == _EVENT: columns = ('EVENT_SCHEMA, EVENT_NAME, DEFINER, EVENT_DEFINITION, ' 'EVENT_TYPE, INTERVAL_FIELD, INTERVAL_VALUE, STATUS, ' 'ON_COMPLETION, STARTS, ENDS') from_name = 'EVENTS' condition = "EVENT_SCHEMA = '%s' AND EVENT_NAME = '%s'" % \ (db, name) if from_name is None: raise UtilError('Attempting to get definition from unknown object ' 'type = %s.' % obj_type) values = { 'columns': columns, 'table_name': from_name, 'conditions': condition, } rows = self.source.exec_query(_DEFINITION_QUERY % values) if rows != []: # If this is a table, we need three types of information: # basic info, column info, and partitions info if obj_type == _TABLE: values['name'] = name values['db'] = db basic_def = rows[0] col_def = self.source.exec_query(_COLUMN_QUERY % values) part_def = self.source.exec_query(_PARTITION_QUERY % values) definition.append((basic_def, col_def, part_def)) else: definition.append(rows[0]) return definition def get_next_object(self): """Retrieve the next object in the database list. This method is an iterator for retrieving the objects in the database as specified in the init() method, which must be called first. Yields the next object in the list; raises StopIteration at the end of the list. """ # Must call init() first! # Guard for init() prerequisite assert self.init_called, "You must call db.init() before db.get_next_object()." for obj in self.objects: yield obj def __build_exclude_patterns(self, exclude_param): """Return a string to add to the WHERE clause to exclude objects.
This method builds the conditions to exclude objects by name (when dot notation is used) or by a search pattern, as specified by the options. exclude_param[in] Name of column to check. Returns (string) String to add to the WHERE clause, or "" """ oper = 'NOT REGEXP' if self.use_regexp else 'NOT LIKE' string = "" for pattern in self.exclude_patterns: # Check use of qualified object names (with backtick support). if pattern.find(".") > 0: use_backtick = is_quoted_with_backticks(pattern, self.sql_mode) db, name = parse_object_name(pattern, self.sql_mode, True) if use_backtick: # Remove backtick quotes. db = remove_backtick_quoting(db, self.sql_mode) name = remove_backtick_quoting(name, self.sql_mode) if db == self.db_name: # Check if database name matches. value = name # Only use the object name to exclude. else: value = pattern # Otherwise directly use the specified pattern. else: value = pattern if value: # Append exclude condition to previous one(s). string = "{0} AND {1} {2} {3}".format(string, exclude_param, oper, obj2sql(value)) return string def get_object_type(self, object_name): """Return the object type of an object This method attempts to locate the object name among the objects in the database. It returns the object type if found or None if not found. Note: different types of objects with the same name might exist in the database. object_name[in] Name of the object to find Returns (list of strings) with the object types or None if not found """ object_types = None # Remove object backticks if needed obj_name = remove_backtick_quoting(object_name, self.sql_mode) \ if is_quoted_with_backticks(object_name, self.sql_mode) else \ object_name res = self.source.exec_query(_OBJTYPE_QUERY % {'db_name': self.db_name, 'obj_name': obj_name}) if res: object_types = ['TABLE' if row[0] == 'BASE TABLE' else row[0] for row in res] return object_types def get_db_objects(self, obj_type, columns='names', get_columns=False, need_backtick=False): """Return a result set containing a list of objects for a given database based on type. This method returns either a list of names for the object type specified, a brief list of minimal columns for creating the objects, or the full list of columns from INFORMATION_SCHEMA. It can also provide the list of column names if desired. obj_type[in] Type of object to retrieve columns[in] Column mode - names (default), brief, or full Note: not valid for GRANT objects. get_columns[in] If True, return column names as first element and result set as second element. If False, return only the result set. need_backtick[in] If True, it returns any identifiers, e.g. table and column names, quoted with backticks. By default, False. TODO: Change implementation to return classes instead of a result set.
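Illustrative usage (assumes an initialized Database instance db):

    tables = db.get_db_objects(_TABLE, columns='names',
                               need_backtick=True)
    # e.g. [('`t1`',), ('`t2`',)]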
Returns mysql.connector result set """ exclude_param = "" if obj_type == _TABLE: _NAMES = """ SELECT DISTINCT TABLES.TABLE_NAME """ names_pos_to_quote = (0,) _FULL = """ SELECT TABLES.TABLE_CATALOG, TABLES.TABLE_SCHEMA, TABLES.TABLE_NAME, TABLES.TABLE_TYPE, TABLES.ENGINE, TABLES.VERSION, TABLES.ROW_FORMAT, TABLES.TABLE_ROWS, TABLES.AVG_ROW_LENGTH, TABLES.DATA_LENGTH, TABLES.MAX_DATA_LENGTH, TABLES.INDEX_LENGTH, TABLES.DATA_FREE, TABLES.AUTO_INCREMENT, TABLES.CREATE_TIME, TABLES.UPDATE_TIME, TABLES.CHECK_TIME, TABLES.TABLE_COLLATION, TABLES.CHECKSUM, TABLES.CREATE_OPTIONS, TABLES.TABLE_COMMENT, COLUMNS.ORDINAL_POSITION, COLUMNS.COLUMN_NAME, COLUMNS.COLUMN_TYPE, COLUMNS.IS_NULLABLE, COLUMNS.COLUMN_DEFAULT, COLUMNS.COLUMN_KEY, REFERENTIAL_CONSTRAINTS.CONSTRAINT_NAME, REFERENTIAL_CONSTRAINTS.REFERENCED_TABLE_NAME, REFERENTIAL_CONSTRAINTS.UNIQUE_CONSTRAINT_NAME, REFERENTIAL_CONSTRAINTS.UNIQUE_CONSTRAINT_SCHEMA, REFERENTIAL_CONSTRAINTS.UPDATE_RULE, REFERENTIAL_CONSTRAINTS.DELETE_RULE, KEY_COLUMN_USAGE.CONSTRAINT_NAME AS KEY_CONSTRAINT_NAME, KEY_COLUMN_USAGE.COLUMN_NAME AS COL_NAME, KEY_COLUMN_USAGE.REFERENCED_TABLE_SCHEMA, KEY_COLUMN_USAGE.REFERENCED_COLUMN_NAME """ full_pos_to_quote = (1, 2, 22, 27, 28, 29, 30, 33, 34, 35, 36) full_pos_split_quote = (34, 36) _MINIMAL = """ SELECT TABLES.TABLE_SCHEMA, TABLES.TABLE_NAME, TABLES.ENGINE, COLUMNS.ORDINAL_POSITION, COLUMNS.COLUMN_NAME, COLUMNS.COLUMN_TYPE, COLUMNS.IS_NULLABLE, COLUMNS.COLUMN_DEFAULT, COLUMNS.COLUMN_KEY, TABLES.TABLE_COLLATION, TABLES.CREATE_OPTIONS, REFERENTIAL_CONSTRAINTS.CONSTRAINT_NAME, REFERENTIAL_CONSTRAINTS.REFERENCED_TABLE_NAME, REFERENTIAL_CONSTRAINTS.UNIQUE_CONSTRAINT_NAME, REFERENTIAL_CONSTRAINTS.UPDATE_RULE, REFERENTIAL_CONSTRAINTS.DELETE_RULE, KEY_COLUMN_USAGE.CONSTRAINT_NAME AS KEY_CONSTRAINT_NAME, KEY_COLUMN_USAGE.COLUMN_NAME AS COL_NAME, KEY_COLUMN_USAGE.REFERENCED_TABLE_SCHEMA, KEY_COLUMN_USAGE.REFERENCED_COLUMN_NAME """ minimal_pos_to_quote = (0, 1, 4, 11, 12, 13, 16, 17, 18, 19) minimal_pos_split_quote = (17, 19) _OBJECT_QUERY = """ FROM INFORMATION_SCHEMA.TABLES JOIN INFORMATION_SCHEMA.COLUMNS ON TABLES.TABLE_SCHEMA = COLUMNS.TABLE_SCHEMA AND TABLES.TABLE_NAME = COLUMNS.TABLE_NAME LEFT JOIN INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS ON TABLES.TABLE_SCHEMA = REFERENTIAL_CONSTRAINTS.CONSTRAINT_SCHEMA AND TABLES.TABLE_NAME = REFERENTIAL_CONSTRAINTS.TABLE_NAME LEFT JOIN ( SELECT CONSTRAINT_SCHEMA, TABLE_NAME, CONSTRAINT_NAME, GROUP_CONCAT(COLUMN_NAME ORDER BY ORDINAL_POSITION) AS COLUMN_NAME, REFERENCED_TABLE_SCHEMA, GROUP_CONCAT(REFERENCED_COLUMN_NAME ORDER BY ORDINAL_POSITION) AS REFERENCED_COLUMN_NAME FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE GROUP BY CONSTRAINT_SCHEMA, TABLE_NAME, CONSTRAINT_NAME, REFERENCED_TABLE_SCHEMA ) AS KEY_COLUMN_USAGE ON TABLES.TABLE_SCHEMA = KEY_COLUMN_USAGE.CONSTRAINT_SCHEMA AND TABLES.TABLE_NAME = KEY_COLUMN_USAGE.TABLE_NAME WHERE TABLES.TABLE_SCHEMA = '%s' AND TABLE_TYPE <> 'VIEW' %s """ _ORDER_BY_DEFAULT = """ ORDER BY TABLES.TABLE_SCHEMA, TABLES.TABLE_NAME, COLUMNS.ORDINAL_POSITION """ _ORDER_BY_NAME = """ ORDER BY TABLES.TABLE_NAME """ exclude_param = "TABLES.TABLE_NAME" elif obj_type == _VIEW: _NAMES = """ SELECT TABLE_NAME """ names_pos_to_quote = (0,) _FULL = """ SELECT TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, VIEW_DEFINITION, CHECK_OPTION, IS_UPDATABLE, DEFINER, SECURITY_TYPE, CHARACTER_SET_CLIENT, COLLATION_CONNECTION """ full_pos_to_quote = (1, 2) full_pos_split_quote = () _MINIMAL = """ SELECT TABLE_SCHEMA, TABLE_NAME, DEFINER, SECURITY_TYPE, VIEW_DEFINITION, CHECK_OPTION, 
IS_UPDATABLE, CHARACTER_SET_CLIENT, COLLATION_CONNECTION """ minimal_pos_to_quote = (0, 1) minimal_pos_split_quote = () _OBJECT_QUERY = """ FROM INFORMATION_SCHEMA.VIEWS WHERE TABLE_SCHEMA = '%s' %s """ _ORDER_BY_DEFAULT = "" _ORDER_BY_NAME = "" exclude_param = "VIEWS.TABLE_NAME" elif obj_type == _TRIG: _NAMES = """ SELECT TRIGGER_NAME """ names_pos_to_quote = (0,) _FULL = """ SELECT TRIGGER_CATALOG, TRIGGER_SCHEMA, TRIGGER_NAME, EVENT_MANIPULATION, EVENT_OBJECT_CATALOG, EVENT_OBJECT_SCHEMA, EVENT_OBJECT_TABLE, ACTION_ORDER, ACTION_CONDITION, ACTION_STATEMENT, ACTION_ORIENTATION, ACTION_TIMING, ACTION_REFERENCE_OLD_TABLE, ACTION_REFERENCE_NEW_TABLE, ACTION_REFERENCE_OLD_ROW, ACTION_REFERENCE_NEW_ROW, CREATED, SQL_MODE, DEFINER, CHARACTER_SET_CLIENT, COLLATION_CONNECTION, DATABASE_COLLATION """ full_pos_to_quote = (1, 2, 5, 6) # 9 ? full_pos_split_quote = () _MINIMAL = """ SELECT TRIGGER_NAME, DEFINER, EVENT_MANIPULATION, EVENT_OBJECT_SCHEMA, EVENT_OBJECT_TABLE, ACTION_ORIENTATION, ACTION_TIMING, ACTION_STATEMENT, SQL_MODE, CHARACTER_SET_CLIENT, COLLATION_CONNECTION, DATABASE_COLLATION """ # Note: 7 (ACTION_STATEMENT) might require special handling minimal_pos_to_quote = (0, 3, 4) minimal_pos_split_quote = () _OBJECT_QUERY = """ FROM INFORMATION_SCHEMA.TRIGGERS WHERE TRIGGER_SCHEMA = '%s' %s """ _ORDER_BY_DEFAULT = "" _ORDER_BY_NAME = "" exclude_param = "TRIGGERS.TRIGGER_NAME" elif obj_type == _PROC: _NAMES = """ SELECT NAME """ names_pos_to_quote = (0,) _FULL = """ SELECT DB, NAME, TYPE, SPECIFIC_NAME, LANGUAGE, SQL_DATA_ACCESS, IS_DETERMINISTIC, SECURITY_TYPE, PARAM_LIST, RETURNS, BODY, DEFINER, CREATED, MODIFIED, SQL_MODE, COMMENT, CHARACTER_SET_CLIENT, COLLATION_CONNECTION, DB_COLLATION, BODY_UTF8 """ full_pos_to_quote = (0, 1, 3) full_pos_split_quote = () _MINIMAL = """ SELECT NAME, LANGUAGE, SQL_DATA_ACCESS, IS_DETERMINISTIC, SECURITY_TYPE, DEFINER, PARAM_LIST, RETURNS, BODY, SQL_MODE, CHARACTER_SET_CLIENT, COLLATION_CONNECTION, DB_COLLATION """ minimal_pos_to_quote = (0,) minimal_pos_split_quote = () _OBJECT_QUERY = """ FROM mysql.proc WHERE DB = '%s' AND TYPE = 'PROCEDURE' %s """ _ORDER_BY_DEFAULT = "" _ORDER_BY_NAME = "" exclude_param = "NAME" elif obj_type == _FUNC: _NAMES = """ SELECT NAME """ names_pos_to_quote = (0,) _FULL = """ SELECT DB, NAME, TYPE, SPECIFIC_NAME, LANGUAGE, SQL_DATA_ACCESS, IS_DETERMINISTIC, SECURITY_TYPE, PARAM_LIST, RETURNS, BODY, DEFINER, CREATED, MODIFIED, SQL_MODE, COMMENT, CHARACTER_SET_CLIENT, COLLATION_CONNECTION, DB_COLLATION, BODY_UTF8 """ full_pos_to_quote = (0, 1, 3) full_pos_split_quote = () _MINIMAL = """ SELECT NAME, LANGUAGE, SQL_DATA_ACCESS, IS_DETERMINISTIC, SECURITY_TYPE, DEFINER, PARAM_LIST, RETURNS, BODY, SQL_MODE, CHARACTER_SET_CLIENT, COLLATION_CONNECTION, DB_COLLATION """ minimal_pos_to_quote = (0,) minimal_pos_split_quote = () _OBJECT_QUERY = """ FROM mysql.proc WHERE DB = '%s' AND TYPE = 'FUNCTION' %s """ _ORDER_BY_DEFAULT = "" _ORDER_BY_NAME = "" exclude_param = "NAME" elif obj_type == _EVENT: _NAMES = """ SELECT NAME """ names_pos_to_quote = (0,) _FULL = """ SELECT DB, NAME, BODY, DEFINER, EXECUTE_AT, INTERVAL_VALUE, INTERVAL_FIELD, CREATED, MODIFIED, LAST_EXECUTED, STARTS, ENDS, STATUS, ON_COMPLETION, SQL_MODE, COMMENT, ORIGINATOR, TIME_ZONE, CHARACTER_SET_CLIENT, COLLATION_CONNECTION, DB_COLLATION, BODY_UTF8 """ full_pos_to_quote = (0, 1) full_pos_split_quote = () _MINIMAL = """ SELECT NAME, DEFINER, BODY, STATUS, EXECUTE_AT, INTERVAL_VALUE, INTERVAL_FIELD, SQL_MODE, STARTS, ENDS, STATUS, ON_COMPLETION, ORIGINATOR, 
CHARACTER_SET_CLIENT, COLLATION_CONNECTION, DB_COLLATION """ minimal_pos_to_quote = (0,) minimal_pos_split_quote = () _OBJECT_QUERY = """ FROM mysql.event WHERE DB = '%s' %s """ _ORDER_BY_DEFAULT = "" _ORDER_BY_NAME = "" exclude_param = "NAME" elif obj_type == _GRANT: _OBJECT_QUERY = """ ( SELECT GRANTEE, PRIVILEGE_TYPE, TABLE_SCHEMA, NULL as TABLE_NAME, NULL AS COLUMN_NAME, NULL AS ROUTINE_NAME FROM INFORMATION_SCHEMA.SCHEMA_PRIVILEGES WHERE table_schema = '%s' ) UNION ( SELECT grantee, privilege_type, table_schema, table_name, NULL, NULL FROM INFORMATION_SCHEMA.TABLE_PRIVILEGES WHERE table_schema = '%s' ) UNION ( SELECT grantee, privilege_type, table_schema, table_name, column_name, NULL FROM INFORMATION_SCHEMA.COLUMN_PRIVILEGES WHERE table_schema = '%s' ) UNION ( SELECT CONCAT('''', User, '''@''', Host, ''''), Proc_priv, Db, Routine_name, NULL, Routine_type FROM mysql.procs_priv WHERE Db = '%s' ) ORDER BY GRANTEE ASC, PRIVILEGE_TYPE ASC, TABLE_SCHEMA ASC, TABLE_NAME ASC, COLUMN_NAME ASC, ROUTINE_NAME ASC """ else: return None col_options = { 'columns': get_columns } pos_to_quote = () pos_split_quote = () if obj_type == _GRANT: query = _OBJECT_QUERY % (self.db_name, self.db_name, self.db_name, self.db_name) return self.source.exec_query(query, col_options) else: if columns == "names": prefix = _NAMES if need_backtick: pos_to_quote = names_pos_to_quote suffix = _ORDER_BY_NAME elif columns == "full": prefix = _FULL if need_backtick: pos_to_quote = full_pos_to_quote pos_split_quote = full_pos_split_quote suffix = _ORDER_BY_DEFAULT else: prefix = _MINIMAL if need_backtick: pos_to_quote = minimal_pos_to_quote pos_split_quote = minimal_pos_split_quote suffix = _ORDER_BY_DEFAULT # Form exclusion string exclude_str = "" if self.exclude_patterns: exclude_str = self.__build_exclude_patterns(exclude_param) query = (prefix + _OBJECT_QUERY + suffix) % (self.db_name, exclude_str) res = self.source.exec_query(query, col_options) # Quote required identifiers with backticks if need_backtick: new_rows = [] for row in res[1]: # Recreate row tuple quoting needed elements with backticks # Note: handle elements that can hold multiple values, # quoting them separately (e.g., multiple column names). r = [] for i, data in enumerate(row): if data and i in pos_to_quote: if i in pos_split_quote: cols = data.split(',') data = ','.join( [quote_with_backticks(col, self.sql_mode) for col in cols] ) r.append(data) else: r.append(quote_with_backticks(data, self.sql_mode)) else: r.append(data) new_rows.append(tuple(r)) # Set new result with required data quoted with backticks res = (res[0], new_rows) if res and obj_type == _VIEW: res = self._get_views_sorted_by_dependencies(res, columns, not need_backtick) return res def _check_user_permissions(self, uname, host, access): """Check user permissions for a given privilege uname[in] user name to check host[in] host name of connection access[in] tuple (database, privilege) to check, e.g. (db_name, "SELECT") Returns True if user has permission, False if not """ user = User(self.source, uname + '@' + host) result = user.has_privilege(access[0], '*', access[1]) return result def check_read_access(self, user, host, options): """Check access levels for reading database objects This method will check the user's permission levels for copying a database from this server. It will also skip specific checks if certain objects are not being copied (i.e., views, procs, funcs, grants).
user[in] user name to check host[in] host name to check options[in] dictionary of values to include: skip_views True = no views processed skip_procs True = no procedures processed skip_funcs True = no functions processed skip_grants True = no grants processed skip_events True = no events processed Returns True if the user has the permissions; raises a UtilDBError, with a message that includes the server context, if the user does not. """ # Build minimal list of privileges for source access source_privs = [] priv_tuple = (self.db_name, "SELECT") source_privs.append(priv_tuple) # if views are included, we need SHOW VIEW if not options.get('skip_views', False): priv_tuple = (self.db_name, "SHOW VIEW") source_privs.append(priv_tuple) # if procs, funcs, events or grants are included, we need read on # mysql db if not options.get('skip_procs', False) or \ not options.get('skip_funcs', False) or \ not options.get('skip_events', False) or \ not options.get('skip_grants', False): priv_tuple = ("mysql", "SELECT") source_privs.append(priv_tuple) # if events are included, we need EVENT if not options.get('skip_events', False): priv_tuple = (self.db_name, "EVENT") source_privs.append(priv_tuple) # if triggers are included, we need TRIGGER if not options.get('skip_triggers', False): priv_tuple = (self.db_name, "TRIGGER") source_privs.append(priv_tuple) # Check permissions on source for priv in source_privs: if not self._check_user_permissions(user, host, priv): raise UtilDBError("User %s on the %s server does not have " "permissions to read all objects in %s. " % (user, self.source.role, self.db_name) + "User needs %s privilege on %s." % (priv[1], priv[0]), -1, priv[0]) return True def check_write_access(self, user, host, options, source_objects=None, do_drop=False): """Check access levels for creating and writing database objects This method will check the user's permission levels for copying a database to this server. It will also skip specific checks if certain objects are not being copied (i.e., views, procs, funcs, grants). user[in] user name to check host[in] host name to check options[in] dictionary of values to include: skip_views True = no views processed skip_procs True = no procedures processed skip_funcs True = no functions processed skip_grants True = no grants processed skip_events True = no events processed source_objects[in] Dictionary containing the list of objects from source database do_drop[in] True if the user is using --drop-first option Returns True if the user has the permissions; raises a UtilDBError, with a message that includes the server context, if the user does not. """ if source_objects is None: source_objects = {} dest_privs = [(self.db_name, "CREATE"), (self.db_name, "ALTER"), (self.db_name, "SELECT"), (self.db_name, "INSERT"), (self.db_name, "UPDATE"), (self.db_name, "LOCK TABLES")] # Check for the --drop-first option if do_drop: dest_privs.append((self.db_name, "DROP")) extra_privs = [] super_needed = False try: res = self.source.exec_query("SELECT CURRENT_USER()") dest_user = res[0][0] except UtilError as err: raise UtilError("Unable to execute SELECT current_user(). 
Error: " "{0}".format(err.errmsg)) # CREATE VIEW is needed for views if not options.get("skip_views", False): views = source_objects.get("views", None) if views: extra_privs.append("CREATE VIEW") for item in views: # Test if DEFINER is equal to the current user if item[6] != dest_user: super_needed = True break # CREATE ROUTINE and EXECUTE are needed for procedures if not options.get("skip_procs", False): procs = source_objects.get("procs", None) if procs: extra_privs.append("CREATE ROUTINE") extra_privs.append("EXECUTE") if not super_needed: for item in procs: # Test if DEFINER is equal to the current user if item[11] != dest_user: super_needed = True break # CREATE ROUTINE and EXECUTE are needed for functions if not options.get("skip_funcs", False): funcs = source_objects.get("funcs", None) if funcs: if "CREATE ROUTINE" not in extra_privs: extra_privs.append("CREATE ROUTINE") if "EXECUTE" not in extra_privs: extra_privs.append("EXECUTE") if not super_needed: trust_function_creators = False try: res = self.source.show_server_variable( "log_bin_trust_function_creators" ) if res and isinstance(res, list) and \ res[0][1] in ("ON", "1"): trust_function_creators = True # If binary log is enabled and # log_bin_trust_function_creators is 0, we need # SUPER privilege super_needed = self.source.binlog_enabled() and \ not trust_function_creators except UtilError as err: raise UtilDBError("ERROR: {0}".format(err.errmsg)) if not super_needed: for item in funcs: # Test if DEFINER is equal to the current user if item[11] != dest_user: super_needed = True break # EVENT is needed for events if not options.get("skip_events", False): events = source_objects.get("events", None) if events: extra_privs.append("EVENT") if not super_needed: for item in events: # Test if DEFINER is equal to the current user if item[3] != dest_user: super_needed = True break # TRIGGER is needed for events if not options.get("skip_triggers", False): triggers = source_objects.get("triggers", None) if triggers: extra_privs.append("TRIGGER") if not super_needed: for item in triggers: # Test if DEFINER is equal to the current user if item[18] != dest_user: super_needed = True break # Add SUPER privilege if needed if super_needed: dest_privs.append(("*", "SUPER")) # Add extra privileges needed for priv in extra_privs: dest_privs.append((self.db_name, priv)) if not options.get('skip_grants', False): priv_tuple = (self.db_name, "GRANT OPTION") dest_privs.append(priv_tuple) # Check privileges on destination for priv in dest_privs: if not self._check_user_permissions(user, host, priv): raise UtilDBError("User %s on the %s server does not " "have permissions to create all objects " "in %s. User needs %s privilege on %s." % (user, self.source.role, priv[0], priv[1], priv[0]), -1, priv[0]) return True mysql-utilities-1.6.4/mysql/utilities/common/grants_info.py0000644001577100752670000005054412747670311023735 0ustar pb2usercommon# # Copyright (c) 2014, 2016, Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This file contains features to check which users hold privileges, specific or not, over a given object/list of objects. """ from collections import defaultdict from mysql.utilities.common.sql_transform import (is_quoted_with_backticks, remove_backtick_quoting) _TABLE_PRIV_QUERY = ("SELECT GRANTEE, IS_GRANTABLE, " "GROUP_CONCAT(PRIVILEGE_TYPE) " "FROM INFORMATION_SCHEMA.TABLE_PRIVILEGES WHERE " "TABLE_SCHEMA='{0}' AND TABLE_NAME='{1}' " "GROUP BY GRANTEE, IS_GRANTABLE") _DB_PRIVS_QUERY = ("SELECT GRANTEE, IS_GRANTABLE, " "GROUP_CONCAT(PRIVILEGE_TYPE) " "FROM INFORMATION_SCHEMA.SCHEMA_PRIVILEGES WHERE " "TABLE_SCHEMA='{0}' GROUP BY GRANTEE, IS_GRANTABLE") _GLOBAL_PRIV_QUERY = ("SELECT grantee, IS_GRANTABLE, " "GROUP_CONCAT(privilege_type) FROM " "information_schema.USER_PRIVILEGES GROUP BY GRANTEE," " IS_GRANTABLE") _PROCS_PRIV_QUERY = ("SELECT User, Host, Proc_priv FROM " "mysql.procs_priv WHERE db='{0}' AND " "routine_name='{1}'") _GLOBAL_ALL_PRIVS = set(['SELECT', 'INSERT', 'UPDATE', 'DELETE', 'CREATE', 'DROP', 'RELOAD', 'SHUTDOWN', 'PROCESS', 'FILE', 'REFERENCES', 'INDEX', 'ALTER', 'SHOW DATABASES', 'SUPER', 'CREATE TEMPORARY TABLES', 'LOCK TABLES', 'EXECUTE', 'REPLICATION SLAVE', 'REPLICATION CLIENT', 'CREATE VIEW', 'SHOW VIEW', 'CREATE ROUTINE', 'ALTER ROUTINE', 'CREATE USER', 'EVENT', 'TRIGGER', 'CREATE TABLESPACE']) _TABLE_ALL_PRIVS = set(['SELECT', 'INSERT', 'UPDATE', 'DELETE', 'CREATE', 'DROP', 'REFERENCES', 'INDEX', 'ALTER', 'CREATE VIEW', 'SHOW VIEW', 'TRIGGER']) _DB_ALL_PRIVS = set(['SELECT', 'INSERT', 'UPDATE', 'DELETE', 'CREATE', 'DROP', 'REFERENCES', 'INDEX', 'ALTER', 'CREATE TEMPORARY TABLES', 'LOCK TABLES', 'EXECUTE', 'CREATE VIEW', 'SHOW VIEW', 'CREATE ROUTINE', 'ALTER ROUTINE', 'EVENT', 'TRIGGER']) _ROUTINE_ALL_PRIVS = set(['EXECUTE', 'ALTER ROUTINE']) DATABASE_TYPE = 'DATABASE' TABLE_TYPE = 'TABLE' PROCEDURE_TYPE = 'PROCEDURE' ROUTINE_TYPE = 'ROUTINE' FUNCTION_TYPE = 'FUNCTION' GLOBAL_TYPE = 'GLOBAL' GLOBAL_LEVEL = 3 DATABASE_LEVEL = 2 OBJECT_LEVEL = 1 ALL_PRIVS_LOOKUP_DICT = {PROCEDURE_TYPE: _ROUTINE_ALL_PRIVS, ROUTINE_TYPE: _ROUTINE_ALL_PRIVS, FUNCTION_TYPE: _ROUTINE_ALL_PRIVS, TABLE_TYPE: _TABLE_ALL_PRIVS, DATABASE_TYPE: _DB_ALL_PRIVS, GLOBAL_TYPE: _GLOBAL_ALL_PRIVS} def get_table_privs(server, db_name, table_name): """ Get the list of grantees and their privileges for a specific table. server[in] Instance of Server class, where the query will be executed. db_name[in] Name of the database to which the table belongs. table_name[in] Name of the table to check. Returns list of tuples (grantee, set of privileges). """ tpl_lst = [] # Get sql_mode in server sql_mode = server.select_variable("SQL_MODE") # Remove backticks if necessary if is_quoted_with_backticks(db_name, sql_mode): db_name = remove_backtick_quoting(db_name, sql_mode) if is_quoted_with_backticks(table_name, sql_mode): table_name = remove_backtick_quoting(table_name, sql_mode) # Build query query = _TABLE_PRIV_QUERY.format(db_name, table_name) res = server.exec_query(query) for grantee, grant_option, grants in res: grants = set((grant.upper() for grant in grants.split(','))) # remove USAGE privilege since it does nothing.
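# (In MySQL, GRANT USAGE is a synonym for "no privileges", so keeping it
# would add no information to the set.)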
grants.discard('USAGE') if grants: if 'Y' in grant_option.upper(): grants.add('GRANT OPTION') tpl_lst.append((grantee, grants)) return tpl_lst def get_db_privs(server, db_name): """ Get the list of grantees and their privileges for a database. server[in] Instance of Server class, where the query will be executed. db_name[in] Name of the database to check. Returns list of tuples (grantee, set of privileges). """ tpl_lst = [] # Get sql_mode in server sql_mode = server.select_variable("SQL_MODE") # remove backticks if necessary if is_quoted_with_backticks(db_name, sql_mode): db_name = remove_backtick_quoting(db_name, sql_mode) # Build query query = _DB_PRIVS_QUERY.format(db_name) res = server.exec_query(query) for grantee, grant_option, grants in res: grants = set((grant.upper() for grant in grants.split(','))) # remove USAGE privilege since it does nothing. grants.discard('USAGE') if grants: if 'Y' in grant_option.upper(): grants.add('GRANT OPTION') tpl_lst.append((grantee, grants)) return tpl_lst def get_global_privs(server): """ Get the list of grantees and their list of global privileges. server[in] Instance of Server class, where the query will be executed. Returns list of tuples (grantee, set of privileges). """ tpl_lst = [] query = _GLOBAL_PRIV_QUERY res = server.exec_query(query) for grantee, grant_option, grants in res: grants = set((grant.upper() for grant in grants.split(','))) # remove USAGE privilege since it does nothing. grants.discard('USAGE') if grants: if 'Y' in grant_option.upper(): grants.add('GRANT OPTION') tpl_lst.append((grantee, grants)) return tpl_lst def get_routine_privs(server, db_name, routine_name): """ Get the list of grantees and their privileges for a routine. server[in] Instance of Server class, where the query will be executed. db_name[in] Name of the database to which the routine belongs. routine_name[in] Name of the routine to check. Returns list of tuples (grantee, set of privileges). """ tpl_lst = [] # Get sql_mode in server sql_mode = server.select_variable("SQL_MODE") # remove backticks if necessary if is_quoted_with_backticks(db_name, sql_mode): db_name = remove_backtick_quoting(db_name, sql_mode) if is_quoted_with_backticks(routine_name, sql_mode): routine_name = remove_backtick_quoting(routine_name, sql_mode) # Build query query = _PROCS_PRIV_QUERY.format(db_name, routine_name) res = server.exec_query(query) for user, host, grants in res: grants = set((grant.upper() for grant in grants.split(','))) # remove USAGE privilege since it does nothing. grants.discard('USAGE') if grants: tpl_lst.append(("'{0}'@'{1}'".format(user, host), grants)) return tpl_lst def simplify_grants(grant_set, obj_type): """Replaces set of privileges with ALL PRIVILEGES, if possible grant_set[in] set of privileges. obj_type[in] type of the object to which these privileges apply. Returns a set with the simplified version of grant_set. """ # Get set with all the privileges for the specified object type. all_privs = ALL_PRIVS_LOOKUP_DICT[obj_type] # remove USAGE privilege since it does nothing and is not in the # all-privileges set of any type grant_set.discard('USAGE') # Check if grant_set has grant option and remove it before checking # if the given set of privileges contains all the privileges for the # specified type grant_opt_set = set(['GRANT OPTION', 'GRANT']) has_grant_opt = bool(grant_opt_set.intersection(grant_set)) if has_grant_opt: # Remove grant option. grant_set = grant_set.difference(grant_opt_set) # Check if remaining privileges can be replaced with ALL PRIVILEGES.
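# For example (illustrative): at TABLE scope, a set equal to
# _TABLE_ALL_PRIVS collapses to set(['ALL PRIVILEGES']), with
# 'GRANT OPTION' re-added afterwards if it was held.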
if all_privs == grant_set: grant_set = set(["ALL PRIVILEGES"]) if has_grant_opt: # Insert the GRANT OPTION privilege again. grant_set.add("GRANT OPTION") return grant_set def filter_grants(grant_set, obj_type_str): """This method returns a new set with just the grants that are valid for a given object type. grant_set[in] Set of grants we want to 'filter' obj_type_str[in] String with the type of the object that we are working with, must be either 'ROUTINE', 'TABLE' or 'DATABASE'. Returns a new set with just the grants that apply. """ # Get set with all the privs for obj_type all_privs_set = ALL_PRIVS_LOOKUP_DICT[obj_type_str] # Besides having all the privs from the obj_type, it can also have # 'ALL', 'ALL PRIVILEGES' and 'GRANT OPTION' all_privs_set = all_privs_set.union(['ALL', 'ALL PRIVILEGES', 'GRANT OPTION']) # By intersecting the grants we have with the object type's valid set of # grants we will obtain just the set of valid grants. return grant_set.intersection(all_privs_set) def _build_privilege_dicts(server, obj_type_dict, inherit_level=GLOBAL_LEVEL): """Builds TABLE, ROUTINE and DB dictionaries with grantee privileges server[in] Server class instance obj_type_dict[in] dictionary with the list of objects to obtain the grantee and respective grant information, organized by object type inherit_level[in] Level of inheritance that should be taken into account. It must be one of GLOBAL_LEVEL, DATABASE_LEVEL or OBJECT_LEVEL This method builds and returns the 3 dictionaries with grantee information taking into account the grant hierarchy of MySQL, i.e. global grants apply to all objects and database grants apply to all the database objects (tables, procedures and functions). """ # Get the global grants: global_grantee_lst = get_global_privs(server) # Build the DATABASE level grants dict. # {db_name: {grantee: set(privileges)}} db_grantee_dict = defaultdict(lambda: defaultdict(set)) for db_name, _ in obj_type_dict[DATABASE_TYPE]: db_privs_lst = get_db_privs(server, db_name) for grantee, priv_set in db_privs_lst: db_grantee_dict[db_name][grantee] = priv_set if inherit_level >= GLOBAL_LEVEL: # If global inheritance level is turned on, global privileges # also apply to the database level. for grantee, priv_set in global_grantee_lst: db_grantee_dict[db_name][grantee].update( filter_grants(priv_set, DATABASE_TYPE)) # Build the TABLE level grants dict. # {db_name: {tbl_name: {grantee: set(privileges)}}} table_grantee_dict = defaultdict( lambda: defaultdict(lambda: defaultdict(set))) for db_name, tbl_name in obj_type_dict[TABLE_TYPE]: tbl_privs_lst = get_table_privs(server, db_name, tbl_name) for grantee, priv_set in tbl_privs_lst: table_grantee_dict[db_name][tbl_name][grantee] = priv_set # Existing db and global_grantee level privileges also apply to # the table level if inherit level is database level or higher if inherit_level >= DATABASE_LEVEL: # If we already have the privileges for the database where the # table is located, we can use that information. if db_grantee_dict[db_name]: for grantee, priv_set in db_grantee_dict[db_name].iteritems(): table_grantee_dict[db_name][tbl_name][grantee].update( filter_grants(priv_set, TABLE_TYPE)) else: # Get the grant information for the db where the table is # located and merge it together with the table grants.
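# (filter_grants() below keeps only the privileges that are
# meaningful at table scope before merging them in.)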
                db_privs_lst = get_db_privs(server, db_name)
                for grantee, priv_set in db_privs_lst:
                    table_grantee_dict[db_name][tbl_name][grantee].update(
                        filter_grants(priv_set, TABLE_TYPE))
        # Now do the same with global grants
        if inherit_level >= GLOBAL_LEVEL:
            for grantee, priv_set in global_grantee_lst:
                table_grantee_dict[db_name][tbl_name][grantee].update(
                    filter_grants(priv_set, TABLE_TYPE))

    # Build the ROUTINE Level grants dict.
    # {db_name: {proc_name: {user: set(privileges)}}}
    proc_grantee_dict = defaultdict(
        lambda: defaultdict(lambda: defaultdict(set)))
    for db_name, proc_name in obj_type_dict[ROUTINE_TYPE]:
        proc_privs_lst = get_routine_privs(server, db_name, proc_name)
        for grantee, priv_set in proc_privs_lst:
            proc_grantee_dict[db_name][proc_name][grantee] = priv_set
        # Existing db and global_grantee level privileges also apply to
        # the routine level if inherit level is database level or higher
        if inherit_level >= DATABASE_LEVEL:
            # If we already have the privileges for the database where the
            # routine is at, we can use that information.
            if db_grantee_dict[db_name]:
                for grantee, priv_set in db_grantee_dict[db_name].iteritems():
                    proc_grantee_dict[db_name][proc_name][grantee].update(
                        filter_grants(priv_set, ROUTINE_TYPE))
            else:
                # Get the grant information for the db the routine belongs
                # to and merge it together with global grants.
                db_privs_lst = get_db_privs(server, db_name)
                for grantee, priv_set in db_privs_lst:
                    proc_grantee_dict[db_name][proc_name][grantee].update(
                        filter_grants(priv_set, ROUTINE_TYPE))
        # Now do the same with global grants.
        if inherit_level >= GLOBAL_LEVEL:
            for grantee, priv_set in global_grantee_lst:
                proc_grantee_dict[db_name][proc_name][grantee].update(
                    filter_grants(priv_set, ROUTINE_TYPE))

    # TODO: Refactor the code below to remove code repetition.
    # Simplify sets of privileges for databases.
    for grantee_dict in db_grantee_dict.itervalues():
        for grantee, priv_set in grantee_dict.iteritems():
            grantee_dict[grantee] = simplify_grants(priv_set, DATABASE_TYPE)

    # Simplify sets of privileges for tables.
    for tbl_dict in table_grantee_dict.itervalues():
        for grantee_dict in tbl_dict.itervalues():
            for grantee, priv_set in grantee_dict.iteritems():
                grantee_dict[grantee] = simplify_grants(priv_set, TABLE_TYPE)

    # Simplify sets of privileges for routines.
    for proc_dict in proc_grantee_dict.itervalues():
        for grantee_dict in proc_dict.itervalues():
            for grantee, priv_set in grantee_dict.iteritems():
                grantee_dict[grantee] = simplify_grants(priv_set,
                                                        ROUTINE_TYPE)

    return db_grantee_dict, table_grantee_dict, proc_grantee_dict


def _has_all_privileges(query_priv_set, grantee_priv_set, obj_type):
    """Determines if a grantee has a certain set of privileges.

    query_priv_set[in]    set of privileges to be tested
    grantee_priv_set[in]  set of the privileges a grantee has over the
                          object
    obj_type[in]          string with the type of the object to be tested

    This method receives a set of privileges to test (query_priv_set), the
    set of privileges that a given grantee possesses over a certain object
    (grantee_priv_set) and the type of that object. It returns True if the
    set of privileges that the grantee has over the object is a superset
    of query_priv_set.
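
    Example (illustrative values only):
        _has_all_privileges(set(['SELECT', 'UPDATE']),
                            set(['SELECT', 'INSERT', 'UPDATE']), TABLE_TYPE)
        returns True because the grantee's privilege set is a superset of
        the queried set.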
""" # If the user has GRANT OPTION and and ALL PRIVILEGES, then we can # automatically return True if ("GRANT OPTION" in grantee_priv_set and ('ALL PRIVILEGES' in grantee_priv_set or 'ALL' in grantee_priv_set)): return True # Remove USAGE privilege because it is the same has having nothing query_priv_set.discard('USAGE') # Also if query_priv_set contains ALL or ALL PRIVILEGES we can simply # discard the rest of the privileges on the set except for GRANT OPTION if 'ALL' in query_priv_set or 'ALL PRIVILEGES' in query_priv_set: query_priv_set = set(['ALL PRIVILEGES']).union( query_priv_set & set(['GRANT OPTION']) ) else: # Remove privileges that do not apply to the type of object query_priv_set = query_priv_set.intersection( ALL_PRIVS_LOOKUP_DICT[obj_type].union(["GRANT OPTION"])) return query_priv_set.issubset(grantee_priv_set) def get_grantees(server, valid_obj_type_dict, req_privileges=None, inherit_level=GLOBAL_LEVEL): """Get grantees and respective grants for the specified objects. server[in] Server class instance valid_obj_type_dict Dict with list of valid object for server, sorted by object type. We assume that each object exists on the server req_privileges[in] Optional set of required privileges inherit_level[in] Level of inheritance that should be taken into account. It must be one of GLOBAL_LEVEL, DATABASE_LEVEL or OBJECT_LEVEL """ # Build the privilege dicts db_dict, table_dict, proc_dict = _build_privilege_dicts( server, valid_obj_type_dict, inherit_level) # Build final dict with grantee/grant information, taking into account # required privileges # grantee_dict = {obj_type: {obj_name:{grantee:set_privs}}} grantee_dict = defaultdict( lambda: defaultdict(lambda: defaultdict(set))) for obj_type in valid_obj_type_dict: for db_name, obj_name in valid_obj_type_dict[obj_type]: if obj_type == DATABASE_TYPE: for grantee, priv_set in db_dict[obj_name].iteritems(): if req_privileges is not None: if _has_all_privileges(req_privileges, priv_set, obj_type): grantee_dict[obj_type][obj_name][grantee] = \ priv_set else: # No need to check if it meets privileges grantee_dict[obj_type][obj_name][grantee] = \ priv_set else: # It is either TABLE or ROUTINE and both have equal # structure dicts if obj_type == TABLE_TYPE: type_dict = table_dict else: type_dict = proc_dict for grantee, priv_set in \ type_dict[db_name][obj_name].iteritems(): # Get the full qualified name for the object f_obj_name = "{0}.{1}".format(db_name, obj_name) if req_privileges is not None: if _has_all_privileges( req_privileges, priv_set, obj_type): grantee_dict[obj_type][f_obj_name][grantee] = \ priv_set else: grantee_dict[obj_type][f_obj_name][grantee] = \ priv_set return grantee_dict mysql-utilities-1.6.4/mysql/utilities/common/options.py0000755001577100752670000015162212747670311023121 0ustar pb2usercommon# # Copyright (c) 2010, 2016, Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This module contains the following methods design to support common option parsing among the multiple utilities. Methods: setup_common_options() Setup standard options for utilities """ import copy import optparse import os.path import re from datetime import datetime from optparse import Option as CustomOption, OptionValueError from ip_parser import find_password, parse_login_values_config_path from mysql.utilities import LICENSE_FRM, VERSION_FRM from mysql.utilities.exception import UtilError, FormatError from mysql.connector.conversion import MySQLConverter from mysql.utilities.common.messages import (PARSE_ERR_OBJ_NAME_FORMAT, PARSE_ERR_OPT_INVALID_DATE, PARSE_ERR_OPT_INVALID_DATE_TIME, PARSE_ERR_OPT_INVALID_NUM_DAYS, PARSE_ERR_OPT_INVALID_VALUE, EXTERNAL_SCRIPT_DOES_NOT_EXIST, INSUFFICIENT_FILE_PERMISSIONS) from mysql.utilities.common.my_print_defaults import (MyDefaultsReader, my_login_config_exists) from mysql.utilities.common.pattern_matching import parse_object_name from mysql.utilities.common.sql_transform import (is_quoted_with_backticks, remove_backtick_quoting) _PERMITTED_FORMATS = ["grid", "tab", "csv", "vertical"] _PERMITTED_DIFFS = ["unified", "context", "differ"] _PERMITTED_RPL_DUMP = ["master", "slave"] class UtilitiesParser(optparse.OptionParser): """Special subclass of parser that allows showing of version information when --help is used. """ def print_help(self, output=None): """Show version information before help """ print self.version optparse.OptionParser.print_help(self, output) def format_epilog(self, formatter): return self.epilog if self.epilog is not None else '' def prefix_check_choice(option, opt, value): """Check option values using case insensitive prefix compare This method checks to see if the value specified is a prefix of one of the choices. It converts the string provided by the user (value) to lower case to permit case insensitive comparison of the user input. If multiple choices are found for a prefix, an error is thrown. If the value being compared does not match the list of choices, an error is thrown. option[in] Option class instance opt[in] option name value[in] the value provided by the user Returns string - valid option chosen """ # String of choices choices = ", ".join([repr(choice) for choice in option.choices]) # Get matches for prefix given alts = [alt for alt in option.choices if alt.startswith(value.lower())] if len(alts) == 1: # only 1 match return alts[0] elif len(alts) > 1: # multiple matches raise OptionValueError( ("option %s: there are multiple prefixes " "matching: %r (choose from %s)") % (opt, value, choices)) # Doesn't match. Show user possible choices. raise OptionValueError("option %s: invalid choice: %r (choose from %s)" % (opt, value, choices)) def license_callback(self, opt, value, parser, *args, **kwargs): """Show license information and exit. """ print(LICENSE_FRM.format(program=parser.prog)) parser.exit() def path_callback(option, opt, value, parser): """Verify that the given path is an existing file. If it is then add it to the parser values. 
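
    (Illustrative: with --config-path=/etc/my.cnf, this callback verifies
    that /etc/my.cnf exists and is a file before storing it in
    parser.values.)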
option[in] option instance opt[in] option name value[in] given user value parser[in] parser instance """ if not os.path.exists(value): parser.error("the given path '{0}' in option {1} does not" " exist or can not be accessed".format(value, opt)) if not os.path.isfile(value): parser.error("the given path '{0}' in option {1} does not" " correspond to a file".format(value, opt)) setattr(parser.values, option.dest, value) def ssl_callback(option, opt, value, parser): """Verify that the given path is an existing file. If it is then add it to the parser values. option[in] option instance opt[in] option name value[in] given user value parser[in] parser instance """ if not (value == 0 or value == 1 or value == ''): parser.error("the given value '{0}' in option {1} is not" " valid, valid values are 0 or 1.".format(value, opt)) setattr(parser.values, option.dest, value) def add_config_path_option(parser): """Add the config_path option. parser[in] the parser instance """ # --config-path option: config_path parser.add_option("--config-path", action="callback", callback=path_callback, type="string", help="The path to a MySQL option file " "with the login options") def add_ssl_options(parser): """Add the ssl options. parser[in] the parser instance """ # --ssl options: ssl_ca, ssl_cert, ssl_key parser.add_option("--ssl-ca", action="callback", callback=path_callback, type="string", help="path to a file that contains " "a list of trusted SSL CAs.") parser.add_option("--ssl-cert", action="callback", callback=path_callback, type="string", help="name of the SSL certificate " "file to use for establishing a secure connection.") parser.add_option("--ssl-key", action="callback", callback=path_callback, type="string", help="name of the SSL key file to " "use for establishing a secure connection.") parser.add_option("--ssl", action="callback", callback=ssl_callback, type="int", help="specifies if the server " "connection requires use of SSL. If an encrypted " "connection cannot be established, the connection " "attempt fails. By default 0 (SSL not required).") class CaseInsensitiveChoicesOption(CustomOption): """Case insensitive choices option class This is an extension of the Option class. It replaces the check_choice method with the prefix_check_choice() method above to provide shortcut aware choice selection. It also ensures the choice compare is done with a case insensitve test. """ TYPE_CHECKER = copy.copy(CustomOption.TYPE_CHECKER) TYPE_CHECKER["choice"] = prefix_check_choice def __init__(self, *opts, **attrs): if 'choices' in attrs: attrs['choices'] = [attr.lower() for attr in attrs['choices']] CustomOption.__init__(self, *opts, **attrs) def setup_common_options(program_name, desc_str, usage_str, append=False, server=True, server_default="root@localhost:3306", extended_help=None, add_ssl=False): """Setup option parser and options common to all MySQL Utilities. This method creates an option parser and adds options for user login and connection options to a MySQL database system including user, password, host, socket, and port. program_name[in] The program name desc_str[in] The description of the utility usage_str[in] A brief usage example append[in] If True, allow --server to be specified multiple times (default = False) server[in] If True, add the --server option (default = True) server_default[in] Default value for option (default = "root@localhost:3306") extended_help[in] Extended help (by default: None). 
add_ssl[in] adds the --ssl-options, however these are added automatically if server is True, (default = False) Returns parser object """ program_name = program_name.replace(".py", "") parser = UtilitiesParser( version=VERSION_FRM.format(program=program_name), description=desc_str, usage=usage_str, add_help_option=False, option_class=CaseInsensitiveChoicesOption, epilog=extended_help, prog=program_name) parser.add_option("--help", action="help", help="display a help message " "and exit") parser.add_option("--license", action='callback', callback=license_callback, help="display program's license and exit") if server: # Connection information for the first server if append: parser.add_option("--server", action="append", dest="server", help="connection information for the server in " "the form: [:]@[:]" "[:] or [:]" "[:] or [<[group]>].") else: parser.add_option("--server", action="store", dest="server", type="string", default=server_default, help="connection information for the server in " "the form: [:]@[:]" "[:] or [:]" "[:] or [<[group]>].") if server or add_ssl: add_ssl_options(parser) return parser def add_character_set_option(parser): """Add the --character-set option. parser[in] the parser instance """ parser.add_option("--character-set", action="store", dest="charset", type="string", default=None, help="sets the client character set. The default is " "retrieved from the server variable " "'character_set_client'.") _SKIP_VALUES = ( "tables", "views", "triggers", "procedures", "functions", "events", "grants", "data", "create_db" ) def add_skip_options(parser): """Add the common --skip options for database utilties. parser[in] the parser instance """ parser.add_option("--skip", action="store", dest="skip_objects", default=None, help="specify objects to skip in the " "operation in the form of a comma-separated list (no " "spaces). Valid values = tables, views, triggers, proc" "edures, functions, events, grants, data, create_db") def check_skip_options(skip_list): """Check skip options for validity skip_list[in] List of items from parser option. Returns new skip list with items converted to upper case. """ new_skip_list = [] if skip_list is not None: items = skip_list.split(",") for item in items: obj = item.lower() if obj in _SKIP_VALUES: new_skip_list.append(obj) else: raise UtilError("The value %s is not a valid value for " "--skip." % item) return new_skip_list def add_format_option(parser, help_text, default_val, sql=False, extra_formats=None): """Add the format option. parser[in] the parser instance help_text[in] help text default_val[in] default value sql[in] if True, add 'sql' format default=False extra_formats[in] list with extra formats Returns corrected format value """ formats = _PERMITTED_FORMATS if sql: formats.append('sql') if extra_formats: formats.extend(extra_formats) parser.add_option("-f", "--format", action="store", dest="format", default=default_val, help=help_text, type="choice", choices=formats) def add_format_option_with_extras(parser, help_text, default_val, extra_formats): """Add the format option. 
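
    Example (a sketch, assuming an optparse parser instance):
        add_format_option_with_extras(parser, "display format", "grid",
                                      ["raw"])
        # registers -f/--format with choices grid, tab, csv, vertical, raw.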
parser[in] the parser instance help_text[in] help text default_val[in] default value extra_formats[in] list of additional formats to support Returns corrected format value """ formats = _PERMITTED_FORMATS formats.extend(extra_formats) parser.add_option("-f", "--format", action="store", dest="format", default=default_val, help=help_text, type="choice", choices=formats) def add_no_headers_option(parser, restricted_formats=None, help_msg=None): """Add the --no-headers option. parser[in] The parser instance. restricted_formats[in] List of formats supported by this option (only applies to them). help_msg[in] Alternative help message to use, otherwise a default one is used. """ # Create the help message according to any format restriction. if restricted_formats: plural = "s" if len(restricted_formats) > 1 else "" formats_msg = (" (only applies to format{0}: " "{1})").format(plural, ", ".join(restricted_formats)) else: formats_msg = "" if help_msg: help_msg = "{0}{1}.".format(help_msg, formats_msg) else: help_msg = "do not show column headers{0}.".format(formats_msg) # Add the option. parser.add_option("-h", "--no-headers", action="store_true", dest="no_headers", default=False, help=help_msg) def add_verbosity(parser, quiet=True): """Add the verbosity and quiet options. parser[in] the parser instance quiet[in] if True, include the --quiet option (default is True) """ parser.add_option("-v", "--verbose", action="count", dest="verbosity", help="control how much information is displayed. " "e.g., -v = verbose, -vv = more verbose, -vvv = debug") if quiet: parser.add_option("-q", "--quiet", action="store_true", dest="quiet", help="turn off all messages for quiet execution.", default=False) def check_verbosity(options): """Check to see if both verbosity and quiet are being used. """ # Warn if quiet and verbosity are both specified if options.quiet is not None and options.quiet and \ options.verbosity is not None and options.verbosity > 0: print "WARNING: --verbosity is ignored when --quiet is specified." options.verbosity = None def add_changes_for(parser, default="server1"): """Add the changes_for option. parser[in] the parser instance """ parser.add_option("--changes-for", action="store", dest="changes_for", type="choice", default=default, help="specify the " "server to show transformations to match the other " "server. For example, to see the transformation for " "transforming server1 to match server2, use " "--changes-for=server1. Valid values are 'server1' or " "'server2'. The default is 'server1'.", choices=['server1', 'server2']) def add_reverse(parser): """Add the show-reverse option. parser[in] the parser instance """ parser.add_option("--show-reverse", action="store_true", dest="reverse", default=False, help="produce a transformation report " "containing the SQL statements to transform the object " "definitions specified in reverse. For example if " "--changes-for is set to server1, also generate the " "transformation for server2. Note: the reverse changes " "are annotated and marked as comments.") def add_difftype(parser, allow_sql=False, default="unified"): """Add the difftype option. 
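
    Example (a sketch, assuming an optparse parser instance):
        add_difftype(parser, allow_sql=True, default="differ")
        # registers -d/--difftype with choices unified, context, differ
        # and sql.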
    parser[in]    the parser instance
    allow_sql[in] if True, allow sql as a valid option (default is False)
    default[in]   the default option (default is unified)
    """
    choice_list = ['unified', 'context', 'differ']
    if allow_sql:
        choice_list.append('sql')
    parser.add_option("-d", "--difftype", action="store", dest="difftype",
                      type="choice", default=default, choices=choice_list,
                      help="display differences in one of the following "
                           "formats: [%s] (default: %s)."
                           % ('|'.join(choice_list), default))


def add_engines(parser):
    """Add the engine and default-storage-engine options.

    parser[in]    the parser instance
    """
    # Add engine
    parser.add_option("--new-storage-engine", action="store",
                      dest="new_engine", default=None,
                      help="change all tables to use this storage engine "
                           "if storage engine exists on the destination.")
    # Add default storage engine
    parser.add_option("--default-storage-engine", action="store",
                      dest="def_engine", default=None,
                      help="change all tables to use this storage engine "
                           "if the original storage engine does not exist "
                           "on the destination.")


def check_engine_options(server, new_engine, def_engine, fail=False,
                         quiet=False):
    """Check to see if storage engines specified in options exist.

    This method will check to see if the storage engine in new_engine
    exists on the server. If new_engine is None, the check is skipped. If
    the storage engine does not exist and fail is True, an exception is
    thrown; otherwise, if quiet is False, a warning message is printed.

    Similarly, def_engine will be checked and if not present and fail is
    True, an exception is thrown; otherwise, if quiet is False a warning
    is printed.

    server[in]     server instance to be checked
    new_engine[in] new storage engine
    def_engine[in] default storage engine
    fail[in]       If True, issue exception on failure else print warning
                   default = False
    quiet[in]      If True, suppress warning messages (not exceptions)
                   default = False
    """
    def _find_engine(server, target, message, fail, default):
        """Find engine
        """
        if target is not None:
            found = server.has_storage_engine(target)
            if not found and fail:
                raise UtilError(message)
            elif not found and not quiet:
                print message

    server.get_storage_engines()
    message = "WARNING: %s storage engine %s is not supported on the server."
    _find_engine(server, new_engine, message % ("New", new_engine),
                 fail, quiet)
    _find_engine(server, def_engine, message % ("Default", def_engine),
                 fail, quiet)


def add_all(parser, objects):
    """Add the --all option.

    parser[in]    the parser instance
    objects[in]   name of the objects for which all includes
    """
    parser.add_option("-a", "--all", action="store_true", dest="all",
                      default=False, help="include all %s" % objects)


def check_all(parser, options, args, objects):
    """Check to see if both all and specific arguments are used.

    This method will throw an exception if there are arguments listed and
    the all option has been turned on.

    parser[in]    the parser instance
    options[in]   command options
    args[in]      arguments list
    objects[in]   name of the objects for which all includes
    """
    if options.all and len(args) > 0:
        parser.error("You cannot use the --all option with a list of "
                     "%s." % objects)


def add_locking(parser):
    """Add the --locking option.
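
    Example (a sketch, assuming an optparse parser instance):
        add_locking(parser)
        # registers --locking with choices no-locks, lock-all and snapshot
        # (default: snapshot).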
    parser[in]    the parser instance
    """
    parser.add_option("--locking", action="store", dest="locking",
                      type="choice", default="snapshot",
                      choices=['no-locks', 'lock-all', 'snapshot'],
                      help="choose the lock type for the operation: "
                           "no-locks = do not use any table locks, "
                           "lock-all = use table locks but no transaction "
                           "and no consistent read, snapshot (default): "
                           "consistent read using a single transaction.")


def add_exclude(parser, object_type="objects", example1="db1.t1",
                example2="db1.t% or db%.%"):
    """Add the --exclude option.

    parser[in]    the parser instance
    example1[in]  example of a specific object name (e.g. db1.t1)
    example2[in]  example of a LIKE pattern (e.g. db1.t% or db%.%)
    """
    parser.add_option("-x", "--exclude", action="append", dest="exclude",
                      type="string", default=None,
                      help="exclude one or more {0} from the operation "
                           "using either a specific name (e.g. {1}), a "
                           "LIKE pattern (e.g. {2}) or a REGEXP search "
                           "pattern. To use a REGEXP search pattern for "
                           "all exclusions, you must also specify the "
                           "--regexp option. Repeat the --exclude option "
                           "for multiple exclusions."
                           "".format(object_type, example1, example2))


def check_exclude_pattern(exclude_list, use_regexp):
    """Check the --exclude patterns to determine if they contain special
    symbols that may be regexp symbols while the --regexp option is not
    specified. Prints a warning if this is true.

    exclude_list[in]  list of --exclude patterns to check
    use_regexp[in]    the option to use regexp
    """
    # ignore null lists
    if not exclude_list:
        return True
    for row in exclude_list:
        # replace _ and % and see if still not alnum()
        test = row.replace('_', '').replace('%', '').replace('`', '')
        test = test.replace("'", "").replace('.', '').replace('"', '')
        if len(test) > 0 and not test.isalnum() and not use_regexp:
            print "# WARNING: One or more of your --exclude patterns " \
                  "contains symbols that could be regexp patterns. You may " \
                  "need to include --regexp to ensure your exclude pattern " \
                  "is evaluated as REGEXP and not a SQL LIKE expression."
            return False
    return True


def add_regexp(parser):
    """Add the --regexp option.

    parser[in]    the parser instance
    """
    parser.add_option("-G", "--basic-regexp", "--regexp", dest="use_regexp",
                      action="store_true", default=False,
                      help="use 'REGEXP' operator to match pattern. "
                           "Default is to use 'LIKE'.")


def add_rpl_user(parser):
    """Add the --rpl-user option.

    parser[in]    the parser instance
    """
    parser.add_option("--rpl-user", action="store", dest="rpl_user",
                      type="string",
                      help="the user and password for the replication "
                           "user requirement, in the form: "
                           "<user>[:<password>] or <login-path>. "
                           "E.g. rpl:passwd")


def add_rpl_mode(parser, do_both=True, add_file=True):
    """Add the --rpl and --rpl-file options.

    parser[in]    the parser instance
    do_both[in]   if True, include the "both" value for the --rpl option
                  Default = True
    add_file[in]  if True, add the --rpl-file option
                  Default = True
    """
    rpl_mode_both = ""
    rpl_mode_options = _PERMITTED_RPL_DUMP
    if do_both:
        rpl_mode_options.append("both")
        rpl_mode_both = (", and 'both' = include 'master' and 'slave' "
                         "options where applicable")
    parser.add_option("--rpl", "--replication", dest="rpl_mode",
                      action="store",
                      help="include replication information. "
                           "Choices: 'master' = include the CHANGE MASTER "
                           "command using the source server as the master, "
                           "'slave' = include the CHANGE MASTER command for "
                           "the source server's master (only works if the "
                           "source server is a slave){0}."
                           "".format(rpl_mode_both),
                      choices=rpl_mode_options)
    if add_file:
        parser.add_option("--rpl-file", "--replication-file",
                          dest="rpl_file", action="store",
                          help="path and file name to place the "
                               "replication information generated. Valid "
                               "only if the --rpl option is specified.")


def check_rpl_options(parser, options):
    """Check replication dump options for validity

    This method ensures the optional --rpl-* options are valid only when
    --rpl is specified.

    parser[in]    the parser instance
    options[in]   command options
    """
    if options.rpl_mode is None:
        errors = []
        if parser.has_option("--comment-rpl") and options.rpl_file is not None:
            errors.append("--rpl-file")
        if options.rpl_user is not None:
            errors.append("--rpl-user")

        # It's Ok if the options do not include --comment-rpl
        if parser.has_option("--comment-rpl") and options.comment_rpl:
            errors.append("--comment-rpl")
        if len(errors) > 1:
            num_opt_str = "s"
        else:
            num_opt_str = ""
        if len(errors) > 0:
            parser.error("The %s option%s must be used with the --rpl "
                         "option." % (", ".join(errors), num_opt_str))


def add_discover_slaves_option(parser):
    """Add the --discover-slaves-login option.

    This method adds the --discover-slaves-login option that is used to
    discover the list of slaves associated to the specified login (user
    and password).

    parser[in]    the parser instance.
    """
    parser.add_option("--discover-slaves-login", action="store",
                      dest="discover", default=None, type="string",
                      help="at startup, query master for all registered "
                           "slaves and use the user name and password "
                           "specified to connect. Supply the user and "
                           "password in the form <user>[:<passwd>] or "
                           "<login-path>. For example, "
                           "--discover-slaves-login=joe:secret will use "
                           "'joe' as the user and 'secret' as the password "
                           "for each discovered slave.")


def add_log_option(parser):
    """Add the --log option.

    This method adds the --log option that is used to specify the target
    file for logging messages from the utility.

    parser[in]    the parser instance.
    """
    parser.add_option("--log", action="store", dest="log_file", default=None,
                      type="string",
                      help="specify a log file to use for logging messages")


def add_master_option(parser):
    """Add the --master option.

    This method adds the --master option that is used to specify the
    connection string for the server with the master role.

    parser[in]    the parser instance.
    """
    parser.add_option("--master", action="store", dest="master",
                      default=None, type="string",
                      help="connection information for master server in "
                           "the form: <user>[:<passwd>]@<host>[:<port>]"
                           "[:<socket>] or <login-path>[:<port>][:<socket>]"
                           " or <config-path>[<[group]>].")


def add_slaves_option(parser):
    """Add the --slaves option.

    This method adds the --slaves option that is used to specify a list of
    slaves, more precisely their connection strings (separated by comma).

    parser[in]    the parser instance.
    """
    parser.add_option("--slaves", action="store", dest="slaves",
                      type="string", default=None,
                      help="connection information for slave servers in "
                           "the form: <user>[:<passwd>]@<host>[:<port>]"
                           "[:<socket>] or <login-path>[:<port>][:<socket>]"
                           " or <config-path>[<[group]>]. "
                           "List multiple slaves in comma-separated list.")


def add_failover_options(parser):
    """Add the common failover options.

    This adds the following options:

      --candidates
      --discover-slaves-login
      --exec-after
      --exec-before
      --log
      --log-age
      --master
      --max-position
      --ping
      --seconds-behind
      --slaves
      --timeout
      --script-threshold

    parser[in]    the parser instance
    """
    parser.add_option("--candidates", action="store", dest="candidates",
                      type="string", default=None,
                      help="connection information for candidate slave "
                           "servers for failover in the form: "
                           "<user>[:<passwd>]@<host>[:<port>][:<socket>] "
                           "or <login-path>[:<port>][:<socket>] or "
                           "<config-path>[<[group]>]. Valid only with "
                           "failover command. List multiple slaves in "
                           "comma-separated list.")
    add_discover_slaves_option(parser)
    parser.add_option("--exec-after", action="store", dest="exec_after",
                      default=None, type="string",
                      help="name of script to execute after failover or "
                           "switchover")
    parser.add_option("--exec-before", action="store", dest="exec_before",
                      default=None, type="string",
                      help="name of script to execute before failover or "
                           "switchover")
    add_log_option(parser)
    parser.add_option("--log-age", action="store", dest="log_age",
                      default=7, type="int",
                      help="specify maximum age of log entries in days. "
                           "Entries older than this will be purged on "
                           "startup. Default = 7 days.")
    add_master_option(parser)
    parser.add_option("--max-position", action="store", dest="max_position",
                      default=0, type="int",
                      help="used to detect slave delay. The maximum "
                           "difference between the master's log position "
                           "and the slave's reported read position of the "
                           "master. A value greater than this means the "
                           "slave is too far behind the master. "
                           "Default is 0.")
    parser.add_option("--ping", action="store", dest="ping", default=None,
                      help="Number of ping attempts for detecting downed "
                           "server.")
    parser.add_option("--seconds-behind", action="store", dest="max_delay",
                      default=0, type="int",
                      help="used to detect slave delay. The maximum number "
                           "of seconds behind the master permitted before "
                           "slave is considered behind the master. "
                           "Default is 0.")
    add_slaves_option(parser)
    parser.add_option("--timeout", action="store", dest="timeout",
                      default=300,
                      help="maximum timeout in seconds to wait for each "
                           "replication command to complete. For example, "
                           "timeout for slave waiting to catch up to "
                           "master. Default = 300.")
    parser.add_option("--script-threshold", action="store", default=None,
                      dest="script_threshold",
                      help="Value for external scripts to trigger aborting "
                           "the operation if result is greater than or "
                           "equal to the threshold. Default = None (no "
                           "threshold checking).")


def check_server_lists(parser, master, slaves):
    """Check to see if the master is listed in the slaves list.

    Returns bool - True = master not in slaves; a parser error is issued
    if the master appears in the slaves list.
    """
    if slaves:
        for slave in slaves.split(','):
            if master == slave:
                parser.error("You cannot list the master as a slave.")

    return True


def obj2sql(obj):
    """Convert a Python object to an SQL object.

    This function converts Python objects to SQL values using the
    conversion functions in the database connector package."""
    return MySQLConverter().quote(obj)


def parse_user_password(userpass_values, my_defaults_reader=None,
                        options=None):
    """ This function parses a string with the user/password credentials.

    This function parses the login string and determines the format used,
    i.e. user[:password], config-path or login-path. If the ':' (colon) is
    not in the login string, then it can refer to a config-path, a
    login-path or to a username (without a password). In this case, it is
    first assumed that the specified value is a config-path and an attempt
    is made to retrieve the user and password from the configuration file.
    Second, it is assumed to be a login-path and the function attempts to
    retrieve the associated username and password, in a quiet way (i.e.,
    without raising exceptions). If it fails to retrieve the login-path
    data, then the value is assumed to be a username.

    userpass_values[in]     String indicating the user/password
                            credentials. It must be in the form:
                            user[:password] or login-path.
    my_defaults_reader[in]  Instance of MyDefaultsReader to read the
                            information of the login-path from
                            configuration files. By default, the value is
                            None.
options[in] Dictionary of options (e.g. basedir), from the used utility. By default, it set with an empty dictionary. Note: also supports options values from optparse. Returns a tuple with the username and password. """ if options is None: options = {} # Split on the first ':' to determine if a login-path is used. login_values = userpass_values.split(':', 1) if len(login_values) == 1: # Format is config-path, login-path or user (without a password): # First check if the value is a config-path # The following method call also initializes the user and passwd with # default values in case the login_values are not from a config-path user, passwd = parse_login_values_config_path(login_values[0], quietly=True) # Second assume it's a login-path and quietly try to retrieve the user # and password, in case of success overwrite the values previously set # and in case of failure return these ones instead. # Check if the login configuration file (.mylogin.cnf) exists if login_values[0] and not my_login_config_exists(): return user, passwd if not my_defaults_reader: # Attempt to create the MyDefaultsReader try: my_defaults_reader = MyDefaultsReader(options) except UtilError: # Raise an UtilError when my_print_defaults tool is not found. return user, passwd elif not my_defaults_reader.tool_path: # Try to find the my_print_defaults tool try: my_defaults_reader.search_my_print_defaults_tool() except UtilError: # Raise an UtilError when my_print_defaults tool is not found. return user, passwd # Check if the my_print_default tool is able to read a login-path from # the mylogin configuration file if not my_defaults_reader.check_login_path_support(): return user, passwd # Read and parse the login-path data (i.e., user and password) try: loginpath_data = my_defaults_reader.get_group_data(login_values[0]) if loginpath_data: user = loginpath_data.get('user', None) passwd = loginpath_data.get('password', None) return user, passwd else: return user, passwd except UtilError: # Raise an UtilError if unable to get the login-path group data return user, passwd elif len(login_values) == 2: # Format is user:password; return a tuple with the user and password return login_values[0], login_values[1] else: # Invalid user credentials format raise FormatError("Unable to parse the specified user credentials " "(accepted formats: [: or " "): %s" % userpass_values) def add_basedir_option(parser): """ Add the --basedir option. """ parser.add_option("--basedir", action="store", dest="basedir", default=None, type="string", help="the base directory for the server") def check_dir_option(parser, opt_value, opt_name, check_access=False, read_only=False): """ Check if the specified directory option is valid. Check if the value specified for the option is a valid directory, and if the user has appropriate access privileges. An appropriate parser error is issued if the specified directory is invalid. parser[in] Instance of the option parser (optparse). opt_value[in] Value specified for the option. opt_name[in] Option name (e.g., --basedir). check_access[in] Flag specifying if the access privileges need to be checked. By default, False (no access check). read_only[in] Flag indicating if the access required is only for read or read/write. By default, False (read/write access). Note: only used if check_access=True. Return the absolute path for the specified directory or None if an empty value is specified. """ # Check existence of specified directory. 
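    # For example (illustrative values), check_dir_option(parser, "~/data",
    # "--basedir", check_access=True) returns the expanded absolute path
    # (e.g. "/home/user/data") when it names a readable/writable directory.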
if opt_value: full_path = get_absolute_path(opt_value) if not os.path.isdir(full_path): parser.error("The specified path for {0} option is not a " "directory: {1}".format(opt_name, opt_value)) if check_access: mode = os.R_OK if read_only else os.R_OK | os.W_OK if not os.access(full_path, mode): parser.error("You do not have enough privileges to access the " "folder specified by {0}.".format(opt_name)) return full_path return None def check_script_option(parser, opt_value, check_executable=True): """ Check if the specified script option is valid. Check if the script specified for the option exists, and if the user has appropriate access privileges to it. An appropriate parser error is issued if the specified directory does not exist or is not executable. parser[in] Instance of the option parser (optparse). opt_value[in] Value specified for the option. check_executable[in] Flag specifying if the executable privileges need to be checked. By default, True(needs to be executable). Return the absolute path for the specified script or None if an empty value is specified. """ if opt_value: abs_path = os.path.abspath(opt_value) if not os.path.isfile(abs_path): parser.error(EXTERNAL_SCRIPT_DOES_NOT_EXIST.format( path=opt_value)) if check_executable and not os.access(abs_path, os.X_OK): parser.error(INSUFFICIENT_FILE_PERMISSIONS.format( path=opt_value, permissions='execute')) return opt_value else: return None def get_absolute_path(path): """ Returns the absolute path. """ return os.path.abspath(os.path.expanduser(os.path.normpath(path))) def db_objects_list_to_dictionary(parser, obj_list, option_desc, db_over_tables=True, sql_mode=''): """Process database object list and convert to a dictionary. Check the qualified name format of the given database objects and convert the given list of object to a dictionary organized by database names and sets of specific objects. Note: It is assumed that the given object list is obtained from the arguments or an option returned by the parser. parser[in] Instance of the used option/arguments parser obj_list[in] List of objects to process. option_desc[in] Short description of the option for the object list (e.g., "the --exclude option", "the database/table arguments") to refer appropriately in any parsing error. db_over_tables[in] If True specifying a db alone overrides all occurrences of table objects from that db (e.g. if True and we have both db and db.table1, db.table1 is ignored). returns a dictionary with the objects grouped by database (without duplicates). None value associated to a database entry means that all objects are to be considered. E.g. {'db_name1': set(['table1','table2']), 'db_name2': None}. """ db_objs_dict = {} for obj_name in obj_list: m_objs = parse_object_name(obj_name, sql_mode) if m_objs[0] is None: parser.error(PARSE_ERR_OBJ_NAME_FORMAT.format( obj_name=obj_name, option=option_desc )) else: db_name, obj_name = m_objs # Remove backtick quotes. db_name = remove_backtick_quoting(db_name, sql_mode) \ if is_quoted_with_backticks(db_name, sql_mode) else db_name obj_name = remove_backtick_quoting(obj_name, sql_mode) \ if obj_name and is_quoted_with_backticks(obj_name, sql_mode) \ else obj_name # Add database object to result dictionary. 
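            # For example (illustrative input), the list
            # ['db1', 'db1.t1', 'db2.t2'] with db_over_tables=True yields
            # {'db1': None, 'db2': set(['t2'])}.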
if not obj_name: # If only the database is specified and db_over_tables is True, # then add entry with db name and value None (to include all # objects) even if a previous specific object was already # added, else if db_over_tables is False, add None value to the # list, so that we know db was specified without any # table/routine. if db_name in db_objs_dict: if db_objs_dict[db_name] and not db_over_tables: db_objs_dict[db_name].add(None) else: db_objs_dict[db_name] = None else: if db_over_tables: db_objs_dict[db_name] = None else: db_objs_dict[db_name] = set([None]) else: # If a specific object object is given add it to the set # associated to the database, except if the database entry # is None (meaning that all objects are included). if db_name in db_objs_dict: if db_objs_dict[db_name]: db_objs_dict[db_name].add(obj_name) else: db_objs_dict[db_name] = set([obj_name]) return db_objs_dict def get_ssl_dict(parser_options=None): """Returns a dictionary with the SSL certificates parser_options[in] options instance from the used option/arguments parser Returns a dictionary with the SSL certificates, each certificate name as the key with underscore instead of dash. If no certificate has been given by the user in arguments, returns an empty dictionary. Note: parser_options is a Values instance, that does not have method get as a dictionary instance. """ conn_options = {} if parser_options is not None: certs_paths = {} if 'ssl_ca' in dir(parser_options): certs_paths['ssl_ca'] = parser_options.ssl_ca if 'ssl_cert' in dir(parser_options): certs_paths['ssl_cert'] = parser_options.ssl_cert if 'ssl_key' in dir(parser_options): certs_paths['ssl_key'] = parser_options.ssl_key if 'ssl' in dir(parser_options): certs_paths['ssl'] = parser_options.ssl conn_options.update(certs_paths) return conn_options def get_value_intervals_list(parser, option_value, option_name, value_name): """Get and check the list of values for the given option. Convert the string value for the given option to the corresponding list of integer values and tuple of integers (for intervals). For example, converts the option_value '3,5-8,11' to the list [3, (5,8), 11]. A parser error is issued if the used values or format are invalid. parser[in] Instance of the used option/arguments parser. option_value[in] Value specified for the option (e.g., '3,5-8,11'). option_name[in] Name of the option (e.g., '--status'). value_name[in] Name describing each option value (e.g., 'status'). Returns a list of integers and tuple of integers (for intervals) representing the given option value string. """ # Filter empty values and convert all to integers (values and intervals). values = option_value.split(",") values = [value for value in values if value] if not len(values) > 0: parser.error(PARSE_ERR_OPT_INVALID_VALUE.format(option=option_name, value=option_value)) res_list = [] for value in values: interval = value.split('-') if len(interval) == 2: # Convert lower and higher value of the interval. try: lv = int(interval[0]) except ValueError: parser.error("Invalid {0} value '{1}' (must be a " "non-negative integer) for interval " "'{2}'.".format(value_name, interval[0], value)) try: hv = int(interval[1]) except ValueError: parser.error("Invalid {0} value '{1}' (must be a " "non-negative integer) for interval " "'{2}'.".format(value_name, interval[1], value)) # Add interval (tuple) to the list. res_list.append((lv, hv)) elif len(interval) == 1: # Add single value to the status list. 
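            # (Illustrative: overall, option_value '3,5-8,11' yields
            # [3, (5, 8), 11].)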
            try:
                res_list.append(int(value))
            except ValueError:
                parser.error("Invalid {0} value '{1}' (must be a "
                             "non-negative integer).".format(value_name,
                                                             value))
        else:
            # Invalid format.
            parser.error("Invalid format for {0} interval (a single "
                         "dash must be used): '{1}'.".format(value_name,
                                                             value))
    return res_list


def check_date_time(parser, date_value, date_type, allow_days=False):
    """Check the date/time value for the given option.

    Check if the date/time value for the option is valid. The supported
    formats are 'yyyy-mm-ddThh:mm:ss' and 'yyyy-mm-dd'. If the allow_days
    flag is ON, then an integer value representing a number of days is
    also accepted. A parser error is issued if the date/time value is
    invalid.

    parser[in]      Instance of the used option/arguments parser.
    date_value[in]  Date/time value specified for the option.
    date_type[in]   Name describing the type of date being checked
                    (e.g., start, end, modified).
    allow_days[in]  Flag indicating if the specified value can also be an
                    integer representing the number of days (> 0).

    Returns the date in the format 'yyyy-mm-ddThh:mm:ss' or an integer
    representing the number of days.
    """
    if allow_days:
        # Check if it is a valid number of days.
        try:
            days = int(date_value)
        except ValueError:
            # Not a valid integer (i.e., number of days).
            days = None
        if days:
            if days < 1:
                parser.error(PARSE_ERR_OPT_INVALID_NUM_DAYS.format(
                    date_type, date_value))
            return days
    # Check if it is a valid date/time format.
    _, _, time = date_value.partition("T")
    if time:
        try:
            dt_date = datetime.strptime(date_value, '%Y-%m-%dT%H:%M:%S')
        except ValueError:
            parser.error(PARSE_ERR_OPT_INVALID_DATE_TIME.format(date_type,
                                                                date_value))
    else:
        try:
            dt_date = datetime.strptime(date_value, '%Y-%m-%d')
        except ValueError:
            parser.error(PARSE_ERR_OPT_INVALID_DATE.format(date_type,
                                                           date_value))
    return dt_date.strftime('%Y-%m-%dT%H:%M:%S')


def check_gtid_set_format(parser, gtid_set):
    """Check the format of the GTID set given for the option.

    Perform some basic checks to verify the syntax of the specified string
    for the GTID set value. A parse error is issued if the format is
    incorrect.

    parser[in]    Instance of the used option/arguments parser.
    gtid_set[in]  GTID set value specified for the option.
    """
    # UUID format: hhhhhhhh-hhhh-hhhh-hhhh-hhhhhhhhhhhh
    re_uuid = re.compile(
        r"(?:[a-f]|\d){8}(?:-(?:[a-f]|\d){4}){3}-(?:[a-f]|\d){12}",
        re.IGNORECASE)
    # interval format: n[-n]
    re_interval = re.compile(r"(?:\d+)(?:-\d+)?")
    uuid_sets = gtid_set.split(',')
    for uuid_set in uuid_sets:
        uuid_set_elements = uuid_set.split(':')
        if len(uuid_set_elements) < 2:
            parser.error("Invalid GTID set '{0}' for option --gtid-set, "
                         "missing UUID or interval. Valid format: "
                         "uuid:interval[:interval].".format(uuid_set))
        # Check server UUID format.
        if not re_uuid.match(uuid_set_elements[0]):
            parser.error("Invalid UUID '{0}' for option --gtid-set. Valid "
                         "format: hhhhhhhh-hhhh-hhhh-hhhh-hhhhhhhhhhhh."
                         "".format(uuid_set_elements[0]))
        # Check intervals.
        for interval in uuid_set_elements[1:]:
            if not re_interval.match(interval):
                parser.error("Invalid interval '{0}' for option "
                             "--gtid-set. Valid format: n[-n]."
                             "".format(interval))
            try:
                start_val, end_val = interval.split('-')
                if int(start_val) >= int(end_val):
                    parser.error(
                        "Invalid interval '{0}' for option --gtid-set. "
                        "Start value must be lower than the end value."
                        "".format(interval))
            except ValueError:
                # Error raised for intervals with a single value.
                pass  # Ignore, no need to compare start and end value.
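

# Example usage of check_gtid_set_format() (a sketch; the UUID below is
# illustrative):
#
#   check_gtid_set_format(parser,
#                         "3e11fa47-71ca-11e1-9e33-c80aa9429562:1-5:11")
#
# A malformed value such as "3e11fa47:1-5" makes the parser issue an error
# describing the expected uuid:interval[:interval] format.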
def check_password_security(options, args, prefix=""): """Check command line for passwords and report a warning. This method checks all options for passwords in the form ':%@'. If this pattern is found, the method with issue a warning to stdout and return True, else it returns False. Note: this allows us to make it possible to abort if command-line passwords are found (not the default...yet). options[in] list of options args[in] list of arguments prefix[in] (optional) allows preface statement with # or something for making the message a comment in-stream Returns - bool : False = no passwords, True = password found and msg shown """ result = False for value in options.__dict__.values(): if type(value) == list: for item in value: if find_password(item): result = True else: if find_password(value): result = True for arg in args: if find_password(arg): result = True if result: print("{0}WARNING: Using a password on the command line interface" " can be insecure.".format(prefix)) return result mysql-utilities-1.6.4/mysql/utilities/common/my_print_defaults.py0000644001577100752670000002775112747670311025160 0ustar pb2usercommon# # Copyright (c) 2013, 2015, Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This module provides features to read MySQL configuration files, wrapping the tool my_print_defaults. """ import optparse import os.path import re import subprocess import tempfile from mysql.utilities.common.tools import get_tool_path from mysql.utilities.exception import UtilError _MY_PRINT_DEFAULTS_TOOL = "my_print_defaults" MYLOGIN_FILE = ".mylogin.cnf" def my_login_config_path(): """Return the default path of the mylogin file (.mylogin.cnf). """ if os.name == 'posix': # File located in $HOME for non-Windows systems return os.path.expanduser('~') else: # File located in %APPDATA%\MySQL for Windows systems return r'{0}\MySQL'.format(os.environ['APPDATA']) def my_login_config_exists(): """Check if the mylogin file (.mylogin.cnf) exists. """ my_login_fullpath = os.path.normpath(os.path.join(my_login_config_path(), MYLOGIN_FILE)) return os.path.isfile(my_login_fullpath) class MyDefaultsReader(object): """The MyDefaultsReader class is used to read the data stored from a MySQL configuration file. This class provide methods to read the options data stored in configurations files, using the my_print_defaults tool. To learn more about my_print_defaults see: http://dev.mysql.com/doc/en/my-print-defaults.html """ def __init__(self, options=None, find_my_print_defaults_tool=True): """Constructor options[in] dictionary of options (e.g. basedir). Note, allows options values from optparse to be passed directly to this parameter. find_my_print_defaults[in] boolean value indicating if the tool my_print_defaults should be located upon initialization of the object. 
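
        Example (a minimal sketch; assumes my_print_defaults is available
        on the system):
            reader = MyDefaultsReader(options={'basedir': None})
            data = reader.get_group_data('client')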
""" if options is None: options = {} # _config_data is a dictionary of option groups containing a dictionary # of the options data read from the configuration file. self._config_data = {} # Options values from optparse can be directly passed, check if it is # the case and handle them correctly. if isinstance(options, optparse.Values): try: self._basedir = options.basedir # pylint: disable=E1103 except AttributeError: # if the attribute is not found, then set it to None (default). self._basedir = None try: # if the attribute is not found, then set it to 0 (default). self._verbosity = options.verbosity # pylint: disable=E1103 except AttributeError: self._verbosity = 0 else: self._basedir = options.get("basedir", None) self._verbosity = options.get("verbosity", 0) if find_my_print_defaults_tool: self.search_my_print_defaults_tool() else: self._tool_path = None @property def tool_path(self): """Sets tool_path property """ return self._tool_path def search_my_print_defaults_tool(self, search_paths=None): """Search for the tool my_print_defaults. """ if not search_paths: search_paths = [] # Set the default search paths (i.e., default location of the # .mylogin.cnf file). default_paths = [my_login_config_path()] # Extend the list of path to search with the ones specified. if search_paths: default_paths.extend(search_paths) # Search for the tool my_print_defaults. try: self._tool_path = get_tool_path(self._basedir, _MY_PRINT_DEFAULTS_TOOL, defaults_paths=default_paths, search_PATH=True) except UtilError as err: raise UtilError("Unable to locate MySQL Client tools. " "Please confirm that the path to the MySQL client " "tools are included in the PATH. Error: %s" % err.errmsg) def check_show_required(self): """Check if the '--show' password option is required/supported by this version of the my_print_defaults tool. At MySQL Server 5.6.25 and 5.7.8, my_print_defaults' functionality changed to mask passwords by default and added the '--show' password option to display passwords in cleartext (BUG#19953365, BUG#20903330). As this module requires the password to be displayed as cleartext to extract the password, the use of the '--show' password option is also required starting on these version of the server, however the my_print_defaults tool version did not increase with this change, so this method looks at the output of the help text of my_print_defaults tool to determine if the '--show' password option is supported by the my_print_defaults tool available at _tool_path. Returns True if this version of the tool supports the'--show' password option, otherwise False. """ # The path to the tool must have been previously found. assert self._tool_path, ("First, the required MySQL tool must be " "found. E.g., use method " "search_my_print_defaults_tool.") # Create a temporary file to redirect stdout out_file = tempfile.TemporaryFile() if self._verbosity > 0: subprocess.call([self._tool_path, "--help"], stdout=out_file) else: # Redirect stderr to null null_file = open(os.devnull, "w+b") subprocess.call([self._tool_path, "--help"], stdout=out_file, stderr=null_file) # Read my_print_defaults help output text out_file.seek(0) lines = out_file.readlines() out_file.close() # find the "--show" option used to show passwords in plain text. for line in lines: if "--show" in line: return True # The option was not found in the tool help output. return False def check_tool_version(self, major_version, minor_version): """Check the version of the my_print_defaults tool. 
Returns True if the version of the tool is equal or above the one that is specified, otherwise False. """ # The path to the tool must have been previously found. assert self._tool_path, ("First, the required MySQL tool must be " "found. E.g., use method " "search_my_print_defaults_tool.") # Create a temporary file to redirect stdout out_file = tempfile.TemporaryFile() if self._verbosity > 0: subprocess.call([self._tool_path, "--version"], stdout=out_file) else: # Redirect stderr to null null_file = open(os.devnull, "w+b") subprocess.call([self._tool_path, "--version"], stdout=out_file, stderr=null_file) # Read --version output out_file.seek(0) line = out_file.readline() out_file.close() # Parse the version value match = re.search(r'(?:Ver )(\d)\.(\d)', line) if match: major, minor = match.groups() if (major_version < int(major)) or \ (major_version == int(major) and minor_version <= int(minor)): return True else: return False else: raise UtilError("Unable to determine tool version - %s" % self._tool_path) def check_login_path_support(self): """Checks if the used my_print_defaults tool supports login-paths. """ # The path to the tool must have been previously found. assert self._tool_path, ("First, the required MySQL tool must be " "found. E.g., use method " "search_my_print_defaults_tool.") # Create a temporary file to redirect stdout out_file = tempfile.TemporaryFile() if self._verbosity > 0: subprocess.call([self._tool_path, "--help"], stdout=out_file) else: # Redirect stderr to null null_file = open(os.devnull, "w+b") subprocess.call([self._tool_path, "--help"], stdout=out_file, stderr=null_file) # Read --help output out_file.seek(0) help_output = out_file.read() out_file.close() # Check the existence of a "login-path" option if 'login-path' in help_output: return True else: return False def _read_group_data(self, group): """Read group options data using my_print_defaults tool. """ # The path to the tool must have been previously found. assert self._tool_path, ("First, the required MySQL tool must be " "found. E.g., use method " "search_my_print_defaults_tool.") mp_cmd = [self._tool_path, group] if self.check_show_required(): mp_cmd.append("--show") # Group not found; use my_print_defaults to get group data. out_file = tempfile.TemporaryFile() if self._verbosity > 0: subprocess.call(mp_cmd, stdout=out_file) else: # Redirect stderr to null null_file = open(os.devnull, "w+b") subprocess.call(mp_cmd, stdout=out_file, stderr=null_file) # Read and parse group options values. out_file.seek(0) results = [] for line in out_file: # Parse option value; ignore starting "--" key_value = line[2:].split("=", 1) if len(key_value) == 2: # Handle option format: --key=value and --key= results.append((key_value[0], key_value[1].strip())) elif len(key_value) == 1: # Handle option format: --key results.append((key_value[0], True)) else: raise UtilError("Invalid option value format for " "group %s: %s" % (group, line)) out_file.close() if len(results): self._config_data[group] = dict(results) else: self._config_data[group] = None return self._config_data[group] def get_group_data(self, group): """Retrieve the data associated to the given group. """ # Returns group's data locally stored, if available. try: return self._config_data[group] except KeyError: # Otherwise, get it using my_print_defaults. return self._read_group_data(group) def get_option_value(self, group, opt_name): """Retrieve the value associated to the given opt_name in the group. """ # Get option value, if group's data is available. 
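        # (Illustrative: get_option_value('client', 'user') returns the
        # "user" option from the [client] group, or None if it is not set.)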
grp_options = self.get_group_data(group) if grp_options: return grp_options.get(opt_name, None) else: return None mysql-utilities-1.6.4/mysql/utilities/common/utilities.py0000644001577100752670000005172612747670311023442 0ustar pb2usercommon# # Copyright (c) 2011, 2016, Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This module contains classes and functions used to determine what MySQL utilities are installed, their options, and usage. This module can be used to allow a client to provide automatic type and option completion. """ import glob import os import sys import re import subprocess from mysql.utilities import AVAILABLE_UTILITIES from mysql.utilities.common.format import print_dictionary_list from mysql.utilities.common.tools import check_python_version from mysql.utilities.exception import UtilError _MAX_WIDTH = 78 # These utilities should not be used with the console _EXCLUDE_UTILS = ['mysqluc', ] RE_USAGE = ( r"(?P<Preamble>.*?)" r"(?P<Usage>Usage:\s.*?)\w+\s\-\s" # This matches the first # section, matching everything until " - " is found r"(?P<Description>.*?)" # Description is the text next # to " - ", up to the next match. r"(?P<Label>\w*):" # This is the beginning of the Options section r"(?P<Options>.*(?=^Introduction.\-{12})|.*$)" # match Options to the end or until "Introduction --" is found r"(?:^Introduction.\-{12}){0,1}" # non-capturing group r"(?P<Introduction>.*(?=^Helpful\sHints.\-{13})|.*$)" # captures Introduction (optional); # matches Introduction to the end or until "Helpful Hints --" r"(?:^Helpful\sHints.\-{13}){0,1}" # non-capturing group r"(?P<Hints>.*)" # captures Helpful Hints (optional) ) RE_OPTIONS = ( r"^(?P<Alias>\s\s\-.*?)\s{2,}" # Option alias, # followed by two or more spaces and its description r"(?P<Desc>.*?)(?=^\s\s\-)" # the description is all text # until another alias in the form <-|--Alias> is found # at the beginning of a line. ) RE_OPTION = r"\s+\-\-(.*?)\s" # match an alias of the form <--Alias> RE_ALIAS = r"\s+\-(\w+)\s*" # match an alias of the form <-Alias> WARNING_FAIL_TO_READ_OPTIONS = ("WARNING: {0} failed to read options."
" This utility will not be shown in 'help " "utilities' and cannot be accessed from the " "console.") def get_util_path(default_path=''): """Find the path to the MySQL utilities This method will attempt to locate the MySQL utility scripts. default_path[in] provides known location of utilities; if provided, the method will search this location first before searching PYTHONPATH Returns string - path to utilities or None if not found """ def _search_paths(needles, paths): """Search and return normalized path """ for path in paths: norm_path = os.path.normpath(path) hay_stack = [os.path.join(norm_path, n) for n in needles] for needle in hay_stack: if os.path.isfile(needle): return norm_path return None needle_name = 'mysqlreplicate' needles = [needle_name + ".py"] if os.name == "nt": needles.append(needle_name + ".exe") else: needles.append(needle_name) # Try the default by itself path_found = _search_paths(needles, [default_path]) if path_found: return path_found # Try the pythonpath environment variable pythonpath = os.getenv("PYTHONPATH") if pythonpath: # This is needed on Windows without a Python setup, because it needs # to find the executable scripts. path = _search_paths(needles, [os.path.join(n, "../") for n in pythonpath.split(os.pathsep)]) if path: return path path = _search_paths(needles, pythonpath.split(os.pathsep)) if path: return path # Try the system paths path_found = _search_paths(needles, sys.path) if path_found: return path_found return None class Utilities(object): """The utilities class can be used to discover what utilities are installed on the system as well as the usage and options for each utility. The list of utilities is read at initialization. This class is designed to support the following operations: get_util_matches() - find all utilities that match a prefix get_option_matches() - find all options that match a prefix for a given utility get_usage() - return the usage statement for a given utility show_utilities() - display a 2-column list of utilities and their descriptions show_options() - display a 2-column list of the options for a given utility including the name and description of each option """ def __init__(self, options=None): """Constructor """ if options is None: options = {} self.util_list = [] self.width = options.get('width', _MAX_WIDTH) self.util_path = get_util_path(options.get('utildir', '')) self.extra_utilities = options.get('add_util', {}) self.hide_utils = options.get('hide_util', False) self.program_usage = re.compile(RE_USAGE, re.S | re.M) self.program_options = re.compile(RE_OPTIONS, re.S | re.M) self.program_option = re.compile(RE_OPTION) self.program_name = re.compile(RE_ALIAS) self.util_cmd_dict = {} self.posible_utilities = {} self.posible_utilities.update(AVAILABLE_UTILITIES) if self.extra_utilities and self.hide_utils: self.posible_utilities = self.extra_utilities else: self.posible_utilities.update(self.extra_utilities) self.available_utilities = self.posible_utilities # Use items() to iterate over a copy so that entries can be removed # safely while iterating. for util_name, ver_compatibility in self.posible_utilities.items(): name_utility = "{0} utility".format(util_name) if ver_compatibility: min_v, max_v = ver_compatibility res = check_python_version(min_version=min_v, max_version=max_v, name=name_utility, print_on_fail=False, exit_on_fail=False, return_error_msg=True) else: res = check_python_version(name=name_utility, print_on_fail=False, exit_on_fail=False, return_error_msg=True) if isinstance(res, tuple): is_compat, error_msg = res if not is_compat: del self.available_utilities[util_name] print(WARNING_FAIL_TO_READ_OPTIONS.format(util_name)) print("ERROR:
{0}\n".format(error_msg)) continue self._find_utility_cmd(util_name) @staticmethod def find_executable(util_name): """Search the system path for an executable matching the utility util_name[in] Name of utility Returns string - name of executable (util_name or util_name.exe) or original name if not found on the system path """ paths = os.getenv("PATH").split(os.pathsep) for path in paths: new_path = os.path.join(path, util_name + "*") if os.name == "nt": new_path = '"{0}"'.format(new_path) found_path = glob.glob(new_path) if found_path: return os.path.split(found_path[0])[1] return util_name def _find_utility_cmd(self, utility_name): """ Locate the utility scripts utility_name[in] utility to find This method builds a dict of commands used to invoke the utilities. """ util_path = self.find_executable(os.path.join(self.util_path, utility_name)) util_path_parts = os.path.split(util_path) parts = os.path.splitext(util_path_parts[len(util_path_parts) - 1]) # filter extensions exts = ['.py', '.exe', '', '.pyc'] if (parts[0] not in _EXCLUDE_UTILS and (len(parts) == 1 or (len(parts) == 2 and parts[1] in exts))): util_name = str(parts[0]) file_ext = parts[1] command = "{0}{1}".format(util_name, file_ext) util_path = self.util_path utility_path = command if not os.path.exists(command): utility_path = os.path.join(util_path, utility_name) # Now try the extensions if not os.path.exists(utility_path): if file_ext: utility_path = "{0}{1}".format(utility_path, file_ext) else: for ext in exts: try_path = "{0}{1}".format(utility_path, ext) if os.path.exists(try_path): utility_path = try_path if not os.path.exists(utility_path): print("WARNING: Unable to locate utility {0}." "".format(utility_name)) print(WARNING_FAIL_TO_READ_OPTIONS.format(util_name)) return # Check for running against .exe if utility_path.endswith(".exe"): cmd = [] # Not using .exe else: cmd = [sys.executable] cmd.extend([utility_path]) self.util_cmd_dict[utility_name] = tuple(cmd) def find_utilities(self, this_utils=None): """ Locate the utility scripts this_utils[in] list of utilities to find, default None to find all. This method builds a list of utilities. """ if not this_utils: # No utility names were passed; find help for all utilities # not previously found in a previous call. utils = self.available_utilities working_utils = [util['name'] for util in self.util_list] utils = [name for name in utils if name not in working_utils] if len(utils) < 1: return else: # Utility names were given; find help for those that were # not previously found in a previous call. working_utils = [util['name'] for util in self.util_list] utils = [util for util in this_utils if util not in working_utils] if len(utils) < 1: return # Execute each utility command using get_util_info(), which returns # the --help output partially parsed.
for util_name in utils: if util_name in self.util_cmd_dict: cmd = self.util_cmd_dict.pop(util_name) util_info = self.get_util_info(list(cmd), util_name) if util_info and util_info["usage"]: util_info["cmd"] = tuple(cmd) self.util_list.append(util_info) working_utils.append(util_name) self.util_list.sort(key=lambda util_list: util_list['name']) def get_util_info(self, cmd, util_name): """Get information about utility cmd[in] a list with the elements that conform the command to invoke the utility util_name[in] name of utility to get information Returns dictionary - name, description, usage, options """ cmd.extend(["--help"]) # rmv print('executing ==> {0}'.format(cmd)) try: proc = subprocess.Popen(cmd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout_temp, stderr_temp = proc.communicate() returncode = proc.returncode except OSError: # always OS error if not found. # No such file or directory stdout_temp = "" returncode = 0 # Parse the help output and save the information found usage = None description = None if stderr_temp or returncode: print(WARNING_FAIL_TO_READ_OPTIONS.format(util_name)) if stderr_temp: print("The execution of the command returned: {0}" "".format(stderr_temp)) else: print("UNKNOWN. To diagnose, exit mysqluc and attempt the " "command: {0} --help".format(util_name)) return None res = self.program_usage.match(stdout_temp.replace("\r", "")) if not res: print(WARNING_FAIL_TO_READ_OPTIONS.format(util_name)) print("An error occurred while trying to parse the options " "from the utility") return None else: usage = res.group("Usage").replace("\n", "") desc_clean = res.group("Description").replace("\n", " ").split() description = (" ".join(desc_clean)) + " " # standardize string. Options = res.group("Options") + "\n -" # Create dictionary for the information utility_data = { 'name': util_name, 'description': description, 'usage': usage, 'options': Options } return utility_data def parse_all_options(self, utility): """ Parses all options for the given utility. utility[inout] that contains the options info to parse """ options_info = utility['options'] if isinstance(options_info, list): # nothing to do if it is a list. return options = [] res = self.program_options.findall(options_info) for opt in res: option = {} name = self.program_option.search(opt[0] + " ") if name: option['name'] = str(name.group(1)) alias = self.program_name.search(opt[0] + " ") if alias: option['alias'] = str(alias.group(1)) else: option['alias'] = None desc_clean = opt[1].replace("\n", " ").split() option['description'] = " ".join(desc_clean) option['long_name'] = option['name'] parts = option['name'].split('=') option['req_value'] = len(parts) == 2 if option['req_value']: option['name'] = parts[0] if option: options.append(option) utility['options'] = options def get_util_matches(self, util_prefix): """Get list of utilities that match a prefix util_prefix[in] prefix for name of utility Returns dictionary entry for utility based on matching first n chars """ matches = [] if not util_prefix.lower().startswith('mysql'): util_prefix = 'mysql' + util_prefix for util in self.available_utilities: if util[0:len(util_prefix)].lower() == util_prefix: matches.append(util) # make sure the utilities description has been found for the matches. 
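# --- Illustrative sketch (not part of the original module) ----------------
# The pattern used by get_util_info() above, reduced to its essentials:
# invoke a script with --help, capture stdout/stderr separately, and treat
# any stderr output or non-zero exit status as a failure. The command in
# the usage comment is hypothetical.
import subprocess

def read_help_text(cmd):
    """Return the --help text for cmd (a list of strings), or None."""
    proc = subprocess.Popen(cmd + ["--help"], shell=False,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    if err or proc.returncode:
        return None
    return out.replace("\r", "")

# Example: read_help_text([sys.executable, "mysqlreplicate.py"])
# ---------------------------------------------------------------------------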
self.find_utilities(matches) matches = [util for util in self.util_list if util['name'] in matches] return matches def get_option_matches(self, util_info, option_prefix, find_alias=False): """Get list of option dictionary entries for options that match the prefix. util_info[in] utility information option_prefix[in] prefix for option name find_alias[in] if True, match alias (default = False) Returns list of dictionary items that match prefix """ # Check type of util_info if util_info is None or util_info == {} or \ not isinstance(util_info, dict): raise UtilError("Empty or invalide utility dictionary.") matches = [] stop = len(option_prefix) if isinstance(util_info['options'], str): self.parse_all_options(util_info) for option in util_info['options']: if option is None: continue name = option.get('name', None) if name is None: continue if find_alias: if option.get('alias', '') == option_prefix: matches.append(option) else: if name[0:stop] == option_prefix: matches.append(option) return matches def show_utilities(self, print_list=None): """Show list of utilities as a 2-column list. print_list[in] list of utilities to print - default is None which means print all utilities """ if print_list is None: if len(self.util_list) != len(self.available_utilities): self.find_utilities() list_of_utilities = self.util_list else: list_of_utilities = print_list print if len(list_of_utilities) > 0: print_dictionary_list(['Utility', 'Description'], ['name', 'description'], list_of_utilities, self.width) else: print print "No utilities match the search term." print def get_options_dictionary(self, utility_options): """Retrieve the options dictionary. This method builds a new dictionary that contains the options for the utilities read. utility_options[in] list of options for utilities or the utility. Return dictionary - list of options for all utilities. """ dictionary_list = [] if isinstance(utility_options, dict): if isinstance(utility_options['options'], str): # options had not been parsed yet self.parse_all_options(utility_options) options = utility_options['options'] else: options = utility_options for option in options: name = option.get('long_name', '') if len(name) == 0: continue name = '--' + name alias = option.get('alias', None) if alias is not None: name = '-' + alias + ", " + name item = { 'long_name': name, 'description': option.get('description', '') } dictionary_list.append(item) return dictionary_list def show_options(self, options): """Show list of options for a utility by name. options[in] structure containing the options This method displays a list of the options and their descriptions for the given utility. """ if len(options) > 0: dictionary_list = self.get_options_dictionary(options) print print print_dictionary_list(['Option', 'Description'], ['long_name', 'description'], dictionary_list, self.width) print @staticmethod def get_usage(util_info): """Get the usage statement for the utility util_info[in] dictionary entry for utility information Returns string usage statement """ # Check type of util_info if util_info is None or util_info == {} or \ not isinstance(util_info, dict): return False return util_info['usage'] def kill_process(pid, force=False, silent=False): """This function tries to kill the given subprocess. pid [in] Process id of the subprocess to kill. force [in] Boolean value, if False try to kill process with SIGTERM (Posix only) else kill it forcefully. silent[in] If true, do no print message Returns True if operation was successful and False otherwise. 
""" res = True if os.name == "posix": if force: os.kill(pid, subprocess.signal.SIGABRT) else: os.kill(pid, subprocess.signal.SIGTERM) else: with open(os.devnull, 'w') as f_out: ret_code = subprocess.call("taskkill /F /T /PID {0}".format(pid), shell=True, stdout=f_out, stdin=f_out) if ret_code not in (0, 128): res = False if not silent: print("Unable to successfully kill process with PID " "{0}".format(pid)) return res mysql-utilities-1.6.4/mysql/utilities/common/options_parser.py0000644001577100752670000002443512747670311024473 0ustar pb2usercommon# # Copyright (c) 2014, Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This module contains the MySQLOptionsParser used to read the MySQL configuration files. This module belongs to Connector python, and it should be removed once C/py v2.0.0 is released and in the meanwhile will be used from here. """ import codecs import io import os import re from ConfigParser import SafeConfigParser, MissingSectionHeaderError DEFAULT_OPTION_FILES = { 'nt': 'C:\\my.ini', 'posix': '/etc/mysql/my.cnf' } DEFAULT_EXTENSIONS = { 'nt': ('ini', 'cnf'), 'posix': 'cnf' } class MySQLOptionsParser(SafeConfigParser): """This class implements methods to parse MySQL option files""" def __init__(self, files=None, keep_dashes=True): """Initialize files[in] The files to parse searching for configuration items. keep_dashes[in] If False, dashes in options are replaced with underscores. Raises ValueError if defaults is set to True but defaults files cannot be found. """ # Regular expression to allow options with no value(For Python v2.6) self.OPTCRE = re.compile( # pylint: disable=C0103 r'(?P
). output_format[in] Output format to export data. return a string with the generated file name. """ # Store result of table export to a separated file. if output_format == 'sql': return "{0}.sql".format(table_name) else: return "{0}.{1}".format(table_name, output_format.lower()) def get_copy_lock(server, db_list, options, include_mysql=False, cloning=False): """Get an instance of the Lock class with a standard copy (read) lock This method creates an instance of the Lock class using the lock type specified in the options. It is used to initiate the locks for the copy and related operations. server[in] Server instance for locking calls db_list[in] list of database names options[in] option dictionary Must include the skip_* options for copy and export include_mysql[in] if True, include the mysql tables for copy operation cloning[in] if True, create lock tables with WRITE on dest db Default = False Returns Lock - Lock class instance """ rpl_mode = options.get("rpl_mode", None) locking = options.get('locking', 'snapshot') # Determine if we need to use FTWRL. There are two conditions: # - running on master (rpl_mode = 'master') # - using locking = 'lock-all' and rpl_mode present if (rpl_mode in ["master", "both"]) or \ (rpl_mode and locking == 'lock-all'): new_opts = options.copy() new_opts['locking'] = 'flush' lock = Lock(server, [], new_opts) # if this is a lock-all type and not replication operation, # find all tables and lock them elif locking == 'lock-all': table_lock_list = [] # Build table lock list for db_name in db_list: db = db_name[0] if type(db_name) == tuple else db_name source_db = Database(server, db) tables = source_db.get_db_objects("TABLE") for table in tables: table_lock_list.append(("{0}.{1}".format(db, table[0]), 'READ')) # Cloning requires issuing WRITE locks because we use same # conn. # Non-cloning will issue WRITE lock on a new destination conn. if cloning: if db_name[1] is None: db_clone = db_name[0] else: db_clone = db_name[1] # For cloning, we use the same connection so we need to # lock the destination tables with WRITE. table_lock_list.append(("{0}.{1}".format(db_clone, table[0]), 'WRITE')) # We must include views for server version 5.5.3 and higher if server.check_version_compat(5, 5, 3): tables = source_db.get_db_objects("VIEW") for table in tables: table_lock_list.append(("{0}.{1}".format(db, table[0]), 'READ')) # Cloning requires issuing WRITE locks because we use same # conn. # Non-cloning will issue WRITE lock on a new destination # conn. if cloning: if db_name[1] is None: db_clone = db_name[0] else: db_clone = db_name[1] # For cloning, we use the same connection so we need to # lock the destination tables with WRITE. table_lock_list.append(("{0}.{1}".format(db_clone, table[0]), 'WRITE')) # Now add mysql tables if include_mysql: # Don't lock proc tables if no procs of funcs are being read if not options.get('skip_procs', False) and \ not options.get('skip_funcs', False): table_lock_list.append(("mysql.proc", 'READ')) table_lock_list.append(("mysql.procs_priv", 'READ')) # Don't lock event table if events are skipped if not options.get('skip_events', False): table_lock_list.append(("mysql.event", 'READ')) lock = Lock(server, table_lock_list, options) # Use default or no locking option else: lock = Lock(server, [], options) return lock def get_change_master_command(source, options): """Get the CHANGE MASTER command for export or copy of databases This method creates the replication commands based on the options chosen. 
This includes the stop and start slave commands as well as the change master command as follows. To create the CHANGE MASTER command for connecting to the existing server as the master, set rpl_mode = 'master'. To create the CHANGE MASTER command for connecting to the existing server's master (that is, for using the existing server as a slave), set rpl_mode = 'slave'. You can also get both CHANGE MASTER commands by setting rpl_mode = 'both'. In this case, the second change master command (rpl_mode = 'slave') will be commented out. The method also checks the rpl_file option. If a file name is provided, it is checked; if the file cannot be created or the user does not have access, an error is thrown. If no file is provided, the method writes the commands to stdout. The user may also have the replication commands commented out by specifying the comment_rpl option (True = comment). The method calls the negotiate_rpl_connection method of the replication module to create the CHANGE MASTER command. Additional error checking is performed in that method as follows. See the negotiate_rpl_connection method documentation for complete specifics. - binary log must be ON for a master - the rpl_user must exist source[in] Server instance options[in] option dictionary Returns tuple - CHANGE MASTER command[s], output file for writing commands. """ if options is None: options = {} rpl_file = None rpl_cmds = [] rpl_filename = options.get("rpl_file", "") rpl_mode = options.get("rpl_mode", "master") quiet = options.get("quiet", False) # Check for rpl_filename and create file. if rpl_filename: rpl_file = rpl_filename try: rf = open(rpl_filename, "w") except: raise UtilError("File inaccessible or bad path: " "{0}".format(rpl_filename)) rf.write("# Replication Commands:\n") rf.close() strict = rpl_mode == 'both' or options.get("strict", False) # Get change master as if this server were the master if rpl_mode in ["master", "both"]: if not quiet: rpl_cmds.append("# Connecting to the current server as master") change_master = negotiate_rpl_connection(source, True, strict, options) rpl_cmds.extend(change_master) # Get change master using this slave's master information if rpl_mode in ["slave", "both"]: if not quiet: rpl_cmds.append("# Connecting to the current server's master") change_master = negotiate_rpl_connection(source, False, strict, options) rpl_cmds.extend(change_master) return rpl_cmds, rpl_file def get_gtid_commands(master): """Get the GTID commands for beginning and ending operations This method returns those commands needed at the start of an export/copy operation (turn off session binlog, setting GTIDs) and those needed at the end of an export/copy operation (turn on binlog session). master[in] Master connection information Returns tuple - (list of commands for start, command for end), or None if GTIDs are not enabled. """ if not master.supports_gtid() == "ON": return None rows = master.exec_query(_GET_GTID_EXECUTED) master_gtids_list = ["%s" % row[0] for row in rows] master_gtids = ",".join(master_gtids_list) if len(master_gtids_list) == 1 and rows[0][0] == '': return None return ([_SESSION_BINLOG_OFF1, _SESSION_BINLOG_OFF2, _SET_GTID_PURGED.format(master_gtids)], _SESSION_BINLOG_ON) def write_commands(target_file, rows, options, extra_linespacing=False, comment=False, comment_prefix="#"): """Write commands to file or stdout This method writes the rows passed to either a file specified in the rpl_file option or stdout if no file is specified.
file[in] filename to use or None for sys.stdout rows[in] rows to write options[in] replication options """ quiet = options.get("quiet", False) verbosity = options.get("verbosity", 0) # Open the file for append if target_file: out_file = target_file else: out_file = sys.stdout if extra_linespacing and not quiet and verbosity: out_file.write("#\n") # Write rows. for row in rows: if comment: if row.startswith(comment_prefix): # Row already start with comment prefix, no need to add it. out_file.write("{0}\n".format(row)) else: out_file.write("{0} {1}\n".format(comment_prefix, row)) else: out_file.write("{0}\n".format(row)) if extra_linespacing and not quiet and verbosity: out_file.write("#\n") def multiprocess_db_export_task(export_db_task): """Multiprocess export database method. This method wraps the export_database method to allow its concurrent execution by a pool of processes. export_db_task[in] dictionary of values required by a process to perform the database export task, namely: {'srv_con': , 'db_list': , 'options': , } """ # Get input values to execute task. srv_con_values = export_db_task.get('srv_con') db_list = export_db_task.get('db_list') options = export_db_task.get('options') # Create temporay file to hold export data. outfile = tempfile.NamedTemporaryFile(delete=False) # Execute export databases task. # NOTE: Must handle any exception here, because worker processes will not # propagate them to the main process. try: export_databases(srv_con_values, db_list, outfile, options) return outfile.name except UtilError: _, err, _ = sys.exc_info() print("ERROR: {0}".format(err.errmsg)) except Exception: _, err, _ = sys.exc_info() print("UNEXPECTED ERROR: {0}".format(err.errmsg)) def multiprocess_tbl_export_task(export_tbl_task): """Multiprocess export table data method. This method wraps the table data export to allow its concurrent execution by a pool of processes. export_tbl_task[in] dictionary of values required by a process to perform the table export task, namely: {'srv_con': , 'table':
, 'options': , } """ # Get input to execute task. source_srv = export_tbl_task.get('srv_con') table = export_tbl_task.get('table') options = export_tbl_task.get('options') # Execute export table task. # NOTE: Must handle any exception here, because worker processes will not # propagate them to the main process. try: return _export_table_data(source_srv, table, None, options) except UtilError: _, err, _ = sys.exc_info() print("ERROR exporting data for table '{0}': {1}".format(table, err.errmsg)) def export_databases(server_values, db_list, output_file, options): """Export one or more databases This method performs the export of a list of databases first dumping the definitions then the data. It supports dumping replication commands (STOP SLAVE, CHANGE MASTER, START SLAVE) for exporting data for use in replication scenarios. server_values[in] server connection value dictionary. db_list[in] list of database names. output_file[in] file to store export output. options[in] option dictionary. Note: Must include the skip_* options for export. """ fkeys_present = False export = options.get("export", "definitions") rpl_mode = options.get("rpl_mode", "master") quiet = options.get("quiet", False) skip_gtids = options.get("skip_gtid", False) # default: generate GTIDs skip_fkeys = options.get("skip_fkeys", False) # default: gen fkeys stmts conn_options = { 'quiet': quiet, 'version': "5.1.30", } servers = connect_servers(server_values, None, conn_options) source = servers[0] # Retrieve all databases, if --all is used. if options.get("all", False): rows = source.get_all_databases() for row in rows: if row[0] not in db_list: db_list.append(row[0]) # Check user permissions on source server for all databases. check_read_permissions(source, db_list, options) # Check for GTID support supports_gtid = servers[0].supports_gtid() if not skip_gtids and not supports_gtid == 'ON': skip_gtids = True elif skip_gtids and supports_gtid == 'ON': output_file.write(_GTID_WARNING) if not skip_gtids and supports_gtid == 'ON': # Check GTID version for complete feature support servers[0].check_gtid_version() warning_printed = False # Check to see if this is a full export (complete backup) all_dbs = servers[0].exec_query("SHOW DATABASES") for db in all_dbs: if warning_printed: continue # Internal databases 'sys' added by default for MySQL 5.7.7+. if db[0].upper() in ["MYSQL", "INFORMATION_SCHEMA", "PERFORMANCE_SCHEMA", "SYS"]: continue if not db[0] in db_list: output_file.write(_GTID_BACKUP_WARNING) warning_printed = True # Check for existence of foreign keys fkeys_enabled = servers[0].foreign_key_checks_enabled() if fkeys_enabled and skip_fkeys: output_file.write("# WARNING: Output contains tables with foreign key " "contraints. You should disable foreign key checks " "prior to importing this stream.\n") elif fkeys_enabled and db_list: db_name_list = ["'{0}'".format(db) for db in db_list] res = source.exec_query(_FKEYS.format(",".join(db_name_list))) if res and res[0]: fkeys_present = True write_commands(output_file, [_FKEYS_SWITCH.format("0")], options, True) # Lock tables first my_lock = get_copy_lock(source, db_list, options, True) # Determine comment prefix for rpl commands. 
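# --- Illustrative sketch (not part of the original module) ----------------
# The pool-worker pattern used by multiprocess_db_export_task() and
# multiprocess_tbl_export_task() above: each task is a dict of arguments,
# and the wrapper handles errors itself rather than letting them escape
# the worker process. do_export() is a purely hypothetical stand-in for
# the real export call.
import multiprocessing

def do_export(name):
    """Placeholder for the real per-database export work (hypothetical)."""
    return "exported {0}".format(name)

def export_task(task):
    """Pool worker wrapper: report errors instead of raising."""
    try:
        return do_export(task["db"])
    except Exception as err:
        print("ERROR: {0}".format(err))
        return None

if __name__ == "__main__":
    pool = multiprocessing.Pool(processes=2)
    tasks = [{"db": "db1"}, {"db": "db2"}]  # hypothetical task list
    print(pool.map(export_task, tasks))
# ---------------------------------------------------------------------------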
rpl_cmt_prefix = "" rpl_cmt = False if options.get("comment_rpl", False) or rpl_mode == "both": rpl_cmt_prefix = "#" rpl_cmt = True if options.get("format", "sql") != 'sql': rpl_cmt_prefix = _RPL_PREFIX rpl_cmt = True # if --rpl specified, write initial replication command rpl_info = None rpl_file = None if rpl_mode: rpl_info = get_change_master_command(source, options) if rpl_info[_RPL_FILE]: rpl_file = open(rpl_info[_RPL_FILE], 'w') else: rpl_file = output_file write_commands(rpl_file, ["STOP SLAVE;"], options, True, rpl_cmt, rpl_cmt_prefix) # if GTIDs enabled and user requested the output, write the GTID commands if skip_gtids: gtid_info = None else: gtid_info = get_gtid_commands(source) if gtid_info: write_commands(output_file, gtid_info[0], options, True, rpl_cmt, rpl_cmt_prefix) # dump metadata if export in ("definitions", "both"): _export_metadata(source, db_list, output_file, options) # dump data if export in ("data", "both"): if options.get("display", "brief") != "brief": output_file.write( "# NOTE : --display is ignored for data export.\n" ) _export_data(source, server_values, db_list, output_file, options) # if GTIDs enabled, write the GTID-related commands if gtid_info: write_commands(output_file, [gtid_info[1]], options, True, rpl_cmt, rpl_cmt_prefix) # if --rpl specified, write replication end command if rpl_mode: write_commands(rpl_file, rpl_info[_RPL_COMMANDS], options, True, rpl_cmt, rpl_cmt_prefix) write_commands(rpl_file, ["START SLAVE;"], options, True, rpl_cmt, rpl_cmt_prefix) # Last command wrote rpl_file, close it. if rpl_info[_RPL_FILE]: rpl_file.close() my_lock.unlock() if fkeys_present and fkeys_enabled and not skip_fkeys: write_commands(output_file, [_FKEYS_SWITCH.format("1")], options, True) mysql-utilities-1.6.4/mysql/utilities/command/rpl_admin.py0000644001577100752670000012623512747670311023520 0ustar pb2usercommon# # Copyright (c) 2010, 2016, Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This file contains the replication administration tools for managine a simple master-to-slaves topology. 
""" import logging import os import sys import time from datetime import datetime, timedelta from mysql.utilities.exception import UtilRplError from mysql.utilities.common.gtid import gtid_set_itemize from mysql.utilities.common.ip_parser import hostname_is_ip from mysql.utilities.common.messages import (ERROR_SAME_MASTER, ERROR_USER_WITHOUT_PRIVILEGES, HOST_IP_WARNING, EXTERNAL_SCRIPT_DOES_NOT_EXIST, INSUFFICIENT_FILE_PERMISSIONS) from mysql.utilities.common.tools import ping_host, execute_script from mysql.utilities.common.format import print_list from mysql.utilities.common.topology import Topology from mysql.utilities.command.failover_console import FailoverConsole from mysql.utilities.command.failover_daemon import FailoverDaemon _VALID_COMMANDS_TEXT = """ Available Commands: elect - perform best slave election and report best slave failover - conduct failover from master to best slave gtid - show status of global transaction id variables also displays uuids for all servers health - display the replication health reset - stop and reset all slaves start - start all slaves stop - stop all slaves switchover - perform slave promotion Note: elect, gtid and health require --master and either --slaves or --discover-slaves-login; failover requires --slaves; switchover requires --master, --new-master and either --slaves or --discover-slaves-login; start, stop and reset require --slaves (and --master is optional) """ _VALID_COMMANDS = ["elect", "failover", "gtid", "health", "reset", "start", "stop", "switchover"] _SLAVE_COMMANDS = ["reset", "start", "stop"] _MASTER_COLS = ["Host", "Port", "Binary Log File", "Position"] _SLAVE_COLS = ["Host", "Port", "Master Log File", "Position", "Seconds Behind"] _GTID_COLS = ["host", "port", "role", "gtid"] _FAILOVER_ERROR = "%sCheck server for errors and run the mysqlrpladmin " + \ "utility to perform manual failover." _FAILOVER_ERRNO = 911 _DATE_FORMAT = '%Y-%m-%d %H:%M:%S %p' _DATE_LEN = 22 _ERRANT_TNX_ERROR = "Errant transaction(s) found on slave(s)." _GTID_ON_REQ = "{action} requires GTID_MODE=ON for all servers." WARNING_SLEEP_TIME = 10 def get_valid_rpl_command_text(): """Provide list of valid command descriptions to caller. """ return _VALID_COMMANDS_TEXT def get_valid_rpl_commands(): """Provide list of valid commands to caller. """ return _VALID_COMMANDS def purge_log(filename, age): """Purge old log entries This method deletes rows from the log file older than the age specified in days. filename[in] filename of log fil age[in] age in days Returns bool - True = success, Fail = error reading/writing log file """ if not os.path.exists(filename): print "NOTE: Log file '%s' does not exist. Will be created." % filename return True # Read a row, check age. If > today + age, delete row. # Ignore user markups and other miscellaneous entries. try: log = open(filename, "r") log_entries = log.readlines() log.close() threshold = datetime.now() - timedelta(days=age) start = 0 for row in log_entries: # Check age here try: row_time = time.strptime(row[0:_DATE_LEN], _DATE_FORMAT) row_age = datetime(*row_time[:6]) if row_age < threshold: start += 1 elif start == 0: return True else: break except: start += 1 # Remove invalid formatted lines log = open(filename, "w") log.writelines(log_entries[start:]) log.close() except: return False return True def skip_slaves_trx(gtid_set, slaves_cnx_val, options): """Skip transactions on slaves. This method skips the given transactions (GTID set) on all the specified slaves. 
That is, an empty transaction is injected for each GTID in the given set for one of each slaves. In case a slave already has an executed transaction for a given GTID then that GTID is ignored for this slave. gtid_set[in] String representing the set of GTIDs to skip. slaves_cnx_val[in] List of the dictionaries with the connection values for each target slave. options[in] Dictionary of options (dry_run, verbosity). Throws an UtilError exception if an error occurs during the execution. """ verbosity = options.get('verbosity') dryrun = options.get('dry_run') # Connect to slaves. rpl_topology = Topology(None, slaves_cnx_val, options) # Check required privileges. errors = rpl_topology.check_privileges(skip_master=True) if errors: err_details = '' for err in errors: err_msg = ERROR_USER_WITHOUT_PRIVILEGES.format( user=err[0], host=err[1], port=err[2], operation='inject empty transactions', req_privileges=err[3]) err_details = '{0}{1}\n'.format(err_details, err_msg) err_details.strip() raise UtilRplError("Not enough privileges.\n{0}".format(err_details)) # GTID must be enabled on all servers. srv_list = rpl_topology.get_servers_with_gtid_not_on() if srv_list: if verbosity: print("# Slaves with GTID not enabled:") for srv in srv_list: msg = "# - GTID_MODE={0} on {1}:{2}".format(srv[2], srv[0], srv[1]) print(msg) raise UtilRplError(_GTID_ON_REQ.format(action='Transaction skip')) if dryrun: print("#") print("# WARNING: Executing utility in dry run mode (read only).") # Get GTID set that can be skipped, i.e., not in GTID_EXECUTED. gtids_by_slave = rpl_topology.slaves_gtid_subtract_executed(gtid_set) # Output GTID set that will be skipped. print("#") print("# GTID set to be skipped for each server:") has_gtid_to_skip = False for host, port, gtids_to_skip in gtids_by_slave: if not gtids_to_skip: gtids_to_skip = 'None' else: # Set flag to indicate that there is at least one GTID to skip. has_gtid_to_skip = True print("# - {0}@{1}: {2}".format(host, port, gtids_to_skip)) # Create dictionary to directly access the slaves instances. slaves_dict = rpl_topology.get_slaves_dict() # Skip transactions for the given list of slaves. print("#") if has_gtid_to_skip: for host, port, gtids_to_skip in gtids_by_slave: if gtids_to_skip: # Decompose GTID set into a list of single transactions. gtid_items = gtid_set_itemize(gtids_to_skip) dryrun_mark = '(dry run) ' if dryrun else '' print("# {0}Injecting empty transactions for '{1}:{2}'" "...".format(dryrun_mark, host, port)) slave_key = '{0}@{1}'.format(host, port) slave_srv = slaves_dict[slave_key]['instance'] for uuid, trx_list in gtid_items: for trx_num in trx_list: trx_to_skip = '{0}:{1}'.format(uuid, trx_num) if verbosity: print("# - {0}".format(trx_to_skip)) if not dryrun: # Inject empty transaction. slave_srv.inject_empty_trx( trx_to_skip, gtid_next_automatic=False) if not dryrun: slave_srv.set_gtid_next_automatic() else: print("# No transaction to skip.") print("#\n#...done.\n#") class RplCommands(object): """Replication commands. This class supports the following replication commands. elect - perform best slave election and report best slave failover - conduct failover from master to best slave as specified by the user. This option performs best slave election. gtid - show status of global transaction id variables health - display the replication health reset - stop and reset all slaves start - start all slaves stop - stop all slaves switchover - perform slave promotion as specified by the user to a specific slave. Requires --master and the --candidate options. 
""" def __init__(self, master_vals, slave_vals, options, skip_conn_err=True): """Constructor master_vals[in] master server connection dictionary slave_vals[in] list of slave server connection dictionaries options[in] options dictionary skip_conn_err[in] if True, do not fail on connection failure Default = True """ # A sys.stdout copy, that can be used later to turn on/off stdout self.stdout_copy = sys.stdout self.stdout_devnull = open(os.devnull, "w") # Disable stdout when running --daemon with start, stop or restart daemon = options.get("daemon") if daemon: if daemon in ("start", "nodetach"): print("Starting failover daemon...") elif daemon == "stop": print("Stopping failover daemon...") else: print("Restarting failover daemon...") # Disable stdout if daemon not nodetach if daemon != "nodetach": sys.stdout = self.stdout_devnull self.master = None self.master_vals = master_vals self.options = options self.quiet = self.options.get("quiet", False) self.logging = self.options.get("logging", False) self.candidates = self.options.get("candidates", None) self.verbose = self.options.get("verbose", None) self.rpl_user = self.options.get("rpl_user", None) self.ssl_ca = options.get("ssl_ca", None) self.ssl_cert = options.get("ssl_cert", None) self.ssl_key = options.get("ssl_key", None) if self.ssl_ca or self.ssl_cert or self.ssl_key: self.ssl = True try: self.topology = Topology(master_vals, slave_vals, self.options, skip_conn_err) except Exception as err: if daemon and daemon != "nodetach": # Turn on sys.stdout sys.stdout = self.stdout_copy raise UtilRplError(str(err)) def _report(self, message, level=logging.INFO, print_msg=True): """Log message if logging is on This method will log the message presented if the log is turned on. Specifically, if options['log_file'] is not None. It will also print the message to stdout. message[in] message to be printed level[in] level of message to log. Default = INFO print_msg[in] if True, print the message to stdout. Default = True """ # First, print the message. if print_msg and not self.quiet: print message # Now log message if logging turned on if self.logging: logging.log(int(level), message.strip("#").strip(' ')) def _show_health(self): """Run a command on a list of slaves. This method will display the replication health of the topology. This includes the following for each server. - host : host name - port : connection port - role : "MASTER" or "SLAVE" - state : UP = connected, WARN = cannot connect but can ping, DOWN = cannot connect nor ping - gtid : ON = gtid supported and turned on, OFF = supported but not enabled, NO = not supported - rpl_health : (master) binlog enabled, (slave) IO tread is running, SQL thread is running, no errors, slave delay < max_delay, read log pos + max_position < master's log position Note: Will show 'ERROR' if there are multiple errors encountered otherwise will display the health check that failed. If verbosity is set, it will show the following additional information. (master) - server version, binary log file, position (slaves) - server version, master's binary log file, master's log position, IO_Thread, SQL_Thread, Secs_Behind, Remaining_Delay, IO_Error_Num, IO_Error """ fmt = self.options.get("format", "grid") quiet = self.options.get("quiet", False) cols, rows = self.topology.get_health() if not quiet: print "#" print "# Replication Topology Health:" # Print health report print_list(sys.stdout, fmt, cols, rows) return def _show_gtid_data(self): """Display the GTID lists from the servers. 
This method displays the three GTID lists for all of the servers. Each server is listed with its entries in each list. If a list has no entries, that list is not printed. """ if not self.topology.gtid_enabled(): self._report("# WARNING: GTIDs are not supported on this " "topology.", logging.WARN) return fmt = self.options.get("format", "grid") # Get UUIDs uuids = self.topology.get_server_uuids() if len(uuids): print "#" print "# UUIDS for all servers:" print_list(sys.stdout, fmt, ['host', 'port', 'role', 'uuid'], uuids) # Get GTID lists executed, purged, owned = self.topology.get_gtid_data() if len(executed): print "#" print "# Transactions executed on the server:" print_list(sys.stdout, fmt, _GTID_COLS, executed) if len(purged): print "#" print "# Transactions purged from the server:" print_list(sys.stdout, fmt, _GTID_COLS, purged) if len(owned): print "#" print "# Transactions owned by another server:" print_list(sys.stdout, fmt, _GTID_COLS, owned) def _check_host_references(self): """Check to see if using all host or all IP addresses Returns bool - True = all references are consistent """ uses_ip = hostname_is_ip(self.topology.master.host) for slave_dict in self.topology.slaves: slave = slave_dict['instance'] if slave is not None: host_port = slave.get_master_host_port() host = None if host_port: host = host_port[0] if (not host or uses_ip != hostname_is_ip(slave.host) or uses_ip != hostname_is_ip(host)): return False return True def _switchover(self): """Perform switchover from master to candidate slave This method switches the role of master to a candidate slave. The candidate is specified via the --candidate option. Returns bool - True = no errors, False = errors reported. """ # Check new master is not actual master - need valid candidate candidate = self.options.get("new_master", None) if (self.topology.master.is_alias(candidate['host']) and self.master_vals['port'] == candidate['port']): err_msg = ERROR_SAME_MASTER.format(candidate['host'], candidate['port'], self.master_vals['host'], self.master_vals['port']) self._report(err_msg, logging.WARN) self._report(err_msg, logging.CRITICAL) raise UtilRplError(err_msg) # Check for --master-info-repository=TABLE if rpl_user is None if not self._check_master_info_type(): return False # Check for mixing IP and hostnames if not self._check_host_references(): print("# WARNING: {0}".format(HOST_IP_WARNING)) self._report(HOST_IP_WARNING, logging.WARN, False) # Check prerequisites if candidate is None: msg = "No candidate specified." self._report(msg, logging.CRITICAL) raise UtilRplError(msg) # Can only check errant transactions if GTIDs are enabled. 
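# --- Illustrative sketch (not part of the original module) ----------------
# _check_host_references() above relies on hostname_is_ip() to ensure the
# topology does not mix IP addresses with host names. A minimal stand-in
# for an IPv4-only check can be written with the standard socket module;
# this helper is an assumption, not the utilities' own implementation.
import socket

def looks_like_ipv4(host):
    """Return True if host parses as a dotted-quad IPv4 address."""
    try:
        socket.inet_aton(host)
    except socket.error:
        return False
    return host.count(".") == 3

# Example: looks_like_ipv4("127.0.0.1") -> True; looks_like_ipv4("localhost")
# -> False.
# ---------------------------------------------------------------------------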
if self.topology.gtid_enabled(): # Check existence of errant transactions on slaves errant_tnx = self.topology.find_errant_transactions() if errant_tnx: force = self.options.get('force') print("# ERROR: {0}".format(_ERRANT_TNX_ERROR)) self._report(_ERRANT_TNX_ERROR, logging.ERROR, False) for host, port, tnx_set in errant_tnx: errant_msg = (" - For slave '{0}@{1}': " "{2}".format(host, port, ", ".join(tnx_set))) print("# {0}".format(errant_msg)) self._report(errant_msg, logging.ERROR, False) # Raise an exception (to stop) if tolerant mode is OFF if not force: raise UtilRplError("{0} Note: If you want to ignore this " "issue, although not advised, please " "use the utility with the --force " "option.".format(_ERRANT_TNX_ERROR)) else: warn_msg = ("Errant transactions check skipped (GTID not enabled " "for the whole topology).") print("# WARNING: {0}".format(warn_msg)) self._report(warn_msg, logging.WARN, False) self._report(" ".join(["# Performing switchover from master at", "%s:%s" % (self.master_vals['host'], self.master_vals['port']), "to slave at %s:%s." % (candidate['host'], candidate['port'])])) if not self.topology.switchover(candidate): self._report("# Errors found. Switchover aborted.", logging.ERROR) return False return True def _elect_slave(self): """Perform best slave election This method determines which slave is the best candidate for GTID-enabled failover. If called for a non-GTID topology, a warning is issued. """ if not self.topology.gtid_enabled(): warn_msg = _GTID_ON_REQ.format(action='Slave election') print("# WARNING: {0}".format(warn_msg)) self._report(warn_msg, logging.WARN, False) return # Check for mixing IP and hostnames if not self._check_host_references(): print("# WARNING: {0}".format(HOST_IP_WARNING)) self._report(HOST_IP_WARNING, logging.WARN, False) candidates = self.options.get("candidates", None) if candidates is None or len(candidates) == 0: self._report("# Electing candidate slave from known slaves.") else: self._report("# Electing candidate slave from candidate list " "then slaves list.") best_slave = self.topology.find_best_slave(candidates) if best_slave is None: self._report("ERROR: No slave found that meets eligilibility " "requirements.", logging.ERROR) return self._report("# Best slave found is located on %s:%s." % (best_slave['host'], best_slave['port'])) def _failover(self, strict=False, options=None): """Perform failover This method executes GTID-enabled failover. If called for a non-GTID topology, a warning is issued. strict[in] if True, use only the candidate list for slave election and fail if no candidates are viable. Default = False options[in] options dictionary. 
Returns bool - True = failover succeeded, False = errors found """ if options is None: options = {} srv_list = self.topology.get_servers_with_gtid_not_on() if srv_list: err_msg = _GTID_ON_REQ.format(action='Slave election') print("# ERROR: {0}".format(err_msg)) self._report(err_msg, logging.ERROR, False) for srv in srv_list: msg = "# - GTID_MODE={0} on {1}:{2}".format(srv[2], srv[0], srv[1]) self._report(msg, logging.ERROR) self._report(err_msg, logging.CRITICAL, False) raise UtilRplError(err_msg) # Check for --master-info-repository=TABLE if rpl_user is None if not self._check_master_info_type(): return False # Check existence of errant transactions on slaves errant_tnx = self.topology.find_errant_transactions() if errant_tnx: force = options.get('force') print("# ERROR: {0}".format(_ERRANT_TNX_ERROR)) self._report(_ERRANT_TNX_ERROR, logging.ERROR, False) for host, port, tnx_set in errant_tnx: errant_msg = (" - For slave '{0}@{1}': " "{2}".format(host, port, ", ".join(tnx_set))) print("# {0}".format(errant_msg)) self._report(errant_msg, logging.ERROR, False) # Raise an exception (to stop) if tolerant mode is OFF if not force: raise UtilRplError("{0} Note: If you want to ignore this " "issue, although not advised, please use " "the utility with the --force option." "".format(_ERRANT_TNX_ERROR)) self._report("# Performing failover.") if not self.topology.failover(self.candidates, strict, stop_on_error=True): self._report("# Errors found.", logging.ERROR) return False return True def _check_master_info_type(self, halt=True): """Check for master information set to TABLE if rpl_user not provided halt[in] if True, raise error on failure. Default is True Returns bool - True if rpl_user is specified or False if rpl_user not specified and at least one slave does not have --master-info-repository=TABLE. """ error = "You must specify either the --rpl-user or set all slaves " + \ "to use --master-info-repository=TABLE." # Check for --master-info-repository=TABLE if rpl_user is None if self.rpl_user is None: if not self.topology.check_master_info_type("TABLE"): if halt: raise UtilRplError(error) self._report(error, logging.ERROR) return False return True def check_host_references(self): """Public method to access self.check_host_references() """ return self._check_host_references() def execute_command(self, command, options=None): """Execute a replication admin command This method executes one of the valid replication administration commands as described above. command[in] command to execute options[in] options dictionary. Returns bool - True = success, raise error on failure """ if options is None: options = {} # Raise error if command is not valid if command not in _VALID_COMMANDS: msg = "'%s' is not a valid command." % command self._report(msg, logging.CRITICAL) raise UtilRplError(msg) # Check privileges self._report("# Checking privileges.") full_check = command in ['failover', 'elect', 'switchover'] errors = self.topology.check_privileges(full_check) if len(errors): msg = "User %s on %s does not have sufficient privileges to " + \ "execute the %s command." for error in errors: self._report(msg % (error[0], error[1], command), logging.CRITICAL) raise UtilRplError("Not enough privileges to execute command.") self._report("Executing %s command..." 
% command, logging.INFO, False) # Execute the command if command in _SLAVE_COMMANDS: if command == 'reset': self.topology.run_cmd_on_slaves('stop') self.topology.run_cmd_on_slaves(command) elif command in 'gtid': self._show_gtid_data() elif command == 'health': self._show_health() elif command == 'switchover': self._switchover() elif command == 'elect': self._elect_slave() elif command == 'failover': self._failover(options=options) else: msg = "Command '%s' is not implemented." % command self._report(msg, logging.CRITICAL) raise UtilRplError(msg) if command in ['switchover', 'failover'] and \ not self.options.get("no_health", False): self._show_health() self._report("# ...done.") return True def auto_failover(self, interval): """Automatic failover Wrapper class for running automatic failover. See run_automatic_failover for details on implementation. This method ensures the registration/deregistration occurs regardless of exception or errors. interval[in] time in seconds to wait to check status of servers Returns bool - True = success, raises exception on error """ failover_mode = self.options.get("failover_mode", "auto") force = self.options.get("force", False) # Initialize a console console = FailoverConsole(self.topology.master, self.topology.get_health, self.topology.get_gtid_data, self.topology.get_server_uuids, self.options) # Check privileges self._report("# Checking privileges.") errors = self.topology.check_privileges(failover_mode != 'fail') if len(errors): for error in errors: msg = ("User {0} on {1}@{2} does not have sufficient " "privileges to execute the {3} command " "(required: {4}).").format(error[0], error[1], error[2], 'failover', error[3]) print("# ERROR: {0}".format(msg)) self._report(msg, logging.CRITICAL, False) raise UtilRplError("Not enough privileges to execute command.") # Unregister existing instances from slaves self._report("Unregistering existing instances from slaves.", logging.INFO, False) console.unregister_slaves(self.topology) # Register instance self._report("Registering instance on master.", logging.INFO, False) old_mode = failover_mode failover_mode = console.register_instance(force) if failover_mode != old_mode: self._report("Multiple instances of failover console found for " "master %s:%s." % (self.topology.master.host, self.topology.master.port), logging.WARN) print "If this is an error, restart the console with --force. " print "Failover mode changed to 'FAIL' for this instance. " print "Console will start in 10 seconds.", sys.stdout.flush() i = 0 while i < 9: time.sleep(1) sys.stdout.write('.') sys.stdout.flush() i += 1 print "starting Console." time.sleep(1) try: res = self.run_auto_failover(console, failover_mode) except: raise finally: try: # Unregister instance self._report("Unregistering instance on master.", logging.INFO, False) console.register_instance(True, False) self._report("Failover console stopped.", logging.INFO, False) except: pass return res def auto_failover_as_daemon(self): """Automatic failover Wrapper class for running automatic failover as daemon. This method ensures the registration/deregistration occurs regardless of exception or errors. 
Returns bool - True = success, raises exception on error """ # Initialize failover daemon failover_daemon = FailoverDaemon(self) res = None try: action = self.options.get("daemon") if action == "start": res = failover_daemon.start() elif action == "stop": res = failover_daemon.stop() elif action == "restart": res = failover_daemon.restart() else: # Start failover deamon in foreground res = failover_daemon.start(detach_process=False) except: try: # Unregister instance self._report("Unregistering instance on master.", logging.INFO, False) failover_daemon.register_instance(True, False) self._report("Failover daemon stopped.", logging.INFO, False) except: pass return res def run_auto_failover(self, console, failover_mode="auto"): """Run automatic failover This method implements the automatic failover facility. It uses the FailoverConsole class from the failover_console.py to implement all user interface commands and uses the existing failover() method of this class to conduct failover. When the master goes down, the method can perform one of three actions: 1) failover to list of candidates first then slaves 2) failover to list of candidates only 3) fail console[in] instance of the failover console class. Returns bool - True = success, raises exception on error """ pingtime = self.options.get("pingtime", 3) exec_fail = self.options.get("exec_fail", None) post_fail = self.options.get("post_fail", None) pedantic = self.options.get('pedantic', False) fail_retry = self.options.get('fail_retry', None) # Only works for GTID_MODE=ON if not self.topology.gtid_enabled(): msg = "Topology must support global transaction ids " + \ "and have GTID_MODE=ON." self._report(msg, logging.CRITICAL) raise UtilRplError(msg) # Require --master-info-repository=TABLE for all slaves if not self.topology.check_master_info_type("TABLE"): msg = "Failover requires --master-info-repository=TABLE for " + \ "all slaves." self._report(msg, logging.ERROR, False) raise UtilRplError(msg) # Check for mixing IP and hostnames if not self._check_host_references(): print("# WARNING: {0}".format(HOST_IP_WARNING)) self._report(HOST_IP_WARNING, logging.WARN, False) print("#\n# Failover console will start in {0} seconds.".format( WARNING_SLEEP_TIME)) time.sleep(WARNING_SLEEP_TIME) # Check existence of errant transactions on slaves errant_tnx = self.topology.find_errant_transactions() if errant_tnx: print("# WARNING: {0}".format(_ERRANT_TNX_ERROR)) self._report(_ERRANT_TNX_ERROR, logging.WARN, False) for host, port, tnx_set in errant_tnx: errant_msg = (" - For slave '{0}@{1}': " "{2}".format(host, port, ", ".join(tnx_set))) print("# {0}".format(errant_msg)) self._report(errant_msg, logging.WARN, False) # Raise an exception (to stop) if pedantic mode is ON if pedantic: raise UtilRplError("{0} Note: If you want to ignore this " "issue, please do not use the --pedantic " "option.".format(_ERRANT_TNX_ERROR)) self._report("Failover console started.", logging.INFO, False) self._report("Failover mode = %s." % failover_mode, logging.INFO, False) # Main loop - loop and fire on interval. done = False first_pass = True failover = False while not done: # Use try block in case master class has gone away. try: old_host = self.master.host old_port = self.master.port except: old_host = "UNKNOWN" old_port = "UNKNOWN" # If a failover script is provided, check it else check master # using connectivity checks. 
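# --- Illustrative sketch (not part of the original module) ----------------
# The failover-check script handling just below first verifies that the
# file exists and is executable, then branches on the script's exit
# status. A standalone version of that guard; the script path and
# arguments in the usage comment are hypothetical.
import os
import subprocess

def run_check_script(path, args):
    """Run an external check script; True means the check passed (exit 0)."""
    if not os.path.isfile(path):
        raise RuntimeError("Script does not exist: {0}".format(path))
    if not os.access(path, os.X_OK):
        raise RuntimeError("Script is not executable: {0}".format(path))
    return subprocess.call([path] + [str(a) for a in args]) == 0

# Example: run_check_script("/usr/local/bin/check_master.sh",
#                           ["old-host", 3306])  # hypothetical
# ---------------------------------------------------------------------------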
if exec_fail is not None: # Execute failover check script if not os.path.isfile(exec_fail): message = EXTERNAL_SCRIPT_DOES_NOT_EXIST.format( path=exec_fail) self._report(message, logging.CRITICAL, False) raise UtilRplError(message) elif not os.access(exec_fail, os.X_OK): message = INSUFFICIENT_FILE_PERMISSIONS.format( path=exec_fail, permissions='execute') self._report(message, logging.CRITICAL, False) raise UtilRplError(message) else: self._report("# Spawning external script for failover " "checking.") res = execute_script(exec_fail, None, [old_host, old_port], self.verbose) if res == 0: self._report("# Failover check script completed Ok. " "Failover averted.") else: self._report("# Failover check script failed. " "Failover initiated.", logging.WARN) failover = True else: # Check the master. If not alive, wait for pingtime seconds # and try again. if self.topology.master is not None and \ not self.topology.master.is_alive(): msg = "Master may be down. Waiting for %s seconds." % \ pingtime self._report(msg, logging.INFO, False) time.sleep(pingtime) try: self.topology.master.connect() except: pass # If user specified a master fail retry, wait for the # predetermined time and attempt to check the master again. if fail_retry is not None and \ not self.topology.master.is_alive(): msg = "Master is still not reachable. Waiting for %s " \ "seconds to retry detection." % fail_retry self._report(msg, logging.INFO, False) time.sleep(fail_retry) try: self.topology.master.connect() except: pass # Check the master again. If no connection or lost connection, # try ping. This performs the timeout threshold for detecting # a down master. If still not alive, try to reconnect and if # connection fails after 3 attempts, failover. if self.topology.master is None or \ not ping_host(self.topology.master.host, pingtime) or \ not self.topology.master.is_alive(): failover = True i = 0 while i < 3: try: self.topology.master.connect() failover = False # Master is now connected again break except: pass time.sleep(pingtime) i += 1 if failover: self._report("Failed to reconnect to the master after " "3 attempts.", logging.INFO) else: self._report("Master is Ok. Resuming watch.", logging.INFO) if failover: self._report("Master is confirmed to be down or unreachable.", logging.CRITICAL, False) try: self.topology.master.disconnect() except: pass console.clear() if failover_mode == 'auto': self._report("Failover starting in 'auto' mode...") res = self.topology.failover(self.candidates, False) elif failover_mode == 'elect': self._report("Failover starting in 'elect' mode...") res = self.topology.failover(self.candidates, True) else: msg = _FAILOVER_ERROR % ("Master has failed and automatic " "failover is not enabled. ") self._report(msg, logging.CRITICAL, False) # Execute post failover script self.topology.run_script(post_fail, False, [old_host, old_port]) raise UtilRplError(msg, _FAILOVER_ERRNO) if not res: msg = _FAILOVER_ERROR % ("An error was encountered " "during failover. ") self._report(msg, logging.CRITICAL, False) # Execute post failover script self.topology.run_script(post_fail, False, [old_host, old_port]) raise UtilRplError(msg) self.master = self.topology.master console.master = self.master self.topology.remove_discovered_slaves() self.topology.discover_slaves() console.list_data = None print "\nFailover console will restart in 5 seconds."
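            # Illustrative note (derived from the run_script call below,
            # not authoritative): after a successful failover the
            # post-failover script receives four arguments -- the old
            # master's host and port followed by the new master's host and
            # port -- e.g. roughly:
            #
            #   post_fail_script old-master.example.com 3306 \
            #                    new-master.example.com 3306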
time.sleep(5) console.clear() failover = False # Execute post failover script self.topology.run_script(post_fail, False, [old_host, old_port, self.master.host, self.master.port]) # Unregister existing instances from slaves self._report("Unregistering existing instances from slaves.", logging.INFO, False) console.unregister_slaves(self.topology) # Register instance on the new master self._report("Registering instance on master.", logging.INFO, False) failover_mode = console.register_instance() # discover slaves if option was specified at startup elif (self.options.get("discover", None) is not None and not first_pass): # Force refresh of health list if new slaves found if self.topology.discover_slaves(): console.list_data = None # Check existence of errant transactions on slaves errant_tnx = self.topology.find_errant_transactions() if errant_tnx: if pedantic: print("# WARNING: {0}".format(_ERRANT_TNX_ERROR)) self._report(_ERRANT_TNX_ERROR, logging.WARN, False) for host, port, tnx_set in errant_tnx: errant_msg = (" - For slave '{0}@{1}': " "{2}".format(host, port, ", ".join(tnx_set))) print("# {0}".format(errant_msg)) self._report(errant_msg, logging.WARN, False) # Raise an exception (to stop) if pedantic mode is ON raise UtilRplError("{0} Note: If you want to ignore this " "issue, please do not use the " "--pedantic " "option.".format(_ERRANT_TNX_ERROR)) else: if self.logging: warn_msg = ("{0} Check log for more " "details.".format(_ERRANT_TNX_ERROR)) else: warn_msg = _ERRANT_TNX_ERROR console.add_warning('errant_tnx', warn_msg) self._report(_ERRANT_TNX_ERROR, logging.WARN, False) for host, port, tnx_set in errant_tnx: errant_msg = (" - For slave '{0}@{1}': " "{2}".format(host, port, ", ".join(tnx_set))) self._report(errant_msg, logging.WARN, False) else: console.del_warning('errant_tnx') res = console.display_console() if res is not None: # None = normal timeout, keep going if not res: return False # Errors detected done = True # User has quit first_pass = False return True mysql-utilities-1.6.4/mysql/utilities/command/__init__.py0000644001577100752670000000003612747670311023300 0ustar pb2usercommon"""mysql.utilities.command""" mysql-utilities-1.6.4/mysql/utilities/command/userclone.py0000644001577100752670000002264712747670311023554 0ustar pb2usercommon# # Copyright (c) 2010, 2014, Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This file contains the clone user operation. It is used to clone an existing MySQL user to one or more new user accounts copying all grant statements to the new users. """ import sys from mysql.utilities.exception import UtilError, UtilDBError from mysql.utilities.common.server import connect_servers from mysql.utilities.common.format import print_list from mysql.utilities.common.user import User def _show_user_grants(source, user_source, base_user, verbosity): """Show grants for a specific user. 
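    source[in]       Server instance for the source server.
    user_source[in]  an existing User instance for base_user, or None to
                     create a new one.
    base_user[in]    the user account to dump grants for ('user'@'host').
    verbosity[in]    level of information to display.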
""" try: if not user_source: user_source = User(source, base_user, verbosity) print "# Dumping grants for user " + base_user user_source.print_grants() except UtilError: print "# Cannot show grants for user %s." % base_user + \ "Please check user and host for valid names." def show_users(src_val, verbosity, fmt, dump=False): """Show all users except root and anonymous users on the server. src_val[in] a dictionary containing connection information for the source including: (user, password, host, port, socket) verbosty[in] level of information to display fmt[in] format of output dump[in] if True, dump the grants for all users default = False """ conn_options = { 'version': "5.1.0", } servers = connect_servers(src_val, None, conn_options) source = servers[0] if verbosity <= 1: _QUERY = """ SELECT user, host FROM mysql.user WHERE user.user != '' """ cols = ("user", "host") else: _QUERY = """ SELECT user.user, user.host, db FROM mysql.user LEFT JOIN mysql.db ON user.user = db.user AND user.host = db.host WHERE user.user != '' """ cols = ("user", "host", "database") users = source.exec_query(_QUERY) print "# All Users:" print_list(sys.stdout, fmt, cols, users) if dump: for user in users: _show_user_grants(source, None, "'%s'@'%s'" % user[0:2], verbosity) def clone_user(src_val, dest_val, base_user, new_user_list, options): """Clone a user to one or more new user accounts This method will create one or more new user accounts copying the grant statements from a given user. If source and destination are the same, the copy will occur on a single server otherwise, the caller may specify a destination server to where the user accounts will be copied. NOTES: The user is responsible for making sure the databases and objects referenced in the cloned GRANT statements exist prior to running this utility. src_val[in] a dictionary containing connection information for the source including: (user, password, host, port, socket) dest_val[in] a dictionary containing connection information for the destination including: (user, password, host, port, socket) base_user[in] the user account on the source machine to be used as the template for the new users user_list[in] a list of new user accounts in the form: (username:password@host) options[in] optional parameters dictionary including: dump_sql - if True, print grants for base user (no new users are created) force - drop new users if they exist verbosity - print add'l information during operation quiet - do not print information during operation Note: Error messages are printed regardless global_privs - include global privileges (i.e. user@%) Returns bool True = success, raises UtilError if error """ dump_sql = options.get("dump", False) overwrite = options.get("overwrite", False) verbosity = options.get("verbosity", False) quiet = options.get("quiet", False) global_privs = options.get("global_privs", False) # Don't require destination for dumping base user grants conn_options = { 'quiet': quiet, 'version': "5.1.0", } # Add ssl certs if there are any. conn_options['ssl_cert'] = options.get("ssl_cert", None) conn_options['ssl_ca'] = options.get("ssl_ca", None) conn_options['ssl_key'] = options.get("ssl_key", None) if dump_sql: servers = connect_servers(src_val, None, conn_options) else: servers = connect_servers(src_val, dest_val, conn_options) source = servers[0] destination = servers[1] if destination is None: destination = servers[0] # Create an instance of the user class for source. 
user_source = User(source, base_user, verbosity) # Create an instance of the user class for destination. user_dest = User(destination, base_user, verbosity) # First, find out which user will be granting privileges on the # destination server. try: res = destination.exec_query("SELECT CURRENT_USER()") except UtilDBError as err: raise UtilError("Unable to obtain information about the account used " "to connect to the destination server: " "{0}".format(err.errmsg)) # Create an instance of the user who will be giving the privileges. user_priv_giver = User(destination, res[0][0], verbosity) # Check to ensure base user exists. if not user_source.exists(base_user): raise UtilError("Base user does not exist!") # Process dump operation if dump_sql and not quiet: _show_user_grants(source, user_source, base_user, verbosity) return True # Check to ensure new users don't exist. if not overwrite: for new_user in new_user_list: if user_dest.exists(new_user): raise UtilError("User %s already exists. Use --force " "to drop and recreate user." % new_user) if not quiet: print "# Cloning %d users..." % (len(new_user_list)) # Check privileges to create/delete users. can_create = can_drop = False if user_priv_giver.has_privilege('*', '*', "CREATE_USER"): can_create = can_drop = True else: if user_priv_giver.has_privilege('mysql', '*', "INSERT"): can_create = True if user_priv_giver.has_privilege('mysql', '*', "DELETE"): can_drop = True if not can_create: # Destination user cannot create new users. raise UtilError("Destination user {0}@{1} needs either the " "'CREATE USER' on *.* or 'INSERT' on mysql.* " "privilege to create new users." "".format(user_priv_giver.user, user_priv_giver.host)) # Perform the clone here. Loop through new users and clone. for new_user in new_user_list: if not quiet: print "# Cloning %s to user %s " % (base_user, new_user) # Check to see if user exists. if user_dest.exists(new_user): if not can_drop: # Destination user cannot drop existing users. raise UtilError("Destination user {0}@{1} needs either the " "'CREATE USER' on *.* or 'DELETE' on mysql.* " "privilege to drop existing users." "".format(user_priv_giver.user, user_priv_giver.host)) user_dest.drop(new_user) # Clone user. try: missing_privs = user_priv_giver.missing_user_privileges( user_source, plus_grant_option=True) if not missing_privs: user_source.clone(new_user, destination, global_privs) else: # Our user lacks some privileges, let's create an informative # error message pluralize = '' if len(missing_privs) == 1 else 's' missing_privs_str = ', '.join( ["{0} on {1}.{2}".format(priv, db, table) for priv, db, table in missing_privs]) raise UtilError("User {0} cannot be cloned because destination" " user {1}@{2} is missing the following " "privilege{3}: {4}." "".format(new_user, user_priv_giver.user, user_priv_giver.host, pluralize, missing_privs_str)) except UtilError: raise if not quiet: print "# ...done." return True mysql-utilities-1.6.4/mysql/utilities/command/binlog_admin.py0000644001577100752670000011145412747670311024172 0ustar pb2usercommon# # Copyright (c) 2014, Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This file contains command methods for working with binary log files. For example: to relocate binary log files to a new location. """ import logging import os.path import shutil from mysql.utilities.common.binary_log_file import ( is_binary_log_filename, filter_binary_logs_by_sequence, filter_binary_logs_by_date, get_index_file, LOG_TYPE_ALL, LOG_TYPE_BIN, LOG_TYPE_RELAY, move_binary_log ) from mysql.utilities.common.binlog import ( determine_purgeable_binlogs, get_active_binlog_and_size, get_binlog_info, purge, rotate, ) from mysql.utilities.common.server import Server from mysql.utilities.common.topology import Topology from mysql.utilities.common.user import check_privileges from mysql.utilities.exception import UtilError BINLOG_OP_MOVE = "perform binary log move" BINLOG_OP_MOVE_DESC = "move binary logs" BINLOG_OP_PURGE = "perform binary log purge" BINLOG_OP_PURGE_DESC = "purge binary logs" BINLOG_OP_ROTATE = "perform binary log rotation" BINLOG_OP_ROTATE_DESC = "rotate binary logs" _ACTION_DATADIR_USED = ("The 'datadir' will be used as base directory for " "{file_type} files.") _ACTION_SEARCH_INDEX = ("The utility will try to find the index file in the " "base directory for {file_type} files.") _CAN_NOT_VERIFY_SLAVES_STATUS = ( "Cannot verify the slaves status for the given master {host}:{port}. " "Make sure the slaves are active and accessible." ) _CAN_NOT_VERIFY_SLAVE_STATUS = ( "Cannot verify the status for slave {host}:{port}. " "Make sure the slave is active and accessible." ) _COULD_NOT_FIND_BINLOG = ( "WARNING: Could not find the given binlog name: '{bin_name}' " "in the binlog files listed in the {server_name}: {host}:{port}" ) _ERR_MSG_MOVE_FILE = "Unable to move binary file: {filename}\n{error}" _INFO_MSG_APPLY_FILTERS = ("# Applying {filter_type} filter to {file_type} " "files...") _INFO_MSG_FLUSH_LOGS = "# Flushing {log_type} logs..." _INFO_MSG_INDEX_FILE = "# Index file found for {file_type}: {index_file}" _INFO_MSG_MOVE_FILES = "# Moving {file_type} files..." _INFO_MSG_NO_FILES_TO_MOVE = "# No {file_type} files will be moved." _WARN_MSG_FLUSH_LOG_TYPE = ( "# WARNING: Flush for {log_type} logs is not available for server " "'{host}:{port}' (operation skipped). Requires server version >= 5.5.3." ) _WARN_MSG_VAL_NOT_REQ_FOR_SERVER = ( "# WARNING: The {value} is not required for server versions >= " "{min_version} (value ignored). Replaced by value for variable " "'{var_name}'." ) _WARN_MSG_VAR_NOT_AVAILABLE = ( "# WARNING: Variable '{var_name}' is not available for server " "'{host}:{port}'. Requires server version >= {min_version}. {action}" ) _WARN_MSG_NO_FILE = "# WARNING: No {file_type} files found to move." def _move_binlogs(source, destination, log_type, options, basename=None, index_file=None, skip_latest=False): """Move binary log files of the specified type. This auxiliary function moves the binary log files of a specific type (i.e., binary or relay) from the given source to the specified destination directory. It gets the files only for the specified binary log type and applies any filtering in accordance with the specified options. Resulting files are moved and the respective index file updated accordingly. source[in] Source location of the binary log files to move.
destination[in] Destination directory for the binary log files. log_type[in] Type of the binary log files ('bin' or 'relay'). options[in] Dictionary of options (modified_before, sequence, verbosity). basename[in] Base name for the binary log files, i.e. filename without the extension (sequence number). index_file[in] Path of the binary log index file. If not specified it is assumed to be located in the source directory and determined based on the basename of the first found binary log file. skip_latest[in] Bool value indication if the latest binary log file (with the higher sequence value; in use by the server) will be skipped or not. By default = False, meaning that no binary log file is skipped. Returns the number of files moved. """ verbosity = options['verbosity'] binlog_files = [] file_type = '{0}-log'.format(log_type) if basename: # Ignore path from basename if specified, source is used instead. _, basename = os.path.split(basename) # Get binary log files to move. for _, _, filenames in os.walk(source): for f_name in sorted(filenames): if is_binary_log_filename(f_name, log_type, basename): binlog_files.append(f_name) break if skip_latest: # Skip last file (with the highest sequence). # Note; filenames are sorted by ascending order. binlog_files = binlog_files[:-1] if not binlog_files: # No binary log files found to move. print(_WARN_MSG_NO_FILE.format(file_type=file_type)) else: # Apply filters. sequence = options.get('sequence', None) if sequence: print("#") print(_INFO_MSG_APPLY_FILTERS.format(filter_type='sequence', file_type=file_type)) binlog_files = filter_binary_logs_by_sequence(binlog_files, sequence) modified_before = options.get('modified_before', None) if modified_before: print("#") print(_INFO_MSG_APPLY_FILTERS.format(filter_type='modified date', file_type=file_type)) binlog_files = filter_binary_logs_by_date(binlog_files, source, modified_before) # Move files. print("#") if binlog_files: if index_file is None: # Get binary log index file. index_file = get_index_file(source, binlog_files[0]) if verbosity > 0: print(_INFO_MSG_INDEX_FILE.format(file_type=file_type, index_file=index_file)) print("#") print(_INFO_MSG_MOVE_FILES.format(file_type=file_type)) for f_name in binlog_files: try: print("# - {0}".format(f_name)) move_binary_log(source, destination, f_name, index_file) except (shutil.Error, IOError) as err: raise UtilError(_ERR_MSG_MOVE_FILE.format(filename=f_name, error=err)) return len(binlog_files) else: print(_INFO_MSG_NO_FILES_TO_MOVE.format(file_type=file_type)) return 0 def move_binlogs(binlog_dir, destination, options, bin_basename=None, bin_index=None, relay_basename=None, relay_index=None): """Move binary logs from the given source to the specified destination. This function relocates the binary logs from the given source path to the specified destination directory according to the specified options. binlog_dir[in] Path of the source directory for the binary log files to move. destination[in] Path of the destination directory for the binary log files. options[in] Dictionary of options (log_type, modified_before, sequence, verbosity). bin_basename[in] Base name for the binlog files, i.e. filename without the extension (sequence number). bin_index[in] Path of the binlog index file. If not specified it is assumed to be located in the source directory. relay_basename[in] Base name for the relay log files, i.e. filename without the extension (sequence number). relay_index[in] Path of the relay log index file. 
If not specified it is assumed to be located in the source directory. skip_latest[in] Bool value indication if the latest binary log file (with the higher sequence value; in use by the server) will be skipped or not. By default = False, meaning that no binary log file is skipped. """ log_type = options['log_type'] # Move binlog files. if log_type in (LOG_TYPE_BIN, LOG_TYPE_ALL): _move_binlogs(binlog_dir, destination, LOG_TYPE_BIN, options, basename=bin_basename, index_file=bin_index) print("#") # Move relay log files. if log_type in (LOG_TYPE_RELAY, LOG_TYPE_ALL): _move_binlogs(binlog_dir, destination, LOG_TYPE_RELAY, options, basename=relay_basename, index_file=relay_index) print("#") print("#...done.\n#") def _check_privileges_to_move_binlogs(server, options): """Check required privileges to move binary logs from server. This method check if the used user possess the required privileges to relocate binary logs from the server. More specifically, the following privilege is required: RELOAD (to flush the binary logs). An exception is thrown if the user doesn't have enough privileges. server[in] Server instance to check. options[in] Dictionary of options (skip_flush_binlogs, verbosity). """ skip_flush = options['skip_flush_binlogs'] verbosity = options['verbosity'] if not skip_flush: check_privileges(server, BINLOG_OP_MOVE, ['RELOAD'], BINLOG_OP_MOVE_DESC, verbosity) def move_binlogs_from_server(server_cnx_val, destination, options, bin_basename=None, bin_index=None, relay_basename=None): """Relocate binary logs from the given server to a new location. This function relocate the binary logs from a MySQL server to the specified destination directory, attending to the specified options. server_cnx_val[in] Dictionary with the connection values for the server. destination[in] Path of the destination directory for the binary log files. options[in] Dictionary of options (log_type, skip_flush_binlogs, modified_before, sequence, verbosity). bin_basename[in] Base name for the binlog files, i.e., same as the value for the server option --log-bin. It replaces the server variable 'log_bin_basename' for versions < 5.6.2, otherwise it is ignored. bin_index[in] Path of the binlog index file. It replaces the server variable 'log_bin_index' for versions < 5.6.4, otherwise it is ignored. relay_basename[in] Base name for the relay log files, i.e., filename without the extension (sequence number). Same as the value for the server option --relay-log. It replaces the server variable 'relay_log_basename' for versions < 5.6.2, otherwise it is ignored. """ log_type = options.get('log_type', LOG_TYPE_BIN) skip_flush = options['skip_flush_binlogs'] verbosity = options['verbosity'] # Connect to server server_options = { 'conn_info': server_cnx_val, } srv = Server(server_options) srv.connect() # Check if the server is running locally (not remote server). if not srv.is_alias('localhost'): raise UtilError("You are using a remote server. This utility must be " "run on the local server. It does not support remote " "access to the binary log files.") # Check privileges. _check_privileges_to_move_binlogs(srv, options) # Process binlog files. if log_type in (LOG_TYPE_BIN, LOG_TYPE_ALL): # Get log_bin_basename (available since MySQL 5.6.2). 
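        # Illustrative sketch (a summary of the branches below, not
        # additional behavior): the lookup prefers the server variable when
        # available and falls back to user input or the datadir, e.g.
        #
        #   if srv.check_version_compat(5, 6, 2):
        #       base = srv.select_variable('log_bin_basename')
        #   elif bin_basename:
        #       base = bin_basename
        #   else:
        #       base = srv.select_variable('datadir')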
if srv.check_version_compat(5, 6, 2): if bin_basename: print(_WARN_MSG_VAL_NOT_REQ_FOR_SERVER.format( value='bin basename', min_version='5.6.2', var_name='log_bin_basename')) binlog_basename = srv.select_variable('log_bin_basename') if verbosity > 0: print("#") print("# log_bin_basename: {0}".format(binlog_basename)) binlog_source, binlog_file = os.path.split(binlog_basename) # Get log_bin_index (available since MySQL 5.6.4). if srv.check_version_compat(5, 6, 4): if bin_index: print(_WARN_MSG_VAL_NOT_REQ_FOR_SERVER.format( value='bin index', min_version='5.6.4', var_name='log_bin_index')) binlog_index = srv.select_variable('log_bin_index') else: binlog_index = None action = _ACTION_SEARCH_INDEX.format(file_type='bin-log') print(_WARN_MSG_VAR_NOT_AVAILABLE.format( var_name='log_bin_index', host=srv.host, port=srv.port, min_version='5.6.4', action=action)) if verbosity > 0: print("# log_bin_index: {0}".format(binlog_index)) else: if bin_basename: binlog_source, binlog_file = os.path.split(bin_basename) else: action = _ACTION_DATADIR_USED.format(file_type='bin-log') print(_WARN_MSG_VAR_NOT_AVAILABLE.format( var_name='log_bin_basename', host=srv.host, port=srv.port, min_version='5.6.2', action=action)) # Get datadir value. binlog_source = srv.select_variable('datadir') binlog_file = None if verbosity > 0: print("#") print("# datadir: {0}".format(binlog_source)) binlog_index = bin_index # Move binlog files. num_files = _move_binlogs( binlog_source, destination, LOG_TYPE_BIN, options, basename=binlog_file, index_file=binlog_index, skip_latest=True) print("#") # Flush binary logs to reload server's cache after move. if not skip_flush and num_files > 0: # Note: log_type for FLUSH available since MySQL 5.5.3. if srv.check_version_compat(5, 5, 3): print(_INFO_MSG_FLUSH_LOGS.format(log_type='binary')) srv.flush_logs(log_type='BINARY') else: print(_WARN_MSG_FLUSH_LOG_TYPE.format(log_type='binary', host=srv.host, port=srv.port)) print("#") if log_type in (LOG_TYPE_RELAY, LOG_TYPE_ALL): # Get relay_log_basename (available since MySQL 5.6.2). if srv.check_version_compat(5, 6, 2): if relay_basename: print(_WARN_MSG_VAL_NOT_REQ_FOR_SERVER.format( value='relay basename', min_version='5.6.2', var_name='relay_log_basename')) relay_log_basename = srv.select_variable('relay_log_basename') if verbosity > 0: print("#") print("# relay_log_basename: {0}".format(relay_log_basename)) relay_source, relay_file = os.path.split(relay_log_basename) else: if relay_basename: relay_source, relay_file = os.path.split(relay_basename) else: action = _ACTION_DATADIR_USED.format(file_type='relay-log') print(_WARN_MSG_VAR_NOT_AVAILABLE.format( var_name='relay_log_basename', host=srv.host, port=srv.port, min_version='5.6.2', action=action)) # Get datadir value. relay_source = srv.select_variable('datadir') relay_file = None if verbosity > 0: print("#") print("# datadir: {0}".format(relay_source)) # Get relay_log_index (available for all supported versions). relay_log_index = srv.select_variable('relay_log_index') if verbosity > 0: print("# relay_log_index: {0}".format(relay_log_index)) # Move relay log files. num_files = _move_binlogs( relay_source, destination, LOG_TYPE_RELAY, options, basename=relay_file, index_file=relay_log_index, skip_latest=True) print("#") # Flush relay logs to reload server's cache after move. if not skip_flush and num_files > 0: # Note: log_type for FLUSH available since MySQL 5.5.3.
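            # Illustrative note (an assumption about the helper, not in the
            # original source): flush_logs(log_type='RELAY') is expected to
            # issue the SQL statement
            #
            #   FLUSH RELAY LOGS;
            #
            # whose log_type qualifier is only understood by servers
            # >= 5.5.3, hence the version gate below.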
if srv.check_version_compat(5, 5, 3): print(_INFO_MSG_FLUSH_LOGS.format(log_type='relay')) srv.flush_logs(log_type='RELAY') else: print(_WARN_MSG_FLUSH_LOG_TYPE.format(log_type='relay', host=srv.host, port=srv.port)) print("#") print("#...done.\n#") def _report_binlogs(binlog_list, reporter, removed=False): """Reports the binary files available and removed. binlog_list[in] A list of binlog file names. reporter[in] A reporter that receives the messages as a parameter removed[in] The given list of binlog file names are removed files. Default is False, meaning files are available. Uses the reporter to report the binary files available and removed. """ if removed: msg = ("binlog file", "purged") else: msg = ("binlog file", "available") if len(binlog_list) == 1: reporter("# {0} {1}: {2}" "".format(msg[0].capitalize(), msg[1], binlog_list[0])) if len(binlog_list) > 1: end_range = "from {0} to {1}".format(binlog_list[0], binlog_list[-1]) reporter("# Range of {0}s {1}: {2}" "".format(msg[0], msg[1], end_range)) def binlog_purge(server_cnx_val, master_cnx_val, slaves_cnx_val, options): """Purge binary log. Purges the binary logs from a server; it will purge all of the binlogs older than the active binlog file or the given target binlog index. For a master server, it determines the latest log file to purge among all the slaves, which becomes the target file to purge binary logs to, in case no other file is specified. server_cnx_val[in] Server connection dictionary. master_cnx_val[in] Master server connection dictionary. slaves_cnx_val[in] Slave server connection dictionary. options[in] Options dictionary. to_binlog_name The target binlog index, in case the user doesn't want to use the active binlog file or the index last in use in a replication scenario. verbosity print extra data during operations default level value = 0 discover discover the list of slaves associated to the specified login (user and password). dry_run Don't actually purge the binlogs; instead print information about which files would be purged. """ assert not (server_cnx_val is None and master_cnx_val is None), \ "At least one of server_cnx_val or master_cnx_val must be a valid"\ " dictionary with server connection values" if master_cnx_val is not None: purger = RPLBinaryLogPurge(master_cnx_val, slaves_cnx_val, options) else: purger = BinaryLogPurge(server_cnx_val, options) purger.purge() class BinaryLogPurge(object): """BinaryLogPurge """ def __init__(self, server_cnx_val, options): """Initiator. server_cnx_val[in] Server connection dictionary. options[in] Options dictionary. """ self.server_cnx_val = server_cnx_val self.server = None self.options = options self.verbosity = self.options.get("verbosity", 0) self.quiet = self.options.get("quiet", False) self.logging = self.options.get("logging", False) self.dry_run = self.options.get("dry_run", 0) self.to_binlog_name = self.options.get("to_binlog_name", False) def _report(self, message, level=logging.INFO, print_msg=True): """Log message if logging is on. This method will log the message presented if the log is turned on. Specifically, if options['log_file'] is not None. It will also print the message to stdout. message[in] Message to be printed. level[in] Level of message to log. Default = INFO. print_msg[in] If True, print the message to stdout. Default = True. """ # First, print the message.
if print_msg and not self.quiet: print(message) # Now log message if logging turned on if self.logging: logging.log(int(level), message.strip("#").strip(" ")) def get_target_binlog_index(self, binlog_file_name): """Retrieves the target binlog file index. Retrieves the target binlog file index that will be used in the purge query; by default the latest log not in use, unless the user specifies a different target, which is validated against the server's binlog base name. binlog_file_name[in] the binlog base file name used by the server. Returns the target index binlog file """ if self.to_binlog_name: to_binlog_name = self.to_binlog_name.split('.')[0] if to_binlog_name != binlog_file_name: raise UtilError( "The given binlog file name: '{0}' differs " "from the one used by the server: '{1}'" "". format(to_binlog_name, binlog_file_name)) else: to_binlog_index = int(self.to_binlog_name.split('.')[1]) return to_binlog_index return None def _purge(self, index_last_in_use, active_binlog_file, binlog_file_name, target_binlog_index=None, server=None, server_is_master=False): """The inner purge method. Purges the binary logs from the given server; it will purge all of the binlogs older than the active_binlog_file or up to target_binlog_index. index_last_in_use[in] The index of the latest binary log not in use. In case of a Master, it must be the latest binlog caught by all the slaves. active_binlog_file[in] Current active binlog file. binlog_file_name[in] Binlog base file name. target_binlog_index[in] The target binlog index, in case the user doesn't want to use index_last_in_use. By default None. server[in] Server object to purge the binlogs from; by default self.server is used. server_is_master[in] Indicates if the given server is a Master, used for report purposes. By default False. """ if server is None: server = self.server if server_is_master: server_name = "master" else: server_name = "server" # The purge_to_binlog file used in the purge query, based on the # earliest log not in use z_len = len(active_binlog_file.split('.')[1]) purge_to_binlog = ( "{0}.{1}".format(binlog_file_name, repr(index_last_in_use).zfill(z_len)) ) server_binlogs_list = server.get_server_binlogs_list() if self.verbosity >= 1: _report_binlogs(server_binlogs_list, self._report) # The last_binlog_not_in_use used for information purposes index_last_not_in_use = index_last_in_use - 1 last_binlog_not_in_use = ( "{0}.{1}".format(binlog_file_name, repr(index_last_not_in_use).zfill(z_len)) ) if server_is_master: self._report("# Latest binlog file replicated by all slaves: " "{0}".format(last_binlog_not_in_use)) if target_binlog_index is None: # Purge to latest binlog not in use if self.verbosity > 0: self._report("# Latest not active binlog" " file: {0}".format(last_binlog_not_in_use)) # last_binlog_not_in_use purge(server, purge_to_binlog, server_binlogs_list, reporter=self._report, dryrun=self.dry_run, verbosity=self.verbosity) else: purge_to_binlog = ( "{0}.{1}".format(binlog_file_name, repr(target_binlog_index).zfill(z_len)) ) if purge_to_binlog not in server_binlogs_list: self._report( _COULD_NOT_FIND_BINLOG.format(bin_name=self.to_binlog_name, server_name=server_name, host=server.host, port=server.port)) return if target_binlog_index > index_last_in_use: self._report("WARNING: The given binlog name: '{0}' is " "required for one or more slaves, the Utility " "will purge to binlog '{1}' instead."
"".format(self.to_binlog_name, last_binlog_not_in_use)) target_binlog_index = last_binlog_not_in_use # last_binlog_not_in_use purge(server, purge_to_binlog, server_binlogs_list, reporter=self._report, dryrun=self.dry_run, verbosity=self.verbosity) server_binlogs_list_after = server.get_server_binlogs_list() if self.verbosity >= 1: _report_binlogs(server_binlogs_list_after, self._report) for binlog in server_binlogs_list_after: if binlog in server_binlogs_list: server_binlogs_list.remove(binlog) if self.verbosity >= 1 and server_binlogs_list: _report_binlogs(server_binlogs_list, self._report, removed=True) def purge(self): """The purge method for a standalone server. Determines the latest log file to purge, which becomes the target file to purge binary logs to in case no other file is specified. """ # Connect to server self.server = Server({'conn_info': self.server_cnx_val}) self.server.connect() # Check required privileges check_privileges(self.server, BINLOG_OP_PURGE, ["SUPER", "REPLICATION SLAVE"], BINLOG_OP_PURGE_DESC, self.verbosity, self._report) # retrieve active binlog info binlog_file_name, active_binlog_file, index_last_in_use = ( get_binlog_info(self.server, reporter=self._report, server_name="server", verbosity=self.verbosity) ) # Verify this server is not a Master. processes = self.server.exec_query("SHOW PROCESSLIST") binlog_dump = False for process in processes: if process[4] == "Binlog Dump": binlog_dump = True break hosts = self.server.exec_query("SHOW SLAVE HOSTS") if binlog_dump or hosts: if hosts and not self.verbosity: msg_v = " For more info use verbose option." else: msg_v = "" if self.verbosity >= 1: for host in hosts: self._report("# WARNING: Slave with id:{0} at {1}:{2} " "is connected to this server." "".format(host[0], host[1], host[2])) raise UtilError("The given server is acting as a master and has " "slaves connected to it. To proceed please use the" " --master option.{0}".format(msg_v)) target_binlog_index = self.get_target_binlog_index(binlog_file_name) self._purge(index_last_in_use, active_binlog_file, binlog_file_name, target_binlog_index) class RPLBinaryLogPurge(BinaryLogPurge): """RPLBinaryLogPurge class """ def __init__(self, master_cnx_val, slaves_cnx_val, options): """Initiator. master_cnx_val[in] Master server connection dictionary. slaves_cnx_val[in] Slave server connection dictionary. options[in] Options dictionary. """ super(RPLBinaryLogPurge, self).__init__(None, options) self.master_cnx_val = master_cnx_val self.slaves_cnx_val = slaves_cnx_val self.topology = None self.master = None self.slaves = None def purge(self): """The Purge Method Determines the latest log file to purge among all the slaves, which becomes the target file to purge binary logs to, in case no other file is specified. """ # Create a topology object to verify the connection between master and # slaves servers. self.topology = Topology(self.master_cnx_val, self.slaves_cnx_val, self.options, skip_conn_err=False) self.master = self.topology.master self.slaves = self.topology.slaves # Check required privileges check_privileges(self.master, BINLOG_OP_PURGE, ["SUPER", "REPLICATION SLAVE"], BINLOG_OP_PURGE_DESC, self.verbosity, self._report) # Get binlog info binlog_file_name, active_binlog_file, active_binlog_index = ( get_binlog_info(self.master, reporter=self._report, server_name="master", verbosity=self.verbosity) ) # Verify this Master has at least one slave. 
if not self.slaves: errormsg = ( _CAN_NOT_VERIFY_SLAVES_STATUS.format(host=self.master.host, port=self.master.port)) raise UtilError(errormsg) # Verify the given slaves are connected to this Master. if self.slaves_cnx_val and self.slaves: for slave in self.slaves: slave['instance'].is_configured_for_master(self.master, verify_state=False, raise_error=True) # IO running verification for --slaves option if not slave['instance'].is_connected(): if self.verbosity: self._report("# Slave '{0}:{1}' IO not running" "".format(slave['host'], slave['port'])) raise UtilError( _CAN_NOT_VERIFY_SLAVE_STATUS.format(host=slave['host'], port=slave['port']) ) target_binlog_index = self.get_target_binlog_index(binlog_file_name) index_last_in_use = determine_purgeable_binlogs( active_binlog_index, self.slaves, reporter=self._report, verbosity=self.verbosity ) self._purge(index_last_in_use, active_binlog_file, binlog_file_name, target_binlog_index, server=self.master, server_is_master=True) def binlog_rotate(server_val, options): """Rotate binary log. This function creates a BinaryLogRotate task. server_val[in] Dictionary with the connection values for the server. options[in] options for controlling behavior: logging If logging is active or not. verbose print extra data during operations (optional) default value = False min_size minimum size that the active binlog must have prior to rotate it. dry_run Don't actually rotate the active binlog, instead it will print information about file name and size. """ binlog_rotate = BinaryLogRotate(server_val, options) binlog_rotate.rotate() class BinaryLogRotate(object): """The BinaryLogRotate class represents a binary log rotation task. The rotate method performs the following tasks: - Retrieves the active binary log and file size. - If the minimum size is given, evaluate the active binlog file size, and if it is greater than the minimum size, rotation will occur. """ def __init__(self, server_cnx_val, options): """Initiator. server_cnx_val[in] Dictionary with the connection values for the server. options[in] options for controlling behavior: logging If logging is active or not. verbose print extra data during operations (optional) default value = False min_size minimum size that the active binlog must have prior to rotate it. dry_run Don't actually rotate the active binlog, instead it will print information about file name and size. """ # Connect to server self.server = Server({'conn_info': server_cnx_val}) self.server.connect() self.options = options self.verbosity = self.options.get("verbosity", 0) self.quiet = self.options.get("quiet", False) self.logging = self.options.get("logging", False) self.dry_run = self.options.get("dry_run", 0) self.binlog_min_size = self.options.get("min_size", False) def _report(self, message, level=logging.INFO, print_msg=True): """Log message if logging is on. This method will log the message presented if the log is turned on. Specifically, if options['log_file'] is not None. It will also print the message to stdout. message[in] Message to be printed. level[in] Level of message to log. Default = INFO. print_msg[in] If True, print the message to stdout. Default = True. """ # First, print the message. if print_msg and not self.quiet: print(message) # Now log message if logging turned on if self.logging: msg = message.strip("#").strip(" ") logging.log(int(level), msg) def rotate(self): """This Method runs the rotation. This method will use the methods from the common library to rotate the binary log.
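        Illustrative example (an assumption, not in the original source;
        connection values and sizes are hypothetical):

            options = {'min_size': 1048576, 'dry_run': True, 'verbosity': 1}
            rotator = BinaryLogRotate({'user': 'root', 'host': 'localhost',
                                       'port': 3306}, options)
            rotator.rotate()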
""" # Check required privileges check_privileges(self.server, BINLOG_OP_ROTATE, ["RELOAD", "REPLICATION CLIENT"], BINLOG_OP_ROTATE_DESC, self.verbosity, self._report) active_binlog, binlog_size = get_active_binlog_and_size(self.server) if self.verbosity: self._report("# Active binlog file: '{0}' (size: {1} bytes)'" "".format(active_binlog, binlog_size)) if self.binlog_min_size: rotated = rotate(self.server, self.binlog_min_size, reporter=self._report) else: rotated = rotate(self.server, reporter=self._report) if rotated: new_active_binlog, _ = get_active_binlog_and_size(self.server) if active_binlog == new_active_binlog: raise UtilError("Unable to rotate binlog file.") else: self._report("# The binlog file has been rotated.") if self.verbosity: self._report("# New active binlog file: '{0}'" "".format(new_active_binlog)) mysql-utilities-1.6.4/mysql/utilities/command/serverclone.py0000755001577100752670000005224512747670311024104 0ustar pb2usercommon# # Copyright (c) 2010, 2016, Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This file contains the clone server utility which launches a new instance of an existing server. """ import getpass import os import subprocess import tempfile import time import shlex import shutil from mysql.utilities.common.tools import (check_port_in_use, estimate_free_space, get_mysqld_version, get_tool_path) from mysql.utilities.common.messages import WARN_OPT_SKIP_INNODB from mysql.utilities.common.server import Server from mysql.utilities.exception import UtilError MAX_DATADIR_SIZE = 200 MAX_SOCKET_PATH_SIZE = 107 # Required free disk space in MB to create the data directory. REQ_FREE_SPACE = 120 LOW_SPACE_ERRR_MSG = ("The new data directory {directory} has low free space" "remaining, please free some space and try again. 
\n" "mysqlserverclone needs at least {megabytes} MB to run " "the new server instance.\nUse force option to ignore " "this Error.") # Set of sql statements to use during server bootstrap to create the # root@localhost user account for MySQL versions equal or greater than 5.7.5 _CREATE_ROOT_USER = [ "CREATE TEMPORARY TABLE tmp_user LIKE user;\n", ("REPLACE INTO tmp_user (Host, User, Password, Select_priv, Insert_priv, " "Update_priv, Delete_priv, Create_priv, Drop_priv, Reload_priv, " "Shutdown_priv, Process_priv, File_priv, Grant_priv, References_priv, " "Index_priv, Alter_priv, Show_db_priv, Super_priv, " "Create_tmp_table_priv, Lock_tables_priv, Execute_priv, Repl_slave_priv, " "Repl_client_priv, Create_view_priv, Show_view_priv, " "Create_routine_priv, Alter_routine_priv, Create_user_priv, Event_priv, " "Trigger_priv, Create_tablespace_priv, ssl_cipher, x509_issuer, " "x509_subject) VALUES ('localhost', 'root', '', 'Y', 'Y', 'Y', 'Y', 'Y', " "'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', " "'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', '','', '');\n"), "REPLACE INTO user SELECT * FROM tmp_user WHERE @had_user_table=0;\n" "DROP TABLE tmp_user;\n"] def clone_server(conn_val, options): """Clone an existing server This method creates a new instance of a running server using a datadir set to the new_data parametr, with a port set to new_port, server_id set to new_id and a root password of root_pass. You can also specify additional parameters for the mysqld command line as well as turn on verbosity mode to display more diagnostic information during the clone process. The method will build a new base database installation from the .sql files used to construct a new installation. Once the database is created, the server will be started. dest_val[in] a dictionary containing connection information including: (user, password, host, port, socket) options[in] dictionary of options: new_data[in] An existing path to create the new database and use as datadir for new instance (default = None) new_port[in] Port number for new instance (default = 3307) new_id[in] Server_id for new instance (default = 2) root_pass[in] Password for root user on new instance (optional) mysqld_options[in] Additional command line options for mysqld verbosity[in] Print additional information during operation (default is 0) quiet[in] If True, do not print messages. (default is False) cmd_file[in] file name to write startup command start_timeout[in] Number of seconds to wait for server to start """ new_data = os.path.abspath(options.get('new_data', None)) new_port = options.get('new_port', '3307') root_pass = options.get('root_pass', None) verbosity = options.get('verbosity', 0) user = options.get('user', 'root') quiet = options.get('quiet', False) cmd_file = options.get('cmd_file', None) start_timeout = int(options.get('start_timeout', 10)) mysqld_options = options.get('mysqld_options', '') force = options.get('force', False) quote_char = "'" if os.name == "posix" else '"' if not check_port_in_use('localhost', int(new_port)): raise UtilError("Port {0} in use. Please choose an " "available port.".format(new_port)) # Check if path to database files is greater than MAX_DIR_SIZE char, if len(new_data) > MAX_DATADIR_SIZE and not force: raise UtilError("The --new-data path '{0}' is too long " "(> {1} characters). Please use a smaller one. 
" "You can use the --force option to skip this " "check".format(new_data, MAX_DATADIR_SIZE)) # Clone running server if conn_val is not None: # Try to connect to the MySQL database server. server1_options = { 'conn_info': conn_val, 'role': "source", } server1 = Server(server1_options) server1.connect() if not quiet: print "# Cloning the MySQL server running on %s." % \ conn_val["host"] # Get basedir rows = server1.exec_query("SHOW VARIABLES LIKE 'basedir'") if not rows: raise UtilError("Unable to determine basedir of running server.") basedir = os.path.normpath(rows[0][1]) # Cloning downed or offline server else: basedir = os.path.abspath(options.get("basedir", None)) if not quiet: print "# Cloning the MySQL server located at %s." % basedir new_data_deleted = False # If datadir exists, has data, and user said it was Ok, delete it if os.path.exists(new_data) and options.get("delete", False) and \ os.listdir(new_data): new_data_deleted = True shutil.rmtree(new_data, True) # Create new data directory if it does not exist if not os.path.exists(new_data): if not quiet: print "# Creating new data directory..." try: os.mkdir(new_data) except OSError as err: raise UtilError("Unable to create directory '{0}', reason: {1}" "".format(new_data, err.strerror)) # After create the new data directory, check for free space, so the errors # regarding invalid or inaccessible path had been dismissed already. # If not force specified verify and stop if there is not enough free space if not force and os.path.exists(new_data) and \ estimate_free_space(new_data) < REQ_FREE_SPACE: # Don't leave empty folders, delete new_data if was previously deleted if os.path.exists(new_data) and new_data_deleted: shutil.rmtree(new_data, True) raise UtilError(LOW_SPACE_ERRR_MSG.format(directory=new_data, megabytes=REQ_FREE_SPACE)) # Check for warning of using --skip-innodb mysqld_path = get_tool_path(basedir, "mysqld") version_str = get_mysqld_version(mysqld_path) # convert version_str from str tuple to integer tuple if possible if version_str is not None: version = tuple([int(digit) for digit in version_str]) else: version = None if mysqld_options is not None and ("--skip-innodb" in mysqld_options or "--innodb" in mysqld_options) and version is not None and \ version >= (5, 7, 5): print("# WARNING: {0}".format(WARN_OPT_SKIP_INNODB)) if not quiet: print "# Configuring new instance..." print "# Locating mysql tools..." mysqladmin_path = get_tool_path(basedir, "mysqladmin") mysql_basedir = basedir if os.path.exists(os.path.join(basedir, "local/mysql/share/")): mysql_basedir = os.path.join(mysql_basedir, "local/mysql/") # for source trees elif os.path.exists(os.path.join(basedir, "/sql/share/english/")): mysql_basedir = os.path.join(mysql_basedir, "/sql/") locations = [ ("mysqld", mysqld_path), ("mysqladmin", mysqladmin_path), ] # From 5.7.6 version onwards, bootstrap is done via mysqld with the # --initialize-insecure option, so no need to get information about the # sql system tables that need to be loaded. 
if version < (5, 7, 6): system_tables = get_tool_path(basedir, "mysql_system_tables.sql", False) system_tables_data = get_tool_path(basedir, "mysql_system_tables_data.sql", False) test_data_timezone = get_tool_path(basedir, "mysql_test_data_timezone.sql", False) help_data = get_tool_path(basedir, "fill_help_tables.sql", False) locations.extend([("mysql_system_tables.sql", system_tables), ("mysql_system_tables_data.sql", system_tables_data), ("mysql_test_data_timezone.sql", test_data_timezone), ("fill_help_tables.sql", help_data), ]) if verbosity >= 3 and not quiet: print "# Location of files:" if cmd_file is not None: locations.append(("write startup command to", cmd_file)) for location in locations: print "# % 28s: %s" % location # Create the new mysql data with mysql_import_db-like process if not quiet: print "# Setting up empty database and mysql tables..." fnull = open(os.devnull, 'w') # For MySQL versions before 5.7.6, use regular bootstrap procedure. if version < (5, 7, 6): # Get bootstrap SQL statements sql = list() sql.append("CREATE DATABASE mysql;") sql.append("USE mysql;") innodb_disabled = False if mysqld_options: innodb_disabled = '--innodb=OFF' in mysqld_options for sqlfile in [system_tables, system_tables_data, test_data_timezone, help_data]: lines = open(sqlfile, 'r').readlines() # On MySQL 5.7.5, the root@localhost account creation was # moved from the system_tables_data sql file into the # mysql_install_db binary. Since we don't use mysql_install_db # directly we need to create the root user account ourselves. if (version is not None and version == (5, 7, 5) and sqlfile == system_tables_data): lines.extend(_CREATE_ROOT_USER) for line in lines: line = line.strip() # Don't fail when InnoDB is turned off (Bug#16369955) # (Ugly hack) if (sqlfile == system_tables and "SET @sql_mode_orig==@@SES" in line and innodb_disabled): for line in lines: if 'SET SESSION sql_mode=@@sql' in line: break sql.append(line) # Bootstrap to set up mysql tables cmd = [ mysqld_path, "--no-defaults", "--bootstrap", "--datadir={0}".format(new_data), "--basedir={0}".format(os.path.abspath(mysql_basedir)), ] if verbosity >= 1 and not quiet: proc = subprocess.Popen(cmd, shell=False, stdin=subprocess.PIPE) else: proc = subprocess.Popen(cmd, shell=False, stdin=subprocess.PIPE, stdout=fnull, stderr=fnull) proc.communicate('\n'.join(sql)) # From 5.7.6 onwards, mysql_install_db has been replaced by mysqld and # the --initialize option else: cmd = [ mysqld_path, "--no-defaults", "--initialize-insecure=on", "--datadir={0}".format(new_data), "--basedir={0}".format(os.path.abspath(mysql_basedir)) ] if verbosity >= 1 and not quiet: proc = subprocess.Popen(cmd, shell=False, stdin=subprocess.PIPE) else: proc = subprocess.Popen(cmd, shell=False, stdin=subprocess.PIPE, stdout=fnull, stderr=fnull) # Wait for subprocess to finish res = proc.wait() # Kill subprocess just in case it didn't finish - Ok if proc doesn't exist if int(res) != 0: if os.name == "posix": try: os.kill(proc.pid, subprocess.signal.SIGTERM) except OSError as error: if not error.strerror.startswith("No such process"): raise UtilError("Failed to kill process with pid '{0}'" "".format(proc.pid)) else: ret_code = subprocess.call("taskkill /F /T /PID " "{0}".format(proc.pid), shell=True) # return code 0 means it was successful and 128 means it tried # to kill a process that doesn't exist if ret_code not in (0, 128): raise UtilError("Failed to kill process with pid '{0}'. 
" "Return code {1}".format(proc.pid, ret_code)) # Drop the bootstrap file if os.path.isfile("bootstrap.sql"): os.unlink("bootstrap.sql") # Start the instance if not quiet: print "# Starting new instance of the server..." # If the user is not the same as the user running the script... # and this is a Posix system... and we are running as root if user_change_as_root(options): subprocess.call(['chown', '-R', user, new_data]) subprocess.call(['chgrp', '-R', user, new_data]) socket_path = os.path.join(new_data, 'mysql.sock') # If socket path is too long, use mkdtemp to create a tmp dir and # use it instead to store the socket if os.name == 'posix' and len(socket_path) > MAX_SOCKET_PATH_SIZE: socket_path = os.path.join(tempfile.mkdtemp(), 'mysql.sock') if not quiet: print("# WARNING: The socket file path '{0}' is too long (>{1}), " "using '{2}' instead".format( os.path.join(new_data, 'mysql.sock'), MAX_SOCKET_PATH_SIZE, socket_path)) cmd = { 'datadir': '--datadir={0}'.format(new_data), 'tmpdir': '--tmpdir={0}'.format(new_data), 'pid-file': '--pid-file={0}'.format( os.path.join(new_data, "clone.pid")), 'port': '--port={0}'.format(new_port), 'server': '--server-id={0}'.format(options.get('new_id', 2)), 'basedir': '--basedir={0}'.format(mysql_basedir), 'socket': '--socket={0}'.format(socket_path), } if user: cmd.update({'user': '--user={0}'.format(user)}) if mysqld_options: if isinstance(mysqld_options, (list, tuple)): cmd.update(dict(zip(mysqld_options, mysqld_options))) else: new_opts = mysqld_options.strip(" ") # Drop the --mysqld= if new_opts.startswith("--mysqld="): new_opts = new_opts[9:] if new_opts.startswith('"') and new_opts.endswith('"'): list_ = shlex.split(new_opts.strip('"')) cmd.update(dict(zip(list_, list_))) elif new_opts.startswith("'") and new_opts.endswith("'"): list_ = shlex.split(new_opts.strip("'")) cmd.update(dict(zip(list_, list_))) # Special case where there is only 1 option elif len(new_opts.split("--")) == 1: cmd.update({mysqld_options: mysqld_options}) else: list_ = shlex.split(new_opts) cmd.update(dict(zip(list_, list_))) # set of options that must be surrounded with quotes options_to_quote = set(["datadir", "tmpdir", "basedir", "socket", "pid-file"]) # Strip spaces from each option for key in cmd: cmd[key] = cmd[key].strip(' ') # Write startup command if specified if cmd_file is not None: if verbosity >= 0 and not quiet: print "# Writing startup command to file." 
cfile = open(cmd_file, 'w') comment = " Startup command generated by mysqlserverclone.\n" if os.name == 'posix' and cmd_file.endswith('.sh'): cfile.write("#!/bin/sh\n") cfile.write("#{0}".format(comment)) elif os.name == 'nt' and cmd_file.endswith('.bat'): cfile.write("REM{0}".format(comment)) else: cfile.write("#{0}".format(comment)) start_cmd_lst = ["{0}{1}{0} --no-defaults".format(quote_char, mysqld_path)] # build start command for key, val in cmd.iteritems(): if key in options_to_quote: val = "{0}{1}{0}".format(quote_char, val) start_cmd_lst.append(val) cfile.write("{0}\n".format(" ".join(start_cmd_lst))) cfile.close() if os.name == "nt" and verbosity >= 1: cmd.update({"console": "--console"}) start_cmd_lst = [mysqld_path, "--no-defaults"] sorted_keys = sorted(cmd.keys()) start_cmd_lst.extend([cmd[val] for val in sorted_keys]) if verbosity >= 1 and not quiet: if verbosity >= 2: print("# Startup command for new server:\n" "{0}".format(" ".join(start_cmd_lst))) proc = subprocess.Popen(start_cmd_lst, shell=False) else: proc = subprocess.Popen(start_cmd_lst, shell=False, stdout=fnull, stderr=fnull) # Try to connect to the new MySQL instance if not quiet: print "# Testing connection to new instance..." new_sock = None if os.name == "posix": new_sock = socket_path port_int = int(new_port) conn = { "user": "root", "passwd": "", "host": conn_val["host"] if conn_val is not None else "localhost", "port": port_int, "unix_socket": new_sock } server2_options = { 'conn_info': conn, 'role': "clone", } server2 = Server(server2_options) i = 0 while i < start_timeout: i += 1 time.sleep(1) try: server2.connect() i = start_timeout + 1 except: pass finally: if verbosity >= 1 and not quiet: print "# trying again..." if i == start_timeout: raise UtilError("Unable to communicate with new instance. " "Process id = {0}.".format(proc.pid)) elif not quiet: print "# Success!" # Set the root password if root_pass: if not quiet: print "# Setting the root password..." cmd = [mysqladmin_path, '--no-defaults', '-v', '-uroot'] if os.name == "posix": cmd.append("--socket={0}".format(new_sock)) else: cmd.append("--port={0}".format(int(new_port))) cmd.extend(["password", root_pass]) if verbosity > 0 and not quiet: proc = subprocess.Popen(cmd, shell=False) else: proc = subprocess.Popen(cmd, shell=False, stdout=fnull, stderr=fnull) # Wait for subprocess to finish res = proc.wait() if not quiet: conn_str = "# Connection Information:\n" conn_str += "# -uroot" if root_pass: conn_str += " -p%s" % root_pass if os.name == "posix": conn_str += " --socket=%s" % new_sock else: conn_str += " --port=%s" % new_port print conn_str print "#...done." fnull.close() def user_change_as_root(options): """ Detect if the user context must change for spawning server as root This method checks to see if the current user executing the utility is root and there is a different user being requested. If the user being requested is None or is root and we are running as root or the user being requested is the same as the current user, the method returns False. Note: This method only works for POSIX systems. It returns False for non-POSIX systems. options[in] Option dictionary Returns bool - user context must occur """ user = options.get('user', 'root') if not user or not os.name == 'posix': return False return not getpass.getuser() == user and getpass.getuser() == 'root' mysql-utilities-1.6.4/mysql/utilities/command/grep.py0000644001577100752670000002420012747670311022475 0ustar pb2usercommon# # Copyright (c) 2010, 2014, Oracle and/or its affiliates. 
All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This file contains grep for objects. """ import sys import mysql.connector from mysql.utilities.exception import FormatError, EmptyResultError, UtilError from mysql.utilities.common.ip_parser import parse_connection from mysql.utilities.common.format import print_list from mysql.utilities.common.options import obj2sql from mysql.utilities.common.server import set_ssl_opts_in_connection_info # Mapping database object to information schema names and fields. I # wish that the tables would have had simple names and not qualify the # names with the kind as well, e.g., not "table.table_name" but rather # "table.name". If that was the case, these kinds of scripts would be # a lot easier to develop. # # The fields in each entry are: # # field_name # The name of the column in the table where the field name to match # can be found. # field_type # The name of the type of the field. Usually a string. # object_name # The name of the column where the name of the object being searched # can be found. # object_type # The name of the type of the object being searched. Usually a # string. # schema_field # The name of the field where the schema name can be found. # table_name # The name of the information schema table to search in. # [body_field] # The name of the field in the table where the body of the object # can be found. This is an optional entry since not all objects have # bodies. 
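#
# As an illustration (not part of the mapping itself), _make_select()
# below expands the 'table' entry into a statement of roughly this
# shape:
#
#   SELECT 'TABLE' AS `Object Type`, table_name AS `Object Name`,
#          table_schema AS `Database`, 'TABLE' AS `Field Type`,
#          table_name AS `Field`
#   FROM information_schema.tables
#   WHERE table_name LIKE '<pattern>'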
_OBJMAP = { 'partition': { 'field_name': 'partition_name', 'object_name': 'table_name', 'object_type': "'TABLE'", 'schema_field': 'table_schema', 'table_name': 'partitions', }, 'column': { 'field_name': 'column_name', 'object_name': 'table_name', 'table_name': 'columns', 'object_type': "'TABLE'", 'schema_field': 'table_schema', }, 'table': { 'field_name': 'table_name', 'object_name': 'table_name', 'table_name': 'tables', 'object_type': "'TABLE'", 'schema_field': 'table_schema', }, 'event': { 'field_name': 'event_name', 'object_name': 'event_name', 'table_name': 'events', 'object_type': "'EVENT'", 'schema_field': 'event_schema', 'body_field': 'event_body', }, 'routine': { 'field_name': 'routine_name', 'object_name': 'routine_name', 'table_name': 'routines', 'object_type': 'routine_type', 'schema_field': 'routine_schema', 'body_field': 'routine_body', }, 'trigger': { 'field_name': 'trigger_name', 'object_name': 'trigger_name', 'table_name': 'triggers', 'object_type': "'TRIGGER'", 'schema_field': 'trigger_schema', 'body_field': 'action_statement', }, 'database': { 'field_name': 'schema_name', 'object_name': 'schema_name', 'table_name': 'schemata', 'object_type': "'SCHEMA'", 'schema_field': 'schema_name', }, 'view': { 'field_name': 'table_name', 'object_name': 'table_name', 'table_name': 'views', 'object_type': "'VIEW'", 'schema_field': 'table_schema', 'body_field': 'view_definition', }, 'user': { 'select_option': 'DISTINCT', 'field_name': 'grantee', 'object_name': 'grantee', 'table_name': 'schema_privileges', 'object_type': "'USER'", 'schema_field': 'table_schema', 'body_field': 'privilege_type', }, } _GROUP_MATCHES_FRM = """ SELECT `Object Type`, `Object Name`, `Database`, `Field Type`, GROUP_CONCAT(`Field`) AS `Matches` FROM ({0}) AS all_results GROUP BY `Object Type`, `Database`, `Object Name`, `Field Type`""" _SELECT_TYPE_FRM = """ SELECT {select_option} {object_type} AS `Object Type`, {object_name} AS `Object Name`, {schema_field} AS `Database`, {field_type} AS `Field Type`, {field_name} AS `Field` FROM information_schema.{table_name} WHERE {condition} """ def _make_select(objtype, pattern, database_pattern, check_body, use_regexp): """Generate a SELECT statement for finding an object. """ options = { 'pattern': obj2sql(pattern), 'regex': 'REGEXP' if use_regexp else 'LIKE', 'select_option': '', 'field_type': "'" + objtype.upper() + "'", } try: options.update(_OBJMAP[objtype]) except KeyError: raise UtilError("Invalid object type '{0}'. Use --help to see allowed " "values.".format(objtype)) # Build a condition for inclusion in the select condition = "{field_name} {regex} {pattern}".format(**options) if check_body and "body_field" in options: condition += " OR {body_field} {regex} {pattern}".format(**options) if database_pattern: options['database_pattern'] = obj2sql(database_pattern) condition = ("({0}) AND {schema_field} {regex} {database_pattern}" "".format(condition, **options)) options['condition'] = condition return _SELECT_TYPE_FRM.format(**options) def _spec(info): """Create a server specification string from an info structure. """ result = "%(user)s:*@%(host)s:%(port)s" % info if "unix_socket" in info: result += ":" + info["unix_socket"] return result def _join_words(words, delimiter=",", conjunction="and"): """Join words together for nice printout. 
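    words[in]        list of words to join
    delimiter[in]    separator used between the words (default ",")
    conjunction[in]  word placed before the last item (default "and")
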
>>> _join_words(["first", "second", "third"]) 'first, second, and third' >>> _join_words(["first", "second"]) 'first and second' >>> _join_words(["first"]) 'first' """ if len(words) == 1: return words[0] elif len(words) == 2: return ' {0} '.format(conjunction).join(words) else: return '{0} '.format(delimiter).join(words[0:-1]) + \ "%s %s %s" % (delimiter, conjunction, words[-1]) ROUTINE = 'routine' EVENT = 'event' TRIGGER = 'trigger' TABLE = 'table' DATABASE = 'database' VIEW = 'view' USER = 'user' COLUMN = 'column' OBJECT_TYPES = _OBJMAP.keys() class ObjectGrep(object): """Grep for objects """ def __init__(self, pattern, database_pattern=None, types=OBJECT_TYPES, check_body=False, use_regexp=False): """Constructor pattern[in] pattern to match database_pattern[in] database pattern to match (if present) default - None = do not match database types[in] list of object types to search check_body[in] if True, search body of routines default = False use_regexp[in] if True, use regexp for compare default = False """ stmts = [_make_select(t, pattern, database_pattern, check_body, use_regexp) for t in types] self.__sql = _GROUP_MATCHES_FRM.format("UNION".join(stmts)) # Need to save the pattern for informative error messages later self.__pattern = pattern self.__types = types def sql(self): """Get the SQL command Returns string - SQL statement """ return self.__sql def execute(self, connections, output=sys.stdout, connector=mysql.connector, **kwrds): """Execute the search for objects This method searches for objects that match a search criteria for one or more servers. connections[in] list of connection parameters output[in] file stream to display information default = sys.stdout connector[in] connector to use default = mysql.connector kwrds[in] dictionary of options format format for display default = GRID """ fmt = kwrds.get('format', "grid") charset = kwrds.get('charset', None) ssl_opts = kwrds.get('ssl_opts', {}) entries = [] for info in connections: conn = parse_connection(info) if not conn: msg = "'%s' is not a valid connection specifier" % (info,) raise FormatError(msg) if charset: conn['charset'] = charset info = conn conn['host'] = conn['host'].replace("[", "") conn['host'] = conn['host'].replace("]", "") if connector == mysql.connector: set_ssl_opts_in_connection_info(ssl_opts, info) connection = connector.connect(**info) if not charset: # If no charset provided, get it from the # "character_set_client" server variable. cursor = connection.cursor() cursor.execute("SHOW VARIABLES LIKE 'character_set_client'") res = cursor.fetchall() connection.set_charset_collation(charset=str(res[0][1])) cursor.close() cursor = connection.cursor() cursor.execute(self.__sql) entries.extend([tuple([_spec(info)] + list(row)) for row in cursor]) headers = ["Connection"] headers.extend(col[0].title() for col in cursor.description) if len(entries) > 0 and output: print_list(output, fmt, headers, entries) else: msg = "Nothing matches '%s' in any %s" % \ (self.__pattern, _join_words(self.__types, conjunction="or")) raise EmptyResultError(msg) mysql-utilities-1.6.4/mysql/utilities/command/indexcheck.py0000755001577100752670000001624712747670311023664 0ustar pb2usercommon# # Copyright (c) 2010, 2016, Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. 
# # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This file contains the check index utility. It is used to check for duplicate or redundant indexes for a list of database (operates on all tables in each database), a list of tables in the for db.table, or all tables in all databases except internal databases. """ from mysql.utilities.exception import UtilError from mysql.utilities.common.server import connect_servers from mysql.utilities.common.database import Database from mysql.utilities.common.options import PARSE_ERR_OBJ_NAME_FORMAT from mysql.utilities.common.pattern_matching import parse_object_name from mysql.utilities.common.table import Table from mysql.utilities.common.sql_transform import (is_quoted_with_backticks, quote_with_backticks, remove_backtick_quoting) def check_index(src_val, table_args, options): """Check for duplicate or redundant indexes for one or more tables This method will examine the indexes for one or more tables and identify any indexes that are potential duplicates or redundant. It prints the equivalent DROP statements if selected. src_val[in] a dictionary containing connection information for the source including: (user, password, host, port, socket) table_args[in] list of tables in the form 'db.table' or 'db' options[in] dictionary of options to include: show-drops : show drop statements for dupe indexes skip : skip non-existent tables verbosity : print extra information show-indexes : show all indexes for each table index-format : index format = sql, table, tab, csv worst : show worst performing indexes best : show best performing indexes report-indexes : reports tables without PK or UK Returns bool True = success, raises UtilError if error """ # Get options show_drops = options.get("show-drops", False) skip = options.get("skip", False) verbosity = options.get("verbosity", False) show_indexes = options.get("show-indexes", False) index_format = options.get("index-format", False) stats = options.get("stats", False) first_indexes = options.get("best", None) last_indexes = options.get("worst", None) report_indexes = options.get("report-indexes", False) # Try to connect to the MySQL database server. conn_options = { 'quiet': verbosity == 1, 'version': "5.0.0", } servers = connect_servers(src_val, None, conn_options) source = servers[0] db_list = [] # list of databases table_list = [] # list of all tables to process # Build a list of objects to process # 1. start with db_list if no objects present on command line # 2. process command line options. # 3. loop through database list and add all tables # 4. check indexes # Get sql_mode value set on servers sql_mode = source.select_variable("SQL_MODE") # Perform the options check here. Loop through objects presented. for obj in table_args: m_obj = parse_object_name(obj, sql_mode) # Check if a valid database/table name is specified. if m_obj[0] is None: raise UtilError(PARSE_ERR_OBJ_NAME_FORMAT.format( obj_name=obj, option="the database/table arguments")) else: db_name, obj_name = m_obj if obj_name: # Table specified table_list.append(obj) # Else we are operating on a specific database. else: # Remove backtick quotes. 
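                # (e.g. a database argument given as `my db` arrives
                # quoted; it is stored unquoted here and re-quoted below
                # when the table list is built)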
db_name = remove_backtick_quoting(db_name, sql_mode) \ if is_quoted_with_backticks(db_name, sql_mode) else db_name db_list.append(db_name) # Loop through database list adding tables for db in db_list: db_source = Database(source, db) db_source.init() tables = db_source.get_db_objects("TABLE") if not tables and verbosity >= 1: print "# Warning: database %s does not exist. Skipping." % (db) for table in tables: table_list.append("{0}.{1}".format(quote_with_backticks(db, sql_mode), quote_with_backticks(table[0], sql_mode))) # Fail if no tables to check if not table_list: raise UtilError("No tables to check.") if verbosity > 1: print "# Checking indexes..." # Check indexes for each table in the list for table_name in table_list: tbl_options = { 'verbose': verbosity >= 1, 'get_cols': False, 'quiet': verbosity is None or verbosity < 1 } tbl = Table(source, table_name, tbl_options) exists = tbl.exists() if not exists and not skip: raise UtilError("Table %s does not exist. Use --skip " "to skip missing tables." % table_name) if exists: if not tbl.get_indexes(): if verbosity > 1 or report_indexes: print "# Table %s is not indexed." % (table_name) else: if show_indexes: tbl.print_indexes(index_format, verbosity) # Show if table has primary key if verbosity > 1 or report_indexes: if not tbl.has_primary_key(): if not tbl.has_unique_key(): print("# Table {0} does not contain neither a " "PRIMARY nor UNIQUE key.".format(table_name)) else: print("# Table {0} does not contain a PRIMARY key." "".format(table_name)) tbl.check_indexes(show_drops) # Show best and/or worst indexes if stats: if first_indexes is not None: tbl.show_special_indexes(index_format, first_indexes, True) if last_indexes is not None: tbl.show_special_indexes(index_format, last_indexes) if verbosity > 1: print "#" if verbosity > 1: print "# ...done." mysql-utilities-1.6.4/mysql/utilities/command/grants.py0000644001577100752670000002570712747670311023053 0ustar pb2usercommon# # Copyright (c) 2014, 2016, Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This file contains the commands to show the grantees and respective grants over a set of objects. """ from collections import defaultdict from mysql.utilities.common.database import Database from mysql.utilities.common.grants_info import (DATABASE_TYPE, ROUTINE_TYPE, TABLE_TYPE, get_grantees, filter_grants, DATABASE_LEVEL, OBJECT_LEVEL, GLOBAL_LEVEL) from mysql.utilities.common.messages import ERROR_USER_WITHOUT_PRIVILEGES from mysql.utilities.common.server import connect_servers from mysql.utilities.common.sql_transform import (is_quoted_with_backticks, quote_with_backticks) from mysql.utilities.common.tools import join_and_build_str from mysql.utilities.common.user import User from mysql.utilities.exception import UtilError def _check_privileges(server): """Verify required privileges to check grantee privileges. 
server[in] Instance of Server class. This method checks if the used User for the server possesses the required privileges get the list of grantees and respective grants for the objects. Specifically, the following privilege is required: SELECT on mysql.* An exception is thrown if the user doesn't have this privilege. """ user_obj = User(server, "{0}@{1}".format(server.user, server.host)) has_privilege = user_obj.has_privilege('mysql', '*', 'SELECT') if not has_privilege: raise UtilError(ERROR_USER_WITHOUT_PRIVILEGES.format( user=server.user, host=server.host, port=server.port, operation='read the available grants', req_privileges="SELECT on mysql.*" )) def validate_obj_type_dict(server, obj_type_dict): """Validates the dictionary of objects against the specified server This function builds a dict with the types of the objects in obj_type_dict, filtering out non existing databases and objects. Returns a dictionary with only the existing objects, using object_types as keys and as values a list of tuples (, ). """ valid_obj_dict = defaultdict(list) server_dbs = set(row[0] for row in server.get_all_databases( ignore_internal_dbs=False)) argument_dbs = set(obj_type_dict.keys()) # Get non existing_databases and dbs to check non_existing_dbs = argument_dbs.difference(server_dbs) dbs_to_check = server_dbs.intersection(argument_dbs) if non_existing_dbs: if len(non_existing_dbs) > 1: plurals = ('s', '', 'them') else: plurals = ('', 'es', 'it') print('# WARNING: specified database{0} do{1} not ' 'exist on base server and will be skipped along ' 'any tables and routines belonging to {2}: ' '{3}.'.format(plurals[0], plurals[1], plurals[2], ", ".join(non_existing_dbs))) # Get sql_mode value set on servers sql_mode = server.select_variable("SQL_MODE") # Now for each db that actually exists, get the type of the specified # objects for db_name in dbs_to_check: db = Database(server, db_name) # quote database name if necessary quoted_db_name = db_name if not is_quoted_with_backticks(db_name, sql_mode): quoted_db_name = quote_with_backticks(db_name, sql_mode) # if wilcard (db.*) is used add all supported objects of the database if '*' in obj_type_dict[db_name]: obj_type_dict[db_name] = obj_type_dict[db_name] - set('*') tables = (table[0] for table in db.get_db_objects('TABLE')) obj_type_dict[db_name] = obj_type_dict[db_name] | set(tables) procedures = (proc[0] for proc in db.get_db_objects('PROCEDURE')) obj_type_dict[db_name] = obj_type_dict[db_name] | set(procedures) functions = (proc[0] for proc in db.get_db_objects('FUNCTION')) obj_type_dict[db_name] = obj_type_dict[db_name] | set(functions) for obj_name in obj_type_dict[db_name]: if obj_name is None: # We must consider the database itself valid_obj_dict[DATABASE_TYPE].append((quoted_db_name, quoted_db_name)) else: # get quoted name for obj_name quoted_obj_name = obj_name if not is_quoted_with_backticks(obj_name, sql_mode): quoted_obj_name = quote_with_backticks(obj_name, sql_mode) # Test if the object exists and if it does, test if it # is one of the supported object types, else # print a warning and skip the object obj_type = db.get_object_type(obj_name) if obj_type is None: print("# WARNING: specified object does not exist. " "{0}.{1} will be skipped." 
"".format(quoted_db_name, quoted_obj_name)) elif 'PROCEDURE' in obj_type or 'FUNCTION' in obj_type: valid_obj_dict[ROUTINE_TYPE].append((quoted_db_name, quoted_obj_name)) elif 'TABLE' in obj_type: valid_obj_dict[TABLE_TYPE].append((quoted_db_name, quoted_obj_name)) else: print('# WARNING: specified object is not supported ' '(not a DATABASE, FUNCTION, PROCEDURE or TABLE),' ' as such it will be skipped: {0}.{1}.' ''.format(quoted_db_name, quoted_obj_name)) return valid_obj_dict def check_grants(server_cnx_val, options, dict_of_objects): """Show list of privileges over a set of objects This function creates a GrantShow object which shows the list of users with (the optionally specified list of ) privileges over the specified set of objects. server_cnx_val[in] Dictionary with the connection values to the server. options[in] Dictionary of options (verbosity, privileges, show_mode). dict_of_objects[in] Dictionary of objects (set of databases, tables and procedures) by database to check. """ # Create server connection: server = connect_servers(server_cnx_val, None, options)[0] # Check user permissions to consult the grant information. _check_privileges(server) # Validate the dict of objects against our server. valid_dict_of_objects = validate_obj_type_dict(server, dict_of_objects) # Get optional list of required privileges req_privs = set(options['privileges']) if options['privileges'] else None # Get hide_inherit_grants option inherit_level_str = options['inherit_level'].lower() inherit_level = {"global": GLOBAL_LEVEL, "database": DATABASE_LEVEL, "object": OBJECT_LEVEL}[inherit_level_str] # If we specify some privileges that are not valid for all the objects # print warning message stating that some will be ignored. if req_privs: for obj_type in valid_dict_of_objects: # get list of privileges that applies to the object type filtered_req_privs = filter_grants(req_privs, obj_type) # if the size of the set is different that means that some of the # privileges cannot be applied to this object type, print warning if len(filtered_req_privs) != len(req_privs): if obj_type.upper() == DATABASE_TYPE: obj_lst = [obj_tpl[0] for obj_tpl in valid_dict_of_objects[obj_type]] else: obj_lst = [".".join(obj_tpl) for obj_tpl in valid_dict_of_objects[obj_type]] obj_lst_str = join_and_build_str(sorted(obj_lst)) missing_privs = sorted(req_privs - filtered_req_privs) priv_str = join_and_build_str(missing_privs) verb = "do" if len(missing_privs) > 1 else "does" print("# WARNING: {0} {1} not apply to {2}s " "and will be ignored for: {3}.".format( priv_str, verb, obj_type.lower(), obj_lst_str)) # get the grantee information dictionary grantee_info_dict = get_grantees(server, valid_dict_of_objects, req_privileges=req_privs, inherit_level=inherit_level) # Print the information obj_type_lst = [DATABASE_TYPE, TABLE_TYPE, ROUTINE_TYPE] for obj_type in obj_type_lst: if obj_type in grantee_info_dict: # Sort by object name for obj_name in sorted(grantee_info_dict[obj_type]): print("\n# {0} {1}:".format(obj_type, obj_name)) if options['show_mode'] == 'users': # Sort by grantee name output_str = ", ".join( sorted(grantee_info_dict[obj_type][obj_name].keys())) print("# - {0}".format(output_str)) elif options['show_mode'] == 'user_grants': # Sort by grantee name for grantee, priv_set in sorted( grantee_info_dict[obj_type][obj_name].iteritems()): # print privileges sorted by name print("# - {0} : {1}".format( grantee, ", ".join(sorted(priv_set)))) else: # raw mode # Sort by grantee name for grantee in sorted( 
grantee_info_dict[obj_type][obj_name].keys()): user = User(server, grantee) grant_stms = sorted( user.get_grants_for_object(obj_name, obj_type)) if grant_stms: print("# - For {0}".format(grantee)) for grant_stm in grant_stms: print("{0}".format(grant_stm)) mysql-utilities-1.6.4/mysql/utilities/command/check_rpl.py0000644001577100752670000003313412747670311023500 0ustar pb2usercommon# # Copyright (c) 2010, 2014, Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This file contains the check replication functionality to verify a replication setup. """ from mysql.utilities.exception import UtilRplError, UtilRplWarn from mysql.utilities.common.server import connect_servers from mysql.utilities.common.replication import Replication, MasterInfo _PRINT_WIDTH = 75 _RPL_HOST, _RPL_USER = 1, 2 def _get_replication_tests(rpl, options): """Return list of replication test function pointers. This list can be used to iterate over the replication tests for ensuring a properly configured master and slave topology. """ return [ _TestMasterBinlog(rpl, options), _TestBinlogExceptions(rpl, options), _TestRplUser(rpl, options), _TestServerIds(rpl, options), _TestUUIDs(rpl, options), _TestSlaveConnection(rpl, options), _TestMasterInfo(rpl, options), _TestInnoDB(rpl, options), _TestStorageEngines(rpl, options), _TestLCTN(rpl, options), _TestSlaveBehindMaster(rpl, options), ] def check_replication(master_vals, slave_vals, options): """Check replication among a master and a slave. 
master_vals[in] Master connection in form: user:passwd@host:port:socket or login-path:port:socket slave_vals[in] Slave connection in form user:passwd@host:port:socket or login-path:port:socket options[in] dictionary of options (verbosity, quiet, pedantic) Returns bool - True if all tests pass, False if errors, warnings, failures """ quiet = options.get("quiet", False) width = options.get("width", 75) slave_status = options.get("slave_status", False) test_errors = False conn_options = { 'quiet': quiet, 'src_name': "master", 'dest_name': 'slave', 'version': "5.0.0", 'unique': True, } certs_paths = {} if 'ssl_ca' in dir(options) and options.ssl_ca is not None: certs_paths['ssl_ca'] = options.ssl_ca if 'ssl_cert' in dir(options) and options.ssl_cert is not None: certs_paths['ssl_cert'] = options.ssl_cert if 'ssl_key' in dir(options) and options.ssl_key is not None: certs_paths['ssl_key'] = options.ssl_key conn_options.update(certs_paths) servers = connect_servers(master_vals, slave_vals, conn_options) rpl_options = options.copy() rpl_options['verbosity'] = options.get("verbosity", 0) > 0 # Create an instance of the replication object rpl = Replication(servers[0], servers[1], rpl_options) if not quiet: print "Test Description", print ' ' * (width - 24), print "Status" print '-' * width for test in _get_replication_tests(rpl, options): if not test.exec_test(): test_errors = True if slave_status and not quiet: try: print "\n#\n# Slave status: \n#" rpl.slave.show_status() except UtilRplError, e: print "ERROR:", e.errmsg if not quiet: print "# ...done." return test_errors class _BaseTestReplication(object): """ The _BaseTestReplication class can be used to determine if two servers are correctly configured for replication. This class provides a rpl_test() method which can be overridden to execute specific tests. """ def __init__(self, rpl, options): """Constructor rpl[in] Replicate class instance options[in] dictionary of options to include width, verbosity, pedantic, quiet """ self.options = options self.verbosity = options.get("verbosity", 0) self.quiet = options.get("quiet", False) self.suppress = options.get("suppress", False) self.width = options.get("width", _PRINT_WIDTH) self.rpl = rpl self.description = "" # Users must set this. self.warning = False def report_test(self, description): """Print the test category description[in] description of test """ self.description = description if not self.quiet: print self.description[0:self.width - 9], print ' ' * (self.width - len(self.description) - 8), def report_status(self, state, errors): """Print the results of a test. state[in] state of the test errors[in] list of errors Returns bool - True if errors detected during epilog reporting. """ if not self.quiet: print "[%s]" % state if type(errors) == list and len(errors) > 0: print for error in errors: print error print res = False if state == "pass": # Only execute epilog if test passes. try: self.report_epilog() except UtilRplError, e: print "ERROR:", e.errmsg res = True return res def rpl_test(self): """Execute replication test. Override this method to provide specific tests for replication. For example, checking that binary log is turn on for the master. This method returns a list of strings containing test-specific errors or an empty list to indicate a test has passed. Note: Do not include newline characters on error message strings. To create a suite of tests, create a method that returns a list of function pointers to this method of each derived class. 
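        A minimal derived test might look like this (sketch only; the
        concrete tests below follow the same pattern):

            class _TestExample(_BaseTestReplication):
                def rpl_test(self):
                    self.report_test("Checking example condition")
                    return []   # an empty list means the test passed
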
See the method _get_replication_tests() above for an example. """ pass def report_epilog(self): """Execute post-test reporting. Override this method for post-test reporting. """ pass def exec_test(self): """Execute a test for replication prerequisites This method will report the test to be run, execute the test, and if the result is None, report the test as 'pass' else report the test as 'FAIL' and print the error messages. If warning is set, the method will report 'WARN' instead of 'FAIL' and print the error messages. Should the test method raise an error, the status is set to 'FAIL' and the exception is reported. Returns bool True if test passes, False if warning or failure """ try: res = self.rpl_test() # Any errors raised is a failed test. except UtilRplError, e: if not self.quiet: self.report_status("FAIL", [e.errmsg]) else: print "Test: %s failed. Error: %s" % (self.description, e.errmsg) return False # Check for warnings except UtilRplWarn, e: if not self.quiet: self.report_status("WARN", [e.errmsg]) else: print "Test: %s had warnings. %s" % (self.description, e.errmsg) return False # Check to see if test passed or if there were errors returned. if (type(res) == list and res == []) or \ (type(res) == bool and res): return not self.report_status("pass", []) else: if self.warning: if not self.suppress: if not self.quiet: self.report_status("WARN", res) else: print "WARNING:", self.description for error in res: print error elif not self.quiet: self.report_status("WARN", res) else: self.report_status("FAIL", res) return False return True class _TestMasterBinlog(_BaseTestReplication): """Test master has binlog enabled. """ def rpl_test(self): """Execute test. """ # Check master for binary logging self.report_test("Checking for binary logging on master") return self.rpl.check_master_binlog() class _TestBinlogExceptions(_BaseTestReplication): """Test for binary log exceptions. """ def rpl_test(self): """Execute test. """ # Check binlog exceptions self.warning = True self.report_test("Are there binlog exceptions?") return self.rpl.get_binlog_exceptions() class _TestRplUser(_BaseTestReplication): """Test replication user permissions. """ def rpl_test(self): """Execute test. """ # Check rpl_user self.report_test("Replication user exists?") res = self.rpl.slave.get_status() if res is None or res == []: raise UtilRplError("Slave is not connected to a master.") return self.rpl.master.check_rpl_user(res[0][_RPL_USER], self.rpl.slave.host) class _TestServerIds(_BaseTestReplication): """Test server ids are different. """ def rpl_test(self): """Execute test. """ # Check server ids self.report_test("Checking server_id values") return self.rpl.check_server_ids() def report_epilog(self): """Report server_ids. """ if self.verbosity > 0 and not self.quiet: master_id = self.rpl.master.get_server_id() slave_id = self.rpl.slave.get_server_id() print "\n master id = %s" % master_id print " slave id = %s\n" % slave_id class _TestUUIDs(_BaseTestReplication): """Test server uuids are different. """ def rpl_test(self): """Execute test. """ # Check server ids self.report_test("Checking server_uuid values") return self.rpl.check_server_uuids() def report_epilog(self): """Report server_ids. 
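        (Note: this epilog prints the master and slave server_uuid
        values when verbosity is enabled.)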
""" if self.verbosity > 0 and not self.quiet: master_uuid = self.rpl.master.get_server_uuid() slave_uuid = self.rpl.slave.get_server_uuid() print "\n master uuid = %s" % \ (master_uuid if master_uuid is not None else "Not " "supported.") print " slave uuid = %s\n" % \ (slave_uuid if slave_uuid is not None else "Not supported.") class _TestSlaveConnection(_BaseTestReplication): """Test whether slave can connect or is connected to the master. """ def rpl_test(self): """Execute test. """ # Check slave connection self.warning = True self.report_test("Is slave connected to master?") return self.rpl.check_slave_connection() class _TestMasterInfo(_BaseTestReplication): """Ensure master info file matches slave connection. """ def rpl_test(self): """Execute test. """ # Check master.info file self.warning = True m_info = MasterInfo(self.rpl.slave, self.options) self.report_test("Check master information file") return m_info.check_master_info() def report_epilog(self): """Report master info contents. """ if self.verbosity > 0 and not self.quiet: m_info = MasterInfo(self.rpl.slave, self.options) print "\n#\n# Master information file: \n#" m_info.show_master_info() print class _TestInnoDB(_BaseTestReplication): """Test InnoDB compatibility. """ def rpl_test(self): """Execute test. """ # Check InnoDB compatibility self.report_test("Checking InnoDB compatibility") return self.rpl.check_innodb_compatibility(self.options) class _TestStorageEngines(_BaseTestReplication): """Test storage engines lists such that slave has the same storage engines as the master. """ def rpl_test(self): """Execute test. """ # Checking storage engines self.report_test("Checking storage engines compatibility") return self.rpl.check_storage_engines(self.options) class _TestLCTN(_BaseTestReplication): """Test the LCTN settings of master and slave. """ def rpl_test(self): """Execute test. """ # Check lctn self.warning = True self.report_test("Checking lower_case_table_names settings") return self.rpl.check_lctn() def report_epilog(self): """Report lctn settings. """ if self.verbosity > 0 and not self.quiet: slave_lctn = self.rpl.slave.get_lctn() master_lctn = self.rpl.master.get_lctn() print "\n Master lower_case_table_names: %s" % master_lctn print " Slave lower_case_table_names: %s\n" % slave_lctn class _TestSlaveBehindMaster(_BaseTestReplication): """Test for slave being behind master. """ def rpl_test(self): """Execute test. """ # Check slave behind master self.report_test("Checking slave delay (seconds behind master)") return self.rpl.check_slave_delay() mysql-utilities-1.6.4/mysql/utilities/command/setup_rpl.py0000755001577100752670000001037112747670311023564 0ustar pb2usercommon# # Copyright (c) 2010, 2014, Oracle and/or its affiliates. All rights # reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This file contains the replicate utility. It is used to establish a master/slave replication topology among two servers. 
""" from mysql.utilities.exception import UtilError from mysql.utilities.common.server import connect_servers from mysql.utilities.common.replication import Replication from mysql.utilities.common.replication_ms import ReplicationMultiSource def setup_replication(master_vals, slave_vals, rpl_user, options, test_db=None): """Setup replication among a master and a slave. master_vals[in] Master connection in form user:passwd@host:port:sock slave_vals[in] Slave connection in form user:passwd@host:port:sock rpl_user[in] Replication user in the form user:passwd options[in] dictionary of options (verbosity, quiet, pedantic) test_db[in] Test replication using this database name (optional) default = None """ verbosity = options.get("verbosity", 0) conn_options = { 'src_name': "master", 'dest_name': 'slave', 'version': "5.0.0", 'unique': True, } servers = connect_servers(master_vals, slave_vals, conn_options) master = servers[0] slave = servers[1] rpl_options = options.copy() rpl_options['verbosity'] = verbosity > 0 # Create an instance of the replication object rpl = Replication(master, slave, rpl_options) errors = rpl.check_server_ids() for error in errors: print error # Check for server_id uniqueness if verbosity > 0: print "# master id = %s" % master.get_server_id() print "# slave id = %s" % slave.get_server_id() errors = rpl.check_server_uuids() for error in errors: print error # Check for server_uuid uniqueness if verbosity > 0: print "# master uuid = %s" % master.get_server_uuid() print "# slave uuid = %s" % slave.get_server_uuid() # Check InnoDB compatibility if verbosity > 0: print "# Checking InnoDB statistics for type and version conflicts." errors = rpl.check_innodb_compatibility(options) for error in errors: print error # Checking storage engines if verbosity > 0: print "# Checking storage engines..." errors = rpl.check_storage_engines(options) for error in errors: print error # Check master for binary logging print "# Checking for binary logging on master..." errors = rpl.check_master_binlog() if not errors == []: raise UtilError(errors[0]) # Setup replication print "# Setting up replication..." if not rpl.setup(rpl_user, 10): raise UtilError("Cannot setup replication.") # Test the replication setup. if test_db: rpl.test(test_db, 10) print "# ...done." def start_ms_replication(slave_vals, masters_vals, options): """Setup replication among a slave and multiple masters. slave_vals[in] Slave server connection dictionary. master_vals[in] List of master server connection dictionaries. options[in] Options dictionary. """ rplms = ReplicationMultiSource(slave_vals, masters_vals, options) daemon = options.get("daemon", None) if daemon == "start": rplms.start() elif daemon == "stop": rplms.stop() elif daemon == "restart": rplms.restart() else: try: # Start in foreground rplms.start(detach_process=False) except KeyboardInterrupt: # Stop multi-source replication rplms.stop_replication() mysql-utilities-1.6.4/mysql/utilities/command/dbimport.py0000644001577100752670000021447212747670311023374 0ustar pb2usercommon# # Copyright (c) 2010, 2016, Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This file contains the import operations that will import object metadata or table data. """ import csv import re import sys from collections import defaultdict from mysql.utilities.exception import UtilError, UtilDBError from mysql.utilities.common.database import Database from mysql.utilities.common.options import check_engine_options from mysql.utilities.common.pattern_matching import parse_object_name from mysql.utilities.common.table import Table from mysql.utilities.common.server import connect_servers from mysql.utilities.common.sql_transform import (quote_with_backticks, is_quoted_with_backticks, to_sql) # List of database objects for enumeration _DATA_DECORATE = "DATA FOR TABLE" _DATABASE, _TABLE, _VIEW, _TRIG, _PROC, _FUNC, _EVENT, _GRANT = "DATABASE", \ "TABLE", "VIEW", "TRIGGER", "PROCEDURE", "FUNCTION", "EVENT", "GRANT" _IMPORT_LIST = [_TABLE, _VIEW, _TRIG, _PROC, _FUNC, _EVENT, _GRANT, _DATA_DECORATE] _DEFINITION_LIST = [_TABLE, _VIEW, _TRIG, _PROC, _FUNC, _EVENT, _GRANT] _BASIC_COMMANDS = ["CREATE", "USE", "GRANT", "DROP", "SET"] _DATA_COMMANDS = ["INSERT", "UPDATE"] _RPL_COMMANDS = ["START", "STOP", "CHANGE"] _RPL_PREFIX = "-- " _RPL = len(_RPL_PREFIX) _SQL_LOG_BIN_CMD = "SET @@SESSION.SQL_LOG_" _GTID_COMMANDS = ["SET @MYSQLUTILS_TEMP_L", _SQL_LOG_BIN_CMD, "SET @@GLOBAL.GTID_PURG"] _GTID_PREFIX = 22 _GTID_SKIP_WARNING = ("# WARNING: GTID commands are present in the import " "file but the server does not support GTIDs. Commands " "are ignored.") _GTID_MISSING_WARNING = ("# WARNING: GTIDs are enabled on this server but the " "import file did not contain any GTID commands.") def _read_row(file_h, fmt, skip_comments=False): """Read a row of from the file. This method reads the file attempting to read and translate the data based on the format specified. file_h[in] Opened file handle fmt[in] One of SQL,CSV,TAB,GRID,or VERTICAL skip_comments[in] If True, do not return lines starting with '#' Returns (tuple) - one row of data """ warnings_found = [] if fmt == "sql": # Easiest - just read a row and return it. for row in file_h.readlines(): if row.startswith("# WARNING"): warnings_found.append(row) continue if not (row.startswith('#') or row.startswith('--')): # Handle multi-line statements (do not strip). # Note: delimiters have to be handled outside this function. if len(row.strip()) == 0: yield '' # empty row else: yield row # do not strip (can be multi-line) elif fmt == "vertical": # This format is a bit trickier. We need to read a set of rows that # encompass the data row. They will appear in this format: # ******
****** # col_a: value_a # col_b: value_b # ... # # Thus, we must read until the next header then return a tuple # containing all of the values from the right. We also need to # return an initial row with the column names on the left. write_header = False read_header = False header = [] data_row = [] for row in file_h.readlines(): # Show warnings from file if row.startswith("# WARNING"): warnings_found.append(row) continue # Process replication commands if row[0:_RPL] == _RPL_PREFIX: # find first word first_word = row[_RPL:row.find(' ', _RPL)].upper() if first_word in _RPL_COMMANDS: yield [row.strip('\n')] continue # Check for GTID commands elif len(row) > _GTID_PREFIX + _RPL and \ row[_RPL:_GTID_PREFIX + _RPL] in _GTID_COMMANDS: yield [row.strip('\n')] continue # Skip comment rows if row[0] == '#': if len(header) > 0: yield header header = [] if len(data_row) > 0: yield data_row data_row = [] if skip_comments: continue else: new_row = [row] yield new_row continue # If we find a header, and we've already read data, return the # row else this is the first header so we ignore it. if row[0] == '*': if row.find(" 1. row") > 0: read_header = True continue else: write_header = True read_header = False if write_header: write_header = False if len(header) > 0: yield header header = [] if len(data_row) > 0: yield data_row data_row = [] continue # Now, split the data into column header and column data # Saving column header for first row field = row.split(":") if len(field) == 2: if read_header: header.append(field[0].strip()) # strip \n from lines data_row.append(field[1][0:len(field[1]) - 1].strip()) elif len(field) == 4: # date field! if read_header: header.append(field[0].strip()) date_str = "%s:%s:%s" % (field[1], field[2], field[3].strip()) data_row.append(date_str) if len(data_row) > 0: yield data_row else: separator = "," # Use CSV reader to read the row if fmt == "csv": separator = "," elif fmt == "tab": separator = "\t" elif fmt == "grid": separator = "|" csv_reader = csv.reader(file_h, delimiter=separator) for row in csv_reader: # Ignore empty lines if not row: continue if row[0].startswith("# WARNING"): warnings_found.append(row[0]) continue # find first word if row[0][0:_RPL] == _RPL_PREFIX: first_word = \ row[0][_RPL:_RPL + row[0][_RPL:].find(' ')].upper() else: first_word = "" # Process replication commands if row[0][0:_RPL] == _RPL_PREFIX: if first_word in _RPL_COMMANDS: yield row # Check for GTID commands elif len(row[0]) > _GTID_PREFIX + _RPL and \ row[0][_RPL:_GTID_PREFIX + _RPL] in _GTID_COMMANDS: yield row elif fmt == "grid": if len(row[0]) > 0: if row[0][0] == '+': continue elif (row[0][0] == '#' or row[0][0:2] == "--") and \ not skip_comments: yield row else: new_row = [] for col in row[1:len(row) - 1]: new_row.append(col.strip()) yield new_row else: if (len(row[0]) == 0 or row[0][0] != '#' or row[0][0:2] != "--") or ((row[0][0] == '#' or row[0][0:2] == "--") and not skip_comments): yield row if warnings_found: print("CAUTION: The following warning messages were included in " "the import file:".format(len(warnings_found))) for row in warnings_found: print(row.strip('\n')) def _check_for_object_list(row, obj_type): """Check to see if object is in the list of valid objects. 
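    A match is a comment marker of the form "# <OBJECT TYPE> ..." as
    written by the export utilities; markers reporting "none found" are
    not treated as matches.
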
row[in] A row containing an object obj_type[in] Object type to find Returns (bool) - True = object is obj_type False = object is not obj_type """ if row[0:len(obj_type) + 2].upper() == "# %s" % obj_type: if row.find("none found") < 0: return True else: return False else: return False def read_next(file_h, fmt): """Read properly formatted import file and return the statements. This method reads the next object from the file returning a tuple containing the type of object - either a definition, SQL statement, or the beginning of data rows and the actual data from the file. It uses the _read_row() method to read the file returning either a list of SQL commands (i.e. from a --format=SQL file) or a list of the data from the file (_read_row() converts all non-SQL formatted files into lists). This allows the caller to request an object block at a time from the file without knowing the format of the file. file_h[in] Opened file handle fmt[in] One of SQL,CSV,TAB,GRID,or VERTICAL Returns (tuple) - ('SQL'|'DATA'|'BEGIN_DATA'|'', ) """ cmd_type = "" multiline = False delimiter = ';' skip_next_line = False first_occurrence = True previous_cmd_type = None if fmt == "sql": sql_cmd = "" for row in _read_row(file_h, "sql", True): first_word = row[0:row.find(' ')].upper() # find first word stripped_row = row.strip() # Avoid repeating strip() operation. # Skip these nonsense rows. if len(row) == 0 or row[0] == "#"or row[0:2] == "||": continue # Handle DELIMITER elif stripped_row.upper().startswith('DELIMITER'): if len(sql_cmd) > 0: # Yield previous SQL command. yield (cmd_type, sql_cmd) sql_cmd = '' # Reset SQL command (i.e. remove DELIMITER). # Get delimiter from statement "DELIMITER ". delimiter = stripped_row[10:] cmd_type = "sql" # Enable/disable multi-line according to the found delimiter. if delimiter != ';': multiline = True else: multiline = False elif multiline and stripped_row.endswith(delimiter): # Append last line to previous multi-line SQL and retrieve it, # removing trailing whitespaces and delimiter. sql_cmd = "{0}{1}".format(sql_cmd, row.rstrip()[0:-len(delimiter)]) yield (cmd_type, sql_cmd) sql_cmd = '' elif multiline: # Save multiple line statements. sql_cmd = "{0}{1}".format(sql_cmd, row) # Identify specific statements (command types). elif (len(row) > _GTID_PREFIX and row[0:_GTID_PREFIX] in _GTID_COMMANDS): # Remove trailing whitespaces and delimiter. sql_cmd = sql_cmd.rstrip()[0:-len(delimiter)] if len(sql_cmd) > 0: # Yield previous SQL command. yield (cmd_type, sql_cmd) cmd_type = "GTID_COMMAND" sql_cmd = row elif first_word in _BASIC_COMMANDS: # Remove trailing whitespaces and delimiter. sql_cmd = sql_cmd.rstrip()[0:-len(delimiter)] if len(sql_cmd) > 0: # Yield previous sql command. yield (cmd_type, sql_cmd) cmd_type = "sql" sql_cmd = row elif first_word in _RPL_COMMANDS: # Remove trailing whitespaces and delimiter. sql_cmd = sql_cmd.rstrip()[0:-len(delimiter)] if len(sql_cmd) > 0: # Yield previous SQL command. yield (cmd_type, sql_cmd) cmd_type = "RPL_COMMAND" sql_cmd = row elif first_word in _DATA_COMMANDS: # Remove trailing whitespaces and delimiter. sql_cmd = sql_cmd.rstrip()[0:-len(delimiter)] if len(sql_cmd) > 0: # Yield previous sql command. yield (cmd_type, sql_cmd) cmd_type = "DATA" sql_cmd = row # If does not match previous conditions but ends with the delimiter # then return the current SQL command. elif stripped_row.endswith(delimiter): # First, yield previous SQL command if it ends with delimiter. 
if sql_cmd.strip().endswith(delimiter): yield (cmd_type, sql_cmd.rstrip()[0:-len(delimiter)]) sql_cmd = '' # Then, append SQL command to previous and retrieve it. sql_cmd = "{0}{1}".format(sql_cmd, row.rstrip()[0:-len(delimiter)]) # Yield current SQL command. yield (cmd_type, sql_cmd) sql_cmd = '' # If does not end with the delimiter then append the SQL command. else: sql_cmd = "{0}{1}".format(sql_cmd, row) # Remove trailing whitespaces and delimiter from last line. sql_cmd = sql_cmd.rstrip()[0:-len(delimiter)] yield (cmd_type, sql_cmd) # Need last row. elif fmt == "raw_csv": csv_reader = csv.reader(file_h, delimiter=",") for row in csv_reader: if row: yield row else: found_obj = "" for row in _read_row(file_h, fmt, False): # find first word if row[0][0:_RPL] == _RPL_PREFIX: first_word = \ row[0][_RPL:_RPL + row[0][_RPL:].find(' ', _RPL)].upper() else: first_word = "" if row[0][0:_RPL] == _RPL_PREFIX and first_word in _RPL_COMMANDS: # join the parts if CSV or TAB if fmt in ['csv', 'tab']: # pylint: disable=E1310 yield("RPL_COMMAND", ", ".join(row).strip("--")) else: yield("RPL_COMMAND", row[0][_RPL:]) continue if row[0][0:_RPL] == _RPL_PREFIX and \ len(row[0]) > _GTID_PREFIX + _RPL and \ row[0][_RPL:_GTID_PREFIX + _RPL] in _GTID_COMMANDS: yield("GTID_COMMAND", row[0][_RPL:]) continue # Check for basic command if (first_word == "" and row[0][0:row[0].find(' ')].upper() in _BASIC_COMMANDS): yield("BASIC_COMMAND", row[0]) continue # Check to see if we have a marker for rows of objects or data for obj in _IMPORT_LIST: if _check_for_object_list(row[0], obj): if obj == _DATA_DECORATE: found_obj = "TABLE_DATA" cmd_type = "DATA" # We have a new table! name = row[0][len(_DATA_DECORATE) + 2:len(row[0])] name = name.strip() db_tbl_name = name.strip(":") yield ("BEGIN_DATA", db_tbl_name) else: found_obj = obj cmd_type = obj else: found_obj = "" if found_obj != "": break if found_obj != "": # For files with multiple databases, metadata about the # cmd_types appears more than once. Each time we are at a new # cmd_type we keep the first occurrence of such metadata and # ignore the rest of the occurrences. # reset the first_occurrence flag each time we change cmd_type if previous_cmd_type is None or previous_cmd_type != cmd_type: first_occurrence = True previous_cmd_type = cmd_type if first_occurrence: first_occurrence = False else: skip_next_line = True continue else: # We're reading rows here if (len(row[0]) > 0 and (row[0][0] == "#" or row[0][0:2] == "--")): continue else: # skip column_names only if we're not dealing with DATA if skip_next_line and cmd_type != 'DATA': skip_next_line = False continue else: yield (cmd_type, row) def _get_db(row): """Get the database name from the object. row[in] A row (list) of information from the file Returns (string) database name or None if not found """ db_name = None if row[0] in _DEFINITION_LIST or row[0] == "sql": if row[0] == "sql": # Need crude parse here for database statement. 
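            # e.g. an illustrative row[1] of
            #   "CREATE DATABASE IF NOT EXISTS util_test;"
            # yields "util_test" (the last token, with the trailing ';'
            # stripped)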
parts = row[1].split() # Identify the database name in statements: # DROP {DATABASE | SCHEMA} [IF EXISTS] db_name # CREATE {DATABASE | SCHEMA} [IF NOT EXISTS] db_name if (parts[0] in ('DROP', 'CREATE') and parts[1] in ('DATABASE', 'SCHEMA')): db_name = parts[len(parts) - 1].rstrip().strip(";") # USE db_name elif parts[0] == 'USE': db_name = parts[1].rstrip().strip(";") else: if row[0] == "GRANT": db_name = row[1][2] else: if len(row[1][0]) > 0 and \ row[1][0].upper() not in ('NONE', 'DEF'): db_name = row[1][0] # --display=BRIEF else: db_name = row[1][1] # --display=FULL return db_name def _build_create_table(db_name, tbl_name, engine, columns, col_ref=None, sql_mode=''): """Build the CREATE TABLE command for a table. This method uses the data from the _read_next() method to build a table from its parts as read from a non-SQL formatted file. db_name[in] Database name for the object tbl_name[in] Name of the table engine[in] Storage engine name for the table columns[in] A list of the column definitions for the table col_ref[in] A dictionary of column names/indexes sql_mode[in] The sql_mode set in the server Returns (string) the CREATE TABLE statement. """ if col_ref is None: col_ref = {} # Quote db_name and tbl_name with backticks if needed if not is_quoted_with_backticks(db_name, sql_mode): db_name = quote_with_backticks(db_name, sql_mode) if not is_quoted_with_backticks(tbl_name, sql_mode): tbl_name = quote_with_backticks(tbl_name, sql_mode) create_str = "CREATE TABLE %s.%s (\n" % (db_name, tbl_name) stop = len(columns) pri_keys = set() keys = set() key_constraints = defaultdict(set) col_name_index = col_ref.get("COLUMN_NAME", 0) col_type_index = col_ref.get("COLUMN_TYPE", 1) is_null_index = col_ref.get("IS_NULLABLE", 2) def_index = col_ref.get("COLUMN_DEFAULT", 3) col_key_index = col_ref.get("COLUMN_KEY", 4) const_name_index = col_ref.get("KEY_CONSTRAINT_NAME", 12) ref_tbl_index = col_ref.get("REFERENCED_TABLE_NAME", 8) ref_schema_index = col_ref.get("REFERENCED_TABLE_SCHEMA", 14) ref_col_index = col_ref.get("COL_NAME", 13) ref_col_ref = col_ref.get("REFERENCED_COLUMN_NAME", 15) ref_const_name = col_ref.get("CONSTRAINT_NAME", 7) update_rule = col_ref.get("UPDATE_RULE", 10) delete_rule = col_ref.get("DELETE_RULE", 11) used_columns = set() for column in range(0, stop): cur_col = columns[column] # Quote column name with backticks if needed col_name = cur_col[col_name_index] if not is_quoted_with_backticks(col_name, sql_mode): col_name = quote_with_backticks(col_name, sql_mode) if col_name not in used_columns: # Only add the column definitions to the CREATE string once. change_line = ",\n" if column > 0 else "" create_str = "{0}{1} {2} {3}".format(create_str, change_line, col_name, cur_col[col_type_index]) if cur_col[is_null_index].upper() != "YES": create_str += " NOT NULL" if len(cur_col[def_index]) > 0 and \ cur_col[def_index].upper() != "NONE": create_str += " DEFAULT %s" % cur_col[def_index] elif cur_col[is_null_index].upper == "YES": create_str += " DEFAULT NULL" # Add column to set of columns already used for the CREATE string. 
used_columns.add(col_name) if len(cur_col[col_key_index]) > 0: if cur_col[col_key_index] == "PRI": if cur_col[const_name_index] in ('`PRIMARY`', 'PRIMARY'): pri_keys.add(cur_col[ref_col_index]) else: if cur_col[const_name_index] not in ('`PRIMARY`', 'PRIMARY'): keys.add( (col_name, cur_col[col_key_index]) ) if cur_col[ref_col_index].startswith(col_name) and \ (not cur_col[ref_const_name] or cur_col[ref_const_name] == cur_col[const_name_index]): key_constraints[col_name].add( (cur_col[const_name_index], cur_col[ref_schema_index], cur_col[ref_tbl_index], cur_col[ref_col_index], cur_col[ref_col_ref], cur_col[update_rule], cur_col[delete_rule]) ) key_strs = [] const_strs = [] # Create primary key definition string. if len(pri_keys) > 0: key_list = [] for key in pri_keys: # Quote keys with backticks if needed if not is_quoted_with_backticks(key, sql_mode): # Handle multiple columns separated by a comma (,) cols = key.split(',') key = ','.join([quote_with_backticks(col, sql_mode) for col in cols]) key_list.append(key) key_str = "PRIMARY KEY({0})".format(",".join(key_list)) key_strs.append(key_str) for key, column_type in keys: key_type = 'UNIQUE ' if column_type == 'UNI' else '' if not key_constraints[key]: # Handle simple keys # Quote column key with backticks if needed if not is_quoted_with_backticks(key, sql_mode): # Handle multiple columns separated by a comma (,) cols = key.split(',') key = ','.join([quote_with_backticks(col, sql_mode) for col in cols]) key_str = "{key_type}KEY ({column})".format(key_type=key_type, column=key) key_strs.append(key_str) else: # Handle key with constraints for const_def in key_constraints[key]: # Keys for constraints or with specific name key_name = '' if const_def[0] and const_def[0] != const_def[3]: # Quote key name with backticks if needed if not is_quoted_with_backticks(const_def[0], sql_mode): key_name = '{0} '.format( quote_with_backticks(const_def[0], sql_mode)) else: key_name = '{0} '.format(const_def[0]) # Use constraint columns as key if available. 
if const_def[3]: key = const_def[3] # Quote column key with backticks if needed if not is_quoted_with_backticks(key, sql_mode): # Handle multiple columns separated by a comma (,) cols = key.split(',') key = ','.join([quote_with_backticks(col, sql_mode) for col in cols]) key_str = "{key_type}KEY {key_name}({column})".format( key_type=key_type, key_name=key_name, column=key ) key_strs.append(key_str) if const_def[2]: # Handle constraint (referenced_table_name found) const_name = const_def[0] # Quote constraint name with backticks if needed if const_name and not is_quoted_with_backticks(const_name, sql_mode): const_name = quote_with_backticks(const_name, sql_mode) fkey = const_def[3] # Quote fkey columns with backticks if needed if not is_quoted_with_backticks(fkey, sql_mode): # Handle multiple columns separated by a comma (,) cols = fkey.split(',') fkey = ','.join( [quote_with_backticks(col, sql_mode) for col in cols]) ref_key = const_def[4] # Quote reference key columns with backticks if needed if not is_quoted_with_backticks(ref_key, sql_mode): # Handle multiple columns separated by a comma (,) cols = ref_key.split(',') ref_key = ','.join( [quote_with_backticks(col, sql_mode) for col in cols]) ref_rules = '' if const_def[6] and const_def[6] == 'CASCADE': ref_rules = ' ON DELETE CASCADE' if const_def[5] and const_def[5] == 'CASCADE': ref_rules = '{0} ON UPDATE CASCADE'.format(ref_rules) key_str = (" CONSTRAINT {cstr} FOREIGN KEY ({fk}) " "REFERENCES {ref_schema}.{ref_table} " "({ref_column}){ref_rules}").format( cstr=const_name, fk=fkey, ref_schema=const_def[1], ref_table=const_def[2], ref_column=ref_key, ref_rules=ref_rules) const_strs.append(key_str) # Build remaining CREATE TABLE string key_strs.extend(const_strs) keys_str = ',\n '.join(key_strs) if keys_str: create_str = "{0},\n {1}\n)".format(create_str, keys_str) else: create_str = "{0}\n)".format(create_str) if engine and len(engine) > 0: create_str = "{0} ENGINE={1}".format(create_str, engine) create_str = "{0};".format(create_str) return create_str def _build_column_ref(row): """Build a dictionary of column references row[in] The header with column names. Returns (dictionary) where dict[col_name] = index position """ indexes = {} i = 0 for col in row: indexes[col.upper()] = i i += 1 return indexes def _build_create_objects(obj_type, db, definitions, sql_mode=''): """Build the CREATE and GRANT SQL statements for object definitions. This method takes the object information read from the file using the _read_next() method and constructs SQL definition statements for each object. It receives a block of objects and creates a statement for each object. obj_type[in] The object type db[in] The database definitions[in] The list of object definition data from the file sql_mode[in] The sql_mode set in the server Returns (string[]) - a list of SQL statements for the objects """ create_strings = [] skip_header = True obj_db = "" obj_name = "" col_list = [] stop = len(definitions) col_ref = {} engine = None # Now the tricky part. for i in range(0, stop): if skip_header: skip_header = False col_ref = _build_column_ref(definitions[i]) continue defn = definitions[i] # Read engine from first row and save old value. 
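        # defn is one data row of the block; the header row consumed
        # above supplied the column-name to index mapping in col_ref.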
old_engine = engine engine = defn[col_ref.get("ENGINE", 2)] create_str = "" if obj_type == "TABLE": if obj_db == "" and obj_name == "": obj_db = defn[col_ref.get("TABLE_SCHEMA", 0)] obj_name = defn[col_ref.get("TABLE_NAME", 1)] if (obj_db == defn[col_ref.get("TABLE_SCHEMA", 0)] and obj_name == defn[col_ref.get("TABLE_NAME", 1)]): col_list.append(defn) else: create_str = _build_create_table(obj_db, obj_name, old_engine, col_list, col_ref, sql_mode) create_strings.append(create_str) obj_db = defn[col_ref.get("TABLE_SCHEMA", 0)] obj_name = defn[col_ref.get("TABLE_NAME", 1)] col_list = [defn] # check for end. if i + 1 == stop: create_str = _build_create_table(obj_db, obj_name, engine, col_list, col_ref, sql_mode) create_strings.append(create_str) elif obj_type == "VIEW": # Quote table schema and name with backticks if needed if not is_quoted_with_backticks(defn[col_ref.get("TABLE_SCHEMA", 0)], sql_mode): obj_db = quote_with_backticks(defn[col_ref.get("TABLE_SCHEMA", 0)], sql_mode) else: obj_db = defn[col_ref.get("TABLE_SCHEMA", 0)] if not is_quoted_with_backticks(defn[col_ref.get("TABLE_NAME", 1)], sql_mode): obj_name = quote_with_backticks(defn[col_ref.get("TABLE_NAME", 1)], sql_mode) else: obj_name = defn[col_ref.get("TABLE_NAME", 1)] # Create VIEW statement create_str = ("CREATE ALGORITHM=UNDEFINED DEFINER={defr} " "SQL SECURITY {sec} VIEW {scma}.{tbl} AS {defv}; " ).format(defr=defn[col_ref.get("DEFINER", 2)], sec=defn[col_ref.get("SECURITY_TYPE", 3)], scma=obj_db, tbl=obj_name, defv=defn[col_ref.get("VIEW_DEFINITION", 4)]) create_strings.append(create_str) elif obj_type == "TRIGGER": # Quote required identifiers with backticks obj_db = quote_with_backticks(db, sql_mode) \ if not is_quoted_with_backticks(db, sql_mode) else db if not is_quoted_with_backticks(defn[col_ref.get("TRIGGER_NAME", 0)], sql_mode): obj_name = quote_with_backticks( defn[col_ref.get("TRIGGER_NAME", 0)], sql_mode ) else: obj_name = defn[col_ref.get("TRIGGER_NAME", 0)] if not is_quoted_with_backticks( defn[col_ref.get("EVENT_OBJECT_SCHEMA", 3)], sql_mode): evt_scma = quote_with_backticks( defn[col_ref.get("EVENT_OBJECT_SCHEMA", 3)], sql_mode ) else: evt_scma = defn[col_ref.get("EVENT_OBJECT_SCHEMA", 3)] if not is_quoted_with_backticks( defn[col_ref.get("EVENT_OBJECT_TABLE", 4)], sql_mode): evt_tbl = quote_with_backticks( defn[col_ref.get("EVENT_OBJECT_TABLE", 4)], sql_mode ) else: evt_tbl = defn[col_ref.get("EVENT_OBJECT_TABLE", 4)] # Create TRIGGER statement # Important Note: There is a bug in the server when backticks are # used in the trigger statement, i.e. the ACTION_STATEMENT value in # INFORMATION_SCHEMA.TRIGGERS is incorrect (see BUG##16291011). 
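# Editor's note: for orientation, the CREATE TRIGGER template assembled just
# below is filled in from INFORMATION_SCHEMA.TRIGGERS columns.  A rendered
# example, with every identifier and the body invented for illustration:
#
#   CREATE DEFINER=`root`@`localhost` TRIGGER `db1`.`trg1` BEFORE INSERT
#   ON `db1`.`t1` FOR EACH ROW SET @s = @s + NEW.c1;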
create_str = ("CREATE DEFINER={defr} " "TRIGGER {scma}.{trg} {act_t} {evt_m} " "ON {evt_s}.{evt_t} FOR EACH {act_o} {act_s};" ).format(defr=defn[col_ref.get("DEFINER", 1)], scma=obj_db, trg=obj_name, act_t=defn[col_ref.get("ACTION_TIMING", 6)], evt_m=defn[col_ref.get("EVENT_MANIPULATION", 2)], evt_s=evt_scma, evt_t=evt_tbl, act_o=defn[col_ref.get("ACTION_ORIENTATION", 5)], act_s=defn[col_ref.get("ACTION_STATEMENT", 7)]) create_strings.append(create_str) elif obj_type in ("PROCEDURE", "FUNCTION"): # Quote required identifiers with backticks obj_db = quote_with_backticks(db, sql_mode) \ if not is_quoted_with_backticks(db, sql_mode) else db if not is_quoted_with_backticks(defn[col_ref.get("NAME", 0)], sql_mode): obj_name = quote_with_backticks(defn[col_ref.get("NAME", 0)], sql_mode) else: obj_name = defn[col_ref.get("NAME", 0)] # Create PROCEDURE or FUNCTION statement if obj_type == "FUNCTION": func_str = " RETURNS %s" % defn[col_ref.get("RETURNS", 7)] if defn[col_ref.get("IS_DETERMINISTI", 3)] == 'YES': func_str = "%s DETERMINISTIC" % func_str else: func_str = "" create_str = ("CREATE DEFINER={defr}" " {type} {scma}.{name}({par_lst})" "{func_ret} {body};" ).format(defr=defn[col_ref.get("DEFINER", 5)], type=obj_type, scma=obj_db, name=obj_name, par_lst=defn[col_ref.get("PARAM_LIST", 6)], func_ret=func_str, body=defn[col_ref.get("BODY", 8)]) create_strings.append(create_str) elif obj_type == "EVENT": # Quote required identifiers with backticks obj_db = quote_with_backticks(db, sql_mode) \ if not is_quoted_with_backticks(db, sql_mode) else db if not is_quoted_with_backticks(defn[col_ref.get("NAME", 0)], sql_mode): obj_name = quote_with_backticks(defn[col_ref.get("NAME", 0)], sql_mode) else: obj_name = defn[col_ref.get("NAME", 0)] # Create EVENT statement create_str = ("CREATE EVENT {scma}.{name} " "ON SCHEDULE EVERY {int_v} {int_f} " "STARTS '{starts}' " ).format(scma=obj_db, name=obj_name, int_v=defn[col_ref.get("INTERVAL_VALUE", 5)], int_f=defn[col_ref.get("INTERVAL_FIELD", 6)], starts=defn[col_ref.get("STARTS", 8)] ) ends_index = col_ref.get("ENDS", 9) if len(defn[ends_index]) > 0 and \ defn[ends_index].upper() != "NONE": create_str = "%s ENDS '%s' " % (create_str, defn[ends_index]) if defn[col_ref.get("ON_COMPLETION", 11)] == "DROP": create_str = "%s ON COMPLETION NOT PRESERVE " % create_str if defn[col_ref.get("STATUS", 10)] == "DISABLED": create_str = "%s DISABLE " % create_str create_str = "%s DO %s;" % (create_str, defn[col_ref.get("BODY", 2)]) create_strings.append(create_str) elif obj_type == "GRANT": try: user, priv, db, tbl = defn[0:4] except: raise UtilError("Object data invalid: %s : %s" % (obj_type, defn)) if not tbl: tbl = "*" elif tbl.upper() == "NONE": tbl = "*" # Quote required identifiers with backticks obj_db = quote_with_backticks(db, sql_mode) \ if not is_quoted_with_backticks(db, sql_mode) else db obj_tbl = quote_with_backticks(tbl, sql_mode) \ if (tbl != '*' and not is_quoted_with_backticks(tbl, sql_mode)) else tbl # Create GRANT statement create_str = "GRANT %s ON %s.%s TO %s" % (priv, obj_db, obj_tbl, user) create_strings.append(create_str) elif obj_type in ["RPL_COMMAND", "GTID_COMMAND"]: create_strings.append([defn]) else: raise UtilError("Unknown object type discovered: %s" % obj_type) return create_strings def _build_col_metadata(obj_type, definitions): """Build a list of column metadata for a table. This method takes the object information read from the file using the _read_next() method and constructs a list of columns for any tables found. 
obj_type[in] The object type definitions[in] The list of object definition data from the file Returns (column_list[(table_name, [(field_name, definition)])]) """ skip_header = True obj_db = "" obj_name = "" col_list = [] table_col_list = [] stop = len(definitions) # Now the tricky part. for i in range(0, stop): if skip_header: skip_header = False continue defn = definitions[i] if obj_type == "TABLE": if obj_db == "" and obj_name == "": obj_db = defn[0] obj_name = defn[1] if obj_db == defn[0] and obj_name == defn[1]: col_list.append((defn[4], defn[5])) else: table_col_list.append((obj_name, col_list)) obj_db = defn[0] obj_name = defn[1] col_list = [(defn[4], defn[5])] # check for end. if i + 1 == stop: table_col_list.append((obj_name, col_list)) return table_col_list def _build_insert_data(col_names, tbl_name, data): """Build simple INSERT statements for data. col_names[in] A list of column names for the data tbl_name[in] Table name data[in] The data values Returns (string) the INSERT statement. """ # Handle NULL (and None) values, i.e. do not quote them as a string. quoted_data = [ 'NULL' if val in ('NULL', None) else to_sql(val) for val in data ] return "INSERT INTO %s (" % tbl_name + ",".join(col_names) + \ ") VALUES (" + ','.join(quoted_data) + ");" def _skip_sql(sql, options): """Check to see if we skip this SQL statement sql[in] SQL statement to evaluate options[in] Option dictionary containing the --skip_* options Returns (bool) True - skip the statement, False - do not skip """ prefix = sql[0:100].upper().strip() if prefix[0:len("CREATE")] == "CREATE": # need to test for tables, views, events, triggers, proc, func, db index = sql.find(" TABLE ") if index > 0: return options.get("skip_tables", False) index = sql.find(" VIEW ") if index > 0: return options.get("skip_views", False) index = sql.find(" TRIGGER ") if index > 0: return options.get("skip_triggers", False) index = sql.find(" PROCEDURE ") if index > 0: return options.get("skip_procs", False) index = sql.find(" FUNCTION ") if index > 0: return options.get("skip_funcs", False) index = sql.find(" EVENT ") if index > 0: return options.get("skip_events", False) index = sql.find(" DATABASE ") if index > 0: return options.get("skip_create", False) return False # If we skip create_db, need to skip the drop too elif prefix[0:len("DROP")] == "DROP": return options.get("skip_create", False) elif prefix[0:len("GRANT")] == "GRANT": return options.get("skip_grants", False) elif prefix[0:len("INSERT")] == "INSERT": return options.get("skip_data", False) elif prefix[0:len("UPDATE")] == "UPDATE": return options.get("skip_blobs", False) elif prefix[0:len("USE")] == "USE": return options.get("skip_create", False) return False def _skip_object(obj_type, options): """Check to see if we skip this object type obj_type[in] Type of object for the --skip_* option (e.g. "tables", "data", "views", etc.) 
options[in] Option dictionary containing the --skip_* options Returns (bool) True - skip the object, False - do not skip """ obj = obj_type.upper() if obj == "TABLE": return options.get("skip_tables", False) elif obj == "VIEW": return options.get("skip_views", False) elif obj == "TRIGGER": return options.get("skip_triggers", False) elif obj == "PROCEDURE": return options.get("skip_procs", False) elif obj == "FUNCTION": return options.get("skip_funcs", False) elif obj == "EVENT": return options.get("skip_events", False) elif obj == "GRANT": return options.get("skip_grants", False) elif obj == "CREATE_DB": return options.get("skip_create", False) elif obj == "DATA": return options.get("skip_data", False) elif obj == "BLOB": return options.get("skip_blobs", False) else: return False def _exec_statements(statements, destination, fmt, options, dryrun=False): """Execute a list of SQL statements. Execute SQL statements from the provided list in the destination server, according to the provided options. This method also manage autocommit and bulk insert options in order to optimize the performance of the statements execution. statements[in] A list of SQL statements to execute destination[in] A connection to the destination server fmt[in] Format of import file options[in] Option dictionary containing the --skip_* options dryrun[in] If True, print the SQL statements and do not execute Returns (bool) - True if all execute, raises error if one fails """ new_engine = options.get("new_engine", None) def_engine = options.get("def_engine", None) quiet = options.get("quiet", False) autocommit = options.get('autocommit', False) bulk_insert = not options.get('single', True) # Set autocommit and query options adequately. if autocommit and not destination.autocommit_set(): destination.toggle_autocommit(enable=1) elif not autocommit and destination.autocommit_set(): destination.toggle_autocommit(enable=0) query_opts = {'fetch': False, 'columns': False, 'commit': False} if bulk_insert: max_inserts = options.get('max_bulk_insert', 30000) count = 0 bulk_insert_start = None bulk_values = [] # Compile regexp to split INSERT values here, in order to reuse it # and improve performance of _parse_insert_statement(). re_value_split = re.compile("VALUES?", re.IGNORECASE) exec_commit = False # Process all statements. for statement in statements: # Each statement can be either a string or a list of strings (BLOB # statements). if (isinstance(statement, str) and (new_engine is not None or def_engine is not None) and statement[0:12].upper() == "CREATE TABLE"): # Add statements to substitute engine. i = statement.find(' ', 13) tbl_name = statement[13:i] st_list = destination.substitute_engine(tbl_name, statement, new_engine, def_engine, quiet) elif bulk_insert: # Bulk insert (if possible) to execute as a single statement. # Need to guard against lists of BLOB statements. if (isinstance(statement, str) and statement[0:6].upper().startswith('INSERT')): # Parse INSERT statement. insert_start, values = _parse_insert_statement(statement, re_value_split) if values is None: # Cannot bulk insert. if bulk_values: # Existing bulk insert to process. st_list = [",".join(bulk_values)] bulk_values = [] count = 0 else: st_list = [] st_list.append(statement) elif not bulk_values: # Start creating a new bulk insert. 
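# Editor's note: taken together, the branches below coalesce runs of
# single-row INSERTs that share the same "INSERT INTO tbl (cols)" prefix
# into one multi-row statement.  A self-contained sketch of that idea
# follows; it is deliberately simplified (no quoting awareness, no
# max_bulk_insert cap, and it would also merge statements the real code
# rejects, e.g. ones already carrying several value tuples).  The helper
# name is hypothetical.
def _coalesce_inserts_sketch(statements):
    """Merge consecutive single-row INSERTs sharing a prefix (sketch)."""
    import re
    split_re = re.compile("VALUES?", re.IGNORECASE)
    merged, prefix, values = [], None, []
    def _flush():
        # Emit any pending bulk INSERT built so far.
        if values:
            merged.append("{0} VALUES {1}".format(prefix, ",".join(values)))
            del values[:]
    for st in statements:
        parts = split_re.split(st)
        if len(parts) != 2:          # not a simple INSERT ... VALUES (...)
            _flush()
            merged.append(st)
            continue
        head, row = parts[0].strip(), parts[1].strip(" ;")
        if head != prefix:           # different table/options: start anew
            _flush()
            prefix = head
        values.append(row)
    _flush()
    return merged
# Example:
#   _coalesce_inserts_sketch(["INSERT INTO t (a) VALUES (1);",
#                             "INSERT INTO t (a) VALUES (2);"])
#   -> ["INSERT INTO t (a) VALUES (1),(2)"]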
bulk_insert_start = insert_start bulk_values.append( "{0} VALUES {1}".format(bulk_insert_start, values) ) count += 1 st_list = [] elif insert_start != bulk_insert_start: # Different INSERT found (table, options or syntax), # generate bulk insert statement to execute and initiate a # new bulk insert. st_list = [",".join(bulk_values)] bulk_values = [] count = 0 bulk_insert_start = insert_start bulk_values.append( "{0} VALUES {1}".format(bulk_insert_start, values) ) count += 1 elif count >= max_inserts: # Maximum bulk insert size reached (to avoid broken pipe # error), generate bulk to execute and initiate new one. st_list = [",".join(bulk_values)] bulk_values = [] count = 0 bulk_values.append( "{0} VALUES {1}".format(bulk_insert_start, values) ) else: bulk_values.append(values) count += 1 st_list = [] else: # Can be a regular statement or a list of BLOB statements # that must not be bundled together. if bulk_values: # Existing bulk insert to process. st_list = [",".join(bulk_values)] bulk_values = [] count = 0 else: st_list = [] if isinstance(statement, list): # list of BLOB data statements, either updates or inserts. st_list.extend(statement) else: # Other statements. st_list.append(statement) else: # Common statement, just add it to be executed. st_list = [statement] # Execute statements list. for st in st_list: # Execute query. try: if dryrun: print(st) elif fmt != "sql" or not _skip_sql(st, options): # Check query type to determine if a COMMIT is needed, in # order to avoid Error 1694 (Cannot modify SQL_LOG_BIN # inside transaction). if not autocommit: if st[0:_GTID_PREFIX].upper() == _SQL_LOG_BIN_CMD: # SET SQL_LOG_BIN command found. destination.commit() exec_commit = True destination.exec_query(st, options=query_opts) if exec_commit: # For safety, COMMIT after SET SQL_LOG_BIN command. destination.commit() exec_commit = False # It is not a good practice to catch the base Exception class, # instead all errors should be caught in a Util/Connector error. # Exception is only caught for safety (unanticipated errors). except UtilError as err: raise UtilError("Invalid statement:\n{0}" "\nERROR: {1}".format(st, err.errmsg)) except Exception as err: raise UtilError("Unexpected error:\n{0}".format(err)) if bulk_insert and bulk_values: # Make sure last bulk insert is executed. st = ",".join(bulk_values) try: if dryrun: print(st) elif fmt != "sql" or not _skip_sql(st, options): destination.exec_query(st, options=query_opts) except UtilError as err: raise UtilError("Invalid statement:\n{0}" "\nERROR: {1}".format(st, err.errmsg)) except Exception as err: # Exception is only caught for safety (unanticipated errors). raise UtilError("Unexpected error:\n{0}".format(err)) # Commit at the end (if autocommit is disabled). if not autocommit: destination.commit() return True def _parse_insert_statement(insert_stmt, regexp_split_values=None): """Parse an INSERT statement to build bulk insert. This method parses INSERT statements, separating the VALUES tuple from the beginning of the query (in order to build bulk insert). The method also verifies whether the statement is already a bulk insert or uses unsupported options/syntax, and in this case the initial statement is returned without any separated values. insert_stmt[in] INSERT statement to be parsed. regexp_split_values[in] Compiled regular expression to split the VALUES|VALUE of the INSERT statement. This parameter can be used for performance reasons, avoiding compiling the regexp at each call if not specified.
Returns a tuple with the start of the INSERT statement (without values) and the values, or the full statement and none if the INSERT syntax or query options are not supported or it is already a bulk insert. """ if not regexp_split_values: # Split statement by VALUES|VALUE. regexp_split_values = re.compile("VALUES?", re.IGNORECASE) insert_values = regexp_split_values.split(insert_stmt) try: values = insert_values[1] except IndexError: # INSERT statement does not contain 'VALUES'. # The following syntax are not supported to build bulk inserts: # - INSERT INTO tbl_name SET col_name= expr, ... # - INSERT INTO tbl_name SELECT ... return insert_stmt, None values = values.strip(' ;') # Check if already a bulk insert (if it has more than one tuple of values), # or if other options are used at the end (e.g., ON DUPLICATE KEY UPDATE). # In those cases, the original statement is returned (no bulk insert). prev_char = '' found = 0 skip_in_str = False # Find first closing bracket ')', end of first VALUES tuple. # Note: need to ignore ')' in strings. for idx, char in enumerate(values[1:]): if char == "'" and prev_char != '\\': skip_in_str = not skip_in_str elif char == ')' and not skip_in_str: found = idx + 2 # 1 + 1 (skip first char + need to check next). break prev_char = char # Check if there are more values/options after the first closing bracket. if len(values[found:]) > 1: return insert_stmt, None # Return original statement (not supported). return insert_values[0].strip(), values def _get_column_metadata(tbl_class, table_col_list): """Get the column metadata from the list of columns. tbl_class[in] Class instance for table table_col_list[in] List of table columns for all tables """ for tbl_col_def in table_col_list: if tbl_col_def[0] == tbl_class.q_tbl_name: tbl_class.get_column_metadata(tbl_col_def[1]) return True return False def multiprocess_file_import_task(import_file_task): """Multiprocess import file method. This method wraps the import_file method to allow its concurrent execution by a pool of processes. import_file_task[in] dictionary of values required by a process to perform the file import task, namely: {'srv_con': , 'file_name': , 'options': , } """ # Get input values to execute task. srv_con_values = import_file_task.get('srv_con') file_name = import_file_task.get('file_name') options = import_file_task.get('options') # Execute import file task. # NOTE: Must handle any exception here, because worker processes will not # propagate them to the main process. try: import_file(srv_con_values, file_name, options) except UtilError: _, err, _ = sys.exc_info() print("ERROR: {0}".format(err.errmsg)) def import_file(dest_val, file_name, options): """Import a file This method reads a file and, if needed, transforms the file into discrete SQL statements for execution on the destination server. It accepts any of the formal structured files produced by the mysqlexport utility including formats SQL, CSV, TAB, GRID, and VERTICAL. It will read these files and skip or include the definitions or data as specified in the options. An error is raised for any conversion errors or errors while executing the statements. Users are highly encouraged to use the --dryrun option which will print the SQL statements without executing them. 
dest_val[in] a dictionary containing connection information for the destination including: (user, password, host, port, socket) file_name[in] name (and path) of the file to import options[in] a dictionary containing the options for the import: (skip_tables, skip_views, skip_triggers, skip_procs, skip_funcs, skip_events, skip_grants, skip_create, skip_data, no_header, display, format, and debug) Returns bool True = success, False = error """ def _process_definitions(statements, table_col_list, db_name, sql_mode): """Helper method to dig through the definitions for create statements """ # First, get the SQL strings sql_strs = _build_create_objects(obj_type, db_name, definitions, sql_mode) statements.extend(sql_strs) # Now, save the column list col_list = _build_col_metadata(obj_type, definitions) if len(col_list) > 0: table_col_list.extend(col_list) def _process_data(tbl_name, statements, columns, table_col_list, table_rows, skip_blobs, use_columns_names=False): """Process data: If there is data here, build bulk inserts First, create table reference, then call insert_rows() """ tbl = Table(destination, tbl_name) # Need to check to see if table exists! if tbl.exists(): columns_defn = None if use_columns_names: # Get columns definitions res = tbl.server.exec_query("explain {0}".format(tbl_name)) # Only add selected columns columns_defn = [row for row in res if row[0] in columns] # Sort by selected columns definitions columns_defn.sort(key=lambda item: columns.index(item[0])) tbl.get_column_metadata(columns_defn) col_meta = True elif len(table_col_list) > 0: col_meta = _get_column_metadata(tbl, table_col_list) else: fix_cols = [(tbl.tbl_name, columns)] col_meta = _get_column_metadata(tbl, fix_cols) if not col_meta: raise UtilError("Cannot build bulk insert statements without " "the table definition.") columns_names = columns[:] if use_columns_names else None ins_strs = tbl.make_bulk_insert(table_rows, tbl.q_db_name, columns_names, skip_blobs=skip_blobs) if len(ins_strs[0]) > 0: statements.extend(ins_strs[0]) # If we have BLOB statements, lets put them in a list together, to # distinguish them from normal statements and prevent them from being # bundled together later in the _exec_statements function. 
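# Editor's note: the resulting 'statements' list mixes plain strings with
# nested lists; a nested list groups the BLOB statements for one table so
# that _exec_statements() runs them individually instead of folding them
# into a bulk INSERT.  Shape, with illustrative values:
#
#   statements = [
#       "INSERT INTO `db1`.`t1` (a, b) VALUES (1, 'x')",
#       ["UPDATE `db1`.`t1` SET c = <blob data> WHERE a = 1"],  # BLOB group
#   ]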
if len(ins_strs[1]) > 0 and not skip_blobs: statements.extend([ins_strs[1]]) # Gather options fmt = options.get("format", "sql") no_headers = options.get("no_headers", False) quiet = options.get("quiet", False) import_type = options.get("import_type", "definitions") single = options.get("single", True) dryrun = options.get("dryrun", False) do_drop = options.get("do_drop", False) skip_blobs = options.get("skip_blobs", False) skip_gtid = options.get("skip_gtid", False) # Attempt to connect to the destination server conn_options = { 'quiet': quiet, 'version': "5.1.30", } servers = connect_servers(dest_val, None, conn_options) destination = servers[0] # Check storage engines check_engine_options(destination, options.get("new_engine", None), options.get("def_engine", None), False, options.get("quiet", False)) if not quiet: if import_type == "both": text = "definitions and data" else: text = import_type print("# Importing {0} from {1}.".format(text, file_name)) # Setup variables we will need skip_header = not no_headers if fmt == "sql": skip_header = False get_db = True check_privileges = True db_name = None file_h = open(file_name) columns = [] read_columns = False has_data = False use_columns_names = False table_rows = [] obj_type = "" definitions = [] statements = [] table_col_list = [] tbl_name = "" skip_rpl = options.get("skip_rpl", False) gtid_command_found = False supports_gtid = servers[0].supports_gtid() == 'ON' skip_gtid_warning_printed = False gtid_version_checked = False sql_mode = destination.select_variable("SQL_MODE") if fmt == "raw_csv": # Use the first row as columns read_columns = True # Use columns names in INSERT statement use_columns_names = True table = options.get("table", None) (db_name_part, tbl_name_part) = parse_object_name(table, sql_mode) # Work with quoted objects db_name = (db_name_part if is_quoted_with_backticks(db_name_part, sql_mode) else quote_with_backticks(db_name_part, sql_mode)) tbl_name = (tbl_name_part if is_quoted_with_backticks(tbl_name_part, sql_mode) else quote_with_backticks(tbl_name_part, sql_mode)) tbl_name = ".".join([db_name, tbl_name]) # Check database existence and permissions dest_db = Database(destination, db_name) if not dest_db.exists(): raise UtilDBError( "The database does not exist: {0}".format(db_name) ) # Check user permissions for write dest_db.check_write_access(dest_val['user'], dest_val['host'], options) check_privileges = False # No need to check privileges again. 
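# Editor's note: a compact illustration of the --format=raw_csv name
# handling above.  It assumes a plain 'db.tbl' input and uses a simplified
# quoting rule in place of parse_object_name/quote_with_backticks, which
# additionally handle sql_mode and backtick-quoted names containing dots.
# The function name is hypothetical.
def _qualify_table_sketch(table):
    """Return a backtick-quoted 'db.tbl' string (simplified sketch)."""
    db, _, tbl = table.partition('.')
    def _quote(ident):
        if ident.startswith('`') and ident.endswith('`'):
            return ident  # already quoted
        return '`{0}`'.format(ident.replace('`', '``'))
    return '.'.join([_quote(db), _quote(tbl)])
# Example: _qualify_table_sketch('db1.t1') returns '`db1`.`t1`'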
# Check table existence tbl = Table(destination, tbl_name) if not tbl.exists(): raise UtilDBError("The table does not exist: {0}".format(table)) # Read the file one object/definition group at a time databases = [] for row in read_next(file_h, fmt): # Check if --format=raw_csv if fmt == "raw_csv": if read_columns: # Use the first row as columns names columns = row[:] read_columns = False continue if single: statements.append(_build_insert_data(columns, tbl_name, row)) else: table_rows.append(row) has_data = True continue # Check for replication command if row[0] == "RPL_COMMAND": if not skip_rpl: statements.append(row[1]) continue if row[0] == "GTID_COMMAND": gtid_command_found = True if not supports_gtid: # only display warning once if not skip_gtid_warning_printed: print _GTID_SKIP_WARNING skip_gtid_warning_printed = True elif not skip_gtid: if not gtid_version_checked: gtid_version_checked = True # Check GTID version for complete feature support servers[0].check_gtid_version() # Check the gtid_purged value too servers[0].check_gtid_executed("import") statements.append(row[1]) continue # Check for basic command if row[0] == "BASIC_COMMAND": if import_type != "data" or "FOREIGN_KEY_CHECKS" in row[1].upper(): # Process existing data rows to keep execution order. if len(table_rows) > 0: _process_data(tbl_name, statements, columns, table_col_list, table_rows, skip_blobs, use_columns_names) table_rows = [] # Now, add command to to the statements list. statements.append(row[1]) continue # In the first pass, try to get the database name from the file if row[0] == "TABLE": db = _get_db(row) if db not in ["TABLE_SCHEMA", "TABLE_CATALOG"] and \ db not in databases: databases.append(db) get_db = True if get_db: if skip_header: skip_header = False else: db_name = _get_db(row) # quote db_name with backticks if needed if db_name and not is_quoted_with_backticks(db_name, sql_mode): db_name = quote_with_backticks(db_name, sql_mode) # No need to get the db_name when found. get_db = False if db_name else get_db if do_drop and import_type != "data": statements.append("DROP DATABASE IF EXISTS %s;" % db_name) if import_type != "data": # If has a CREATE DATABASE statement and the database # exists and the --drop-first option is not provided, # issue an error message if db_name and not do_drop and row[0] == "sql": dest_db = Database(destination, db_name) if dest_db.exists() and \ row[1].upper().startswith("CREATE DATABASE"): raise UtilDBError("The database {0} exists. " "Use --drop-first to drop the " "database before importing." "".format(db_name)) if not _skip_object("CREATE_DB", options) and \ not fmt == 'sql': statements.append("CREATE DATABASE %s;" % db_name) # This is the first time through the loop so we must # check user permissions on source for all databases if check_privileges and db_name: dest_db = Database(destination, db_name) # Make a dictionary of the options access_options = options.copy() dest_db.check_write_access(dest_val['user'], dest_val['host'], access_options) check_privileges = False # No need to check privileges again. 
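# Editor's note: rows yielded by read_next() arrive as (tag, payload) pairs
# and the loop dispatches on row[0].  Tags handled here include RPL_COMMAND,
# GTID_COMMAND, BASIC_COMMAND, the definition tags such as TABLE, BEGIN_DATA
# for data sections, and 'sql' for pre-formed statements.  Payloads below
# are illustrative only:
#
#   ('GTID_COMMAND', "SET @@GLOBAL.GTID_PURGED = '...'")
#   ('BASIC_COMMAND', 'SET FOREIGN_KEY_CHECKS=0')
#   ('BEGIN_DATA', 'db1.t1')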
# Now check to see if we want definitions, data, or both: if row[0] == "sql" or row[0] in _DEFINITION_LIST: if fmt != "sql" and len(row[1]) == 1: raise UtilError("Cannot read an import file generated with " "--display=NAMES") if import_type in ("definitions", "both"): if fmt == "sql": statements.append(row[1]) else: if obj_type == "": obj_type = row[0] if obj_type != row[0]: if len(definitions) > 0: _process_definitions(statements, table_col_list, db_name, sql_mode) obj_type = row[0] definitions = [] if not _skip_object(row[0], options): definitions.append(row[1]) else: # see if there are any definitions to process if len(definitions) > 0: _process_definitions(statements, table_col_list, db_name, sql_mode) definitions = [] if import_type in ("data", "both"): if _skip_object("DATA", options): continue # skip data elif fmt == "sql": statements.append(row[1]) has_data = True else: if row[0] == "BEGIN_DATA": # Start of table so first row is columns. if len(table_rows) > 0: _process_data(tbl_name, statements, columns, table_col_list, table_rows, skip_blobs) table_rows = [] read_columns = True tbl_name = row[1] if not is_quoted_with_backticks(tbl_name, sql_mode): db, _, tbl = tbl_name.partition('.') q_db = quote_with_backticks(db, sql_mode) q_tbl = quote_with_backticks(tbl, sql_mode) tbl_name = ".".join([q_db, q_tbl]) else: if read_columns: columns = row[1] read_columns = False else: if not single: # Convert 'NULL' to None to be correctly # handled internally data = [None if val == 'NULL' else val for val in row[1]] table_rows.append(data) has_data = True else: text = _build_insert_data(columns, tbl_name, row[1]) statements.append(text) has_data = True # Process remaining definitions if len(definitions) > 0: _process_definitions(statements, table_col_list, db_name, sql_mode) definitions = [] # Process remaining data rows if len(table_rows) > 0: _process_data(tbl_name, statements, columns, table_col_list, table_rows, skip_blobs, use_columns_names) elif import_type == "data" and not has_data: print("# WARNING: No data was found.") # Now process the statements _exec_statements(statements, destination, fmt, options, dryrun) file_h.close() # Check gtid process if supports_gtid and not gtid_command_found: print(_GTID_MISSING_WARNING) if not quiet: if options['multiprocess'] > 1: # Indicate processed file for multiprocessing. print("#...done. ({0})".format(file_name)) else: print("#...done.") return True mysql-utilities-1.6.4/mysql/utilities/command/failover_console.py0000644001577100752670000006546112747670311025107 0ustar pb2usercommon# # Copyright (c) 2010, 2015 Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This file contains the automatic failover console. It contains only the user interface code for the automatic failover feature for replication. 
""" import logging import os import sys import time import struct from mysql.utilities.exception import UtilRplError from mysql.utilities.common.format import format_tabular_list, print_list _CONSOLE_HEADER = "MySQL Replication Failover Utility" _CONSOLE_FOOTER = "Q-quit R-refresh H-health G-GTID Lists U-UUIDs" _CONSOLE_FOOTER_NO_KEYBOARD = "Press CTRL+C to quit" _COMMAND_KEYS = {'\x1b[A': 'ARROW_UP', '\x1b[B': 'ARROW_DN'} # Minimum number of rows needed to display screen _MINIMUM_ROWS = 15 _HEALTH_LIST = "Replication Health Status" _MASTER_GTID_LIST = "Master GTID Executed Set" _MASTER_GTID_COLS = ['gtid'] _GTID_LISTS = ["Transactions executed on the servers:", "Transactions purged from the servers:", "Transactions owned by another server:"] _UUID_LIST = "UUIDs" _LOG_LIST = "Log File" _GEN_UUID_COLS = ['host', 'port', 'role', 'uuid'] _GEN_GTID_COLS = ['host', 'port', 'role', 'gtid'] _DATE_LEN = 22 _DROP_FC_TABLE = "DROP TABLE IF EXISTS mysql.failover_console" _CREATE_FC_TABLE = ("CREATE TABLE IF NOT EXISTS mysql.failover_console " "(host char(255), port char(10))") _SELECT_FC_TABLE = ("SELECT * FROM mysql.failover_console WHERE host = '%s' " "AND port = '%s'") _INSERT_FC_TABLE = "INSERT INTO mysql.failover_console VALUES ('%s', '%s')" _DELETE_FC_TABLE = ("DELETE FROM mysql.failover_console WHERE host = '%s' " "AND port = '%s'") # Idle time (in seconds) for polling user input to avoid high CPU usage. _IDLE_TIME_INPUT_POLLING = 0.01 # 10 ms # Try to import the windows getch() if it fails, we're on Posix so define # a custom getch() method to return keys. try: # Win32 from msvcrt import getch, kbhit # pylint: disable=F0401 except ImportError: # UNIX/Posix import termios from select import select def getch(): """Make a get character keyboard method for Posix machines. """ fd = sys.stdin.fileno() old = termios.tcgetattr(fd) new = termios.tcgetattr(fd) new[3] = new[3] & ~termios.ICANON & ~termios.ECHO new[6][termios.VMIN] = 1 new[6][termios.VTIME] = 0 termios.tcsetattr(fd, termios.TCSANOW, new) key = None try: key = os.read(fd, 4) finally: termios.tcsetattr(fd, termios.TCSAFLUSH, old) return key def kbhit(): """Make a keyboard hit method for Posix machines. """ # Use a timeout != 0 to avoid 100% CPU usage for polling user input. return select([sys.stdin], [], [], _IDLE_TIME_INPUT_POLLING) == ([sys.stdin], [], []) def get_terminal_size(): """Return the size in columns, rows for terminal window This method will attempt to determine the current terminal window size. If it cannot, it returns the default of (80, 25) = 80 characters on a line and 25 lines. Returns tuple - (x, y) = max colum (# chars), max rows """ default = (80, 25) try: if os.name == "posix": import fcntl import termios y, x = 0, 1 packed_info = fcntl.ioctl(0, termios.TIOCGWINSZ, struct.pack('HHHH', 0, 0, 0, 0)) wininfo = struct.unpack('HHHH', packed_info) return (wininfo[x], wininfo[y]) else: from ctypes import windll, create_string_buffer # -11 == stdout handle = windll.kernel32.GetStdHandle(-11) strbuff = create_string_buffer(22) windll.kernel32.GetConsoleScreenBufferInfo(handle, strbuff) left, top, right, bottom = 5, 6, 7, 8 wininfo = struct.unpack("hhhhHhhhhhh", strbuff) x = wininfo[right] - wininfo[left] + 1 y = wininfo[bottom] - wininfo[top] + 1 return (x, y) except: pass # silence! just return default on error. return default class FailoverConsole(object): """Automatic Failover Console This class implements a basic, text screen console for displaying information about the master and the replication health for the topology. 
The interface supports these commands: - H = show replication health - G = toggle through GTID lists (GTID_EXECUTED, GTID_PURGED, GTID_OWNED) - U = show UUIDs of servers - R = refresh screen - L = (iff --log specified) show log contents - Q = quit the console """ def __init__(self, master, get_health_data, get_gtid_data, get_uuid_data, options): """Constructor The constructor requires the caller to specify a master as a Master class instance, and method pointers for getting health, gtid, and uuid information. An options dictionary is used to define overall behavior of the class methods. master[in] a Master class instance get_health_data[in] method pointer to health data method get_gtid_data[in] method pointer to gtid data method get_uuid_data[in] method pointer to uuid data method options[in] option dictionary to include interval time in seconds for interval loop, default = 15 failover_mode failover mode (used for reporting only), default = 'auto' """ self.interval = int(options.get("interval", 15)) self.pingtime = options.get("pingtime", 3) self.mode = options.get("failover_mode", "auto") self.logging = options.get("logging", False) self.log_file = options.get("log_file", None) # If the option --no-keyboard is provided, the menu will be disabled # and any keyboard request will be ignored. self.no_keyboard = options.get("no_keyboard", False) self.alarm = time.time() + self.interval self.gtid_list = -1 self.scroll_size = 0 self.start_list = 0 self.end_list = 0 self.stop_list = 0 self.rows_printed = 0 self.max_cols = 80 self.max_rows = 24 self.list_data = None self.comment = _HEALTH_LIST self.scroll_on = False self.old_mode = None self.master_gtids = [] # Dictionary that holds the current warning messages self.warnings_dic = {} # Callback methods for reading data self.master = master self.get_health_data = get_health_data self.get_gtid_data = get_gtid_data self.get_uuid_data = get_uuid_data self.report_mode = 'H' self._reset_screen_size() def register_instance(self, clear=False, register=True): """Register the console as running on the master. This method will attempt to register the console as running against the master for failover modes auto or elect. If another console is already registered, this instance becomes blocked, resulting in the mode changing to 'fail', and failover will not occur when this instance of the console detects a failure. clear[in] if True, clear the sentinel database entries on the master. Default is False. register[in] if True, register the console on the master. If False, unregister the console on the master. Default is True. Returns string - new mode if changed """ # We cannot check disconnected masters and do not need to check if # we are doing a simple fail mode. if self.master is None or self.mode == 'fail': return self.mode # Turn binary log off first self.master.toggle_binlog("DISABLE") host_port = (self.master.host, self.master.port) # Drop the table if specified if clear: self.master.exec_query(_DROP_FC_TABLE) # Register the console if register: res = self.master.exec_query(_CREATE_FC_TABLE) res = self.master.exec_query(_SELECT_FC_TABLE % host_port) # COMMIT to close session before enabling binlog. self.master.commit() if res != []: # Someone beat us there. Drat. self.old_mode = self.mode self.mode = 'fail' else: # We're first! Yippee.
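# Editor's note: condensed, the registration handshake above issues the
# statements below (host/port values illustrative), with binary logging
# toggled off around them via toggle_binlog() so the sentinel table does
# not replicate to the slaves:
#
#   CREATE TABLE IF NOT EXISTS mysql.failover_console
#       (host char(255), port char(10))
#   SELECT * FROM mysql.failover_console
#       WHERE host = 'host1' AND port = '3306'
#   -- row found -> another console is registered; mode becomes 'fail'
#   -- no row    -> INSERT INTO mysql.failover_console
#                       VALUES ('host1', '3306')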
res = self.master.exec_query(_INSERT_FC_TABLE % host_port) # Unregister the console if our mode was changed elif self.old_mode != self.mode: res = self.master.exec_query(_DELETE_FC_TABLE % host_port) # Turn binary log on self.master.toggle_binlog("ENABLE") return self.mode def unregister_slaves(self, topology): """Unregister the daemon as running on the slaves. This method will unregister the daemon that was previously registered on the slaves, for failover modes auto or elect. """ if self.master is None or self.mode == 'fail': return for slave_dict in topology.slaves: slave_instance = slave_dict["instance"] # Skip unreachable/not connected slaves. if slave_instance and slave_instance.is_alive(): # Turn binary log off first slave_instance.toggle_binlog("DISABLE") # Drop failover instance registration table. slave_instance.exec_query(_DROP_FC_TABLE) # Turn binary log on slave_instance.toggle_binlog("ENABLE") def _reset_interval(self, interval=15): """Reset the interval timing """ self.interval = interval self.alarm = self.interval + time.time() def _reset_screen_size(self): """Recalculate the screen size """ self.max_cols, self.max_rows = get_terminal_size() if self.max_rows < _MINIMUM_ROWS: self.max_rows = _MINIMUM_ROWS def _format_gtid_data(self): """Get the formatted GTID data This method sets the member list_data to the GTID list to populate the list. A subsequent call to _print_list() displays the new list. """ rows = [] # Get GTID lists self.gtid_list += 1 if self.gtid_list > 3: self.gtid_list = 0 if self.gtid_list == 0 and self.master_gtids: self.comment = _MASTER_GTID_LIST rows = self.master_gtids elif self.get_gtid_data: try: gtid_data = self.get_gtid_data() except Exception as err: raise UtilRplError("Cannot get GTID data: {0}".format(err)) self.comment = _GTID_LISTS[self.gtid_list - 1] rows = gtid_data[self.gtid_list - 1] self.start_list = 0 self.end_list = len(rows) self.report_mode = 'G' if self.gtid_list == 0: return (_MASTER_GTID_COLS, rows) else: return (_GEN_GTID_COLS, rows) def _format_health_data(self): """Get the formatted health data This method sets the member list_data to the health list to populate the list. A subsequent call to _print_list() displays the new list. """ # Get health information if self.get_health_data is not None: try: health_data = self.get_health_data() except Exception as err: raise UtilRplError("Cannot get health data: {0}".format(err)) self.start_list = 0 self.end_list = len(health_data[1]) self.report_mode = 'H' return health_data return ([], []) def _format_uuid_data(self): """Get the formatted UUID data This method sets the member list_data to the UUID list to populate the list. A subsequent call to _print_list() displays the new list. """ rows = [] # Get UUID information if self.get_uuid_data is not None: self.comment = _UUID_LIST try: rows = self.get_uuid_data() except Exception as err: raise UtilRplError("Cannot get UUID data: {0}".format(err)) self.start_list = 0 self.end_list = len(rows) self.report_mode = 'U' return (_GEN_UUID_COLS, rows) def _format_log_entries(self): """Get the log data if logging is on This method sets the member list_data to the log entries to populate the list if logging is enables. A subsequent call to _print_list() displays the new list. 
""" rows = [] cols = ["Date", "Entry"] if self.logging and self.log_file is not None: self.comment = _LOG_LIST log = open(self.log_file, "r") for row in log.readlines(): rows.append( (row[0:_DATE_LEN], row[_DATE_LEN + 1:].strip('\n'))) log.close() self.start_list = 0 self.end_list = len(rows) self.report_mode = 'L' return(cols, rows) def _do_command(self, key): """Execute the user command representing the key pressed This method executes the command based on the key pressed. Commands recognized include show health, toggle through GTID lists, show UUIDs, and scroll list UP/DOWN. The method also checks for resize of the terminal window for nicer, automatic list resize. key[in] key pressed by user Note: Invalid keys are ignored. """ # We check for screen resize here self.max_cols, self.max_rows = get_terminal_size() # Reset the GTID list counter if key not in ['g', 'G']: self.gtid_list = -1 # Refresh if key in ['r', 'R']: self._refresh() # Show GTIDs elif key in ['g', 'G']: self.list_data = self._format_gtid_data() self._print_list() # Show health report elif key in ['h', 'H']: self.list_data = self._format_health_data() self._print_list() elif key in ['u', 'U']: self.list_data = self._format_uuid_data() self._print_list() elif key in ['l', 'L']: if self.logging: self.list_data = self._format_log_entries() self._print_list() elif key in _COMMAND_KEYS: self._scroll(key) def _wait_for_interval(self): """Wait for the time interval to expire This method issues a timing loop to wait for the specified interval to expire or quit if the user presses 'q' or 'Q'. The method passes all other keyboard requests to the _do_command() method for processing. If the interval expires, the method returns None. If the user presses a key, the method returns the numeric key number. Returns - None or int (see above) """ # If on *nix systems, set the terminal IO sys to not echo if not self.no_keyboard and os.name == "posix": import tty import termios old_settings = termios.tcgetattr(sys.stdin) tty.setcbreak(sys.stdin.fileno()) key = None done = False try: # Loop for interval in seconds while detecting keypress while not done: done = self.alarm <= time.time() if not self.no_keyboard and kbhit() and not done: key = getch() done = True if os.name != "posix": # On Windows wait a few ms to avoid 100% CPU usage for # polling input (handled in kbhit() for posix systems). time.sleep(_IDLE_TIME_INPUT_POLLING) finally: # Ensure terminal IO sys is reset to older state. if not self.no_keyboard and os.name == "posix": termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_settings) return key def clear(self): """Clear the screen This method uses a platform specific terminal screen clear to simulate a clear of the console. """ if os.name == "posix": os.system("clear") else: os.system("cls") self.rows_printed = 0 def _print_header(self): """Display header """ print _CONSOLE_HEADER next_interval = time.ctime(self.alarm) print "Failover Mode =", self.mode, " Next Interval =", \ next_interval if self.old_mode is not None and self.old_mode != self.mode: print print "NOTICE: Failover mode changed to fail due to another" print " instance of the console running against master." self.rows_printed += 2 self.max_rows -= 3 print self.rows_printed += 4 def _print_master_status(self): """Display the master information This method displays the master information from SHOW MASTER STATUS. """ # If no master present, don't print anything. 
if self.master is None: return try: status = self.master.get_status()[0] if self.logging: logging.info("Master status: binlog: {0}, position:{1}" "".format(status[0], status[1])) except Exception as err: raise UtilRplError("Cannot get master status: {0}".format(err)) print "Master Information" print "------------------" cols = ("Binary Log File", "Position", "Binlog_Do_DB", "Binlog_Ignore_DB") fmt_opts = { "print_header": True, "separator": None, "quiet": True, "print_footer": False, } logfile = status[0][0:20] if len(status[0]) > 20 else status[0] rows = [(logfile, status[1], status[2], status[3])] format_tabular_list(sys.stdout, cols, rows, fmt_opts) # Display gtid executed set self.master_gtids = [] for gtid in status[4].split("\n"): if len(gtid): # Add each GTID to a tuple to match the required format to # print the full GRID list correctly. self.master_gtids.append((gtid.strip(","),)) print "\nGTID Executed Set" try: print self.master_gtids[0][0], except IndexError: print "None", if len(self.master_gtids) > 1: print "[...]" else: print print self.rows_printed += 7 def _print_warnings(self): """Print current warning messages This method displays current warning messages if they exist. """ # Only do something if warnings exist. if self.warnings_dic: for msg in self.warnings_dic.itervalues(): print("WARNING: {0}".format(msg)) self.rows_printed += 1 def add_warning(self, warning_key, warning_msg): """Add a warning message to the current dictionary of warnings. warning_key[in] key associated with the warning message to add. warning_msg[in] warning message to add to the current dictionary of warnings. """ self.warnings_dic[warning_key] = warning_msg def del_warning(self, warning_key): """Remove a warning message from the current dictionary of warnings. warning_key[in] key associated with the warning message to remove. """ if warning_key in self.warnings_dic: del self.warnings_dic[warning_key] def _scroll(self, key): """Scroll the list view This method recalculates the start_list and end_list member variables depending on the key pressed. UP moves the list up (lower row indexes) and DOWN moves the list down (higher row indexes). It calls _print_list() at the end to redraw the screen. key[in] key pressed by user Note: Invalid keys are ignored. """ if _COMMAND_KEYS[key] == 'ARROW_UP': if self.start_list > 0: self.start_list -= self.scroll_size if self.start_list < 0: self.start_list = 0 self.stop_list = self.scroll_size else: return # Cannot scroll up any further elif _COMMAND_KEYS[key] == 'ARROW_DN': if self.end_list < len(self.list_data[1]): self.start_list = self.end_list self.end_list += self.scroll_size if self.end_list > len(self.list_data[1]): self.end_list = len(self.list_data[1]) else: return # Cannot scroll down any further else: return # Not a valid scroll key self._print_list(True) def _print_list(self, refresh=True, comment=None): """Display the list information This method displays the list information using the start_list and end_list member variables to control the view of the data. This permits users to scroll through the data should it be longer than the space permitted on the screen. 
""" # If no data to print, exit if self.list_data is None: return if refresh: self.clear() self._print_header() self._print_master_status() # Print list name if comment is None: comment = self.comment print comment self.rows_printed += 1 # Print the list in the remaining space footer_len = 2 remaining_rows = self.max_rows - self.rows_printed - 4 - footer_len if len(self.list_data[1][self.start_list:self.end_list]) > \ remaining_rows: rows = self.list_data[1][self.start_list:self.start_list + remaining_rows] self.end_list = self.start_list + remaining_rows self.scroll_on = True else: if len(self.list_data[1]) == self.end_list and \ self.start_list == 0: self.scroll_on = False rows = self.list_data[1][self.start_list:self.end_list] if len(rows) > 0: self.scroll_size = len(rows) print_list(sys.stdout, 'GRID', self.list_data[0], rows) self.rows_printed += self.scroll_size + 4 else: print "0 Rows Found." self.rows_printed += 1 if refresh: self._print_footer(self.scroll_on) def _print_footer(self, scroll=False): """Print the footer This method prints the footer for the console consisting of the user commands permitted. scroll[in] if True, display scroll commands """ # Print blank lines fill screen i = self.rows_printed while i < self.max_rows - 2: print i += 1 # Show bottom menu options footer = [] if self.no_keyboard: # No support for keyboard, disable menu footer.append(_CONSOLE_FOOTER_NO_KEYBOARD) else: footer.append(_CONSOLE_FOOTER) # If logging enabled, show command if self.logging: footer.append("L-log entries") if scroll: footer.append("Up|Down-scroll") print(" ".join(footer)) self.rows_printed = self.max_rows def _refresh(self): """Refresh the console This method redraws the console resetting screen size if the command/terminal window was resized since last action. """ self.clear() self._reset_screen_size() self._print_header() self._print_master_status() self._print_warnings() # refresh health if already displayed if self.report_mode == 'H': self.list_data = self._format_health_data() self._print_list(False) self._print_footer(self.scroll_on) def _reconnect_master(self, pingtime=3): """Tries to reconnect to the master This method tries to reconnect to the master and if connection fails after 3 attemps, returns False. """ if self.master and self.master.is_alive(): return True is_connected = False i = 0 while i < 3: try: self.master.connect() is_connected = True break except: pass time.sleep(pingtime) i += 1 return is_connected def display_console(self): """Display the failover console This method presents the information for the failover console. Since there is no UI module in use, it clears the screen and redraws the data again. It uses the method specified in the constructor for getting and refreshing the data. 
Returns bool - True = user exit no errors, False = errors """ self._reset_interval(self.interval) # Get the data for first printing of the screen if self.list_data is None: self.list_data = self.get_health_data() self.start_list = 0 self.end_list = len(self.list_data[1]) self.gtid_list = -1 # Reset the GTID list counter # Draw the screen self._refresh() # Wait for a key press or the interval to expire done = False while not done: # Disconnect the master while waiting for the interval to expire self.master.disconnect() # Wait for the interval to expire key = self._wait_for_interval() # Reconnect to the master self._reconnect_master(self.pingtime) if key is None: return None if key in ['Q', 'q']: return True else: # Refresh health on interval if self.report_mode == 'H': self.list_data = self._format_health_data() self._print_list() self._do_command(key) return False mysql-utilities-1.6.4/mysql/utilities/command/dbcopy.py0000755001577100752670000004135312747670311023033 0ustar pb2usercommon# # Copyright (c) 2010, 2016, Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This file contains the copy database operation which ensures a database is exactly the same among two servers. """ import sys from mysql.utilities.exception import UtilError from mysql.utilities.common.database import Database from mysql.utilities.common.options import check_engine_options from mysql.utilities.common.server import connect_servers from mysql.utilities.command.dbexport import (get_change_master_command, get_copy_lock, get_gtid_commands) _RPL_COMMANDS, _RPL_FILE = 0, 1 _GTID_WARNING = ("# WARNING: The server supports GTIDs but you have elected " "to skip exexcuting the GTID_EXECUTED statement. Please " "refer to the MySQL online reference manual for more " "information about how to handle GTID enabled servers with " "backup and restore operations.") _GTID_BACKUP_WARNING = ("# WARNING: A partial copy from a server that has " "GTIDs enabled will by default include the GTIDs of " "all transactions, even those that changed suppressed " "parts of the database. If you don't want to generate " "the GTID statement, use the --skip-gtid option. To " "export all databases, use the --all option and do " "not specify a list of databases.") _NON_GTID_WARNING = ("# WARNING: The %s server does not support GTIDs yet the " "%s server does support GTIDs. To suppress this warning, " "use the --skip-gtid option when copying %s a non-GTID " "enabled server.") _CHECK_BLOBS_NOT_NULL = """ SELECT DISTINCT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE (COLUMN_TYPE LIKE '%BLOB%' OR COLUMN_TYPE LIKE '%TEXT%') AND IS_NULLABLE = 'NO' AND TABLE_SCHEMA IN ({0}); """ _BLOBS_NOT_NULL_ERROR = ("ERROR: The following tables have blob fields set to " "NOT NULL. The copy operation cannot proceed unless " "the blob fields permit NULL values. 
To copy data " "with NOT NULL blob fields, first remove the NOT " "NULL restriction, copy the data, then add the NOT " "NULL restriction using ALTER TABLE statements.") def check_blobs_not_null(server, db_list): """ Check for any blob fields that have NOT null set. Prints error message if any are encountered. server[in] Server class instance db_list[in] List of databases to be copied in form (src, dst) Returns: bool - True = blobs with NOT NULL, False = none found """ if not db_list: return False db_name_include = "" for db in db_list: if not db_name_include == "": db_name_include = "{0},".format(db_name_include) db_name_include = "{0}'{1}'".format(db_name_include, db[0]) res = server.exec_query(_CHECK_BLOBS_NOT_NULL.format(db_name_include)) if res: print(_BLOBS_NOT_NULL_ERROR) for row in res: print(" {0}.{1} Column {2}".format(row[0], row[1], row[2])) print return True return False def _copy_objects(source, destination, db_list, options, show_message=True, do_create=True): """Copy objects for a list of databases This method loops through a list of databases copying the objects as controlled by the skip options. source[in] Server class instance for source destination[in] Server class instance for destination options[in] copy options show_message[in] if True, display copy message Default = True do_create[in] if True, execute create statement for database Default = True """ # Copy objects for db_name in db_list: if show_message: # Display copy message if not options.get('quiet', False): msg = "# Copying database %s " % db_name[0] if db_name[1]: msg += "renamed as %s" % (db_name[1]) print msg # Get a Database class instance db = Database(source, db_name[0], options) # Perform the copy db.init() db.copy_objects(db_name[1], options, destination, options.get("threads", False), do_create) def multiprocess_db_copy_task(copy_db_task): """Multiprocess copy database method. This method wraps the copy_db method to allow its concurrent execution by a pool of processes. copy_db_task[in] dictionary of values required by a process to perform the database copy task, namely: {'source_srv': , 'dest_srv': , 'db_list': , 'options': , } """ # Get input values to execute task. source_srv = copy_db_task.get('source_srv') dest_srv = copy_db_task.get('dest_srv') db_list = copy_db_task.get('db_list') options = copy_db_task.get('options') # Execute copy databases task. # NOTE: Must handle any exception here, because worker processes will not # propagate them to the main process. try: copy_db(source_srv, dest_srv, db_list, options) except UtilError: _, err, _ = sys.exc_info() print("ERROR: {0}".format(err.errmsg)) def copy_db(src_val, dest_val, db_list, options): """Copy a database This method will copy a database and all of its objects and data from one server (source) to another (destination). Options are available to selectively ignore each type of object. The do_drop parameter is used to permit the copy to overwrite an existing destination database (default is to not overwrite). 
src_val[in] a dictionary containing connection information for the source including: (user, password, host, port, socket) dest_val[in] a dictionary containing connection information for the destination including: (user, password, host, port, socket) options[in] a dictionary containing the options for the copy: (skip_tables, skip_views, skip_triggers, skip_procs, skip_funcs, skip_events, skip_grants, skip_create, skip_data, verbose, do_drop, quiet, connections, debug, exclude_names, exclude_patterns) Notes: do_drop - if True, the database on the destination will be dropped if it exists (default is False) quiet - do not print any information during operation (default is False) Returns bool True = success, False = error """ verbose = options.get("verbose", False) quiet = options.get("quiet", False) do_drop = options.get("do_drop", False) skip_views = options.get("skip_views", False) skip_procs = options.get("skip_procs", False) skip_funcs = options.get("skip_funcs", False) skip_events = options.get("skip_events", False) skip_grants = options.get("skip_grants", False) skip_data = options.get("skip_data", False) skip_triggers = options.get("skip_triggers", False) skip_tables = options.get("skip_tables", False) skip_gtid = options.get("skip_gtid", False) locking = options.get("locking", "snapshot") conn_options = { 'quiet': quiet, 'version': "5.1.30", } servers = connect_servers(src_val, dest_val, conn_options) cloning = (src_val == dest_val) or dest_val is None source = servers[0] if cloning: destination = servers[0] else: destination = servers[1] # Test if SQL_MODE is 'NO_BACKSLASH_ESCAPES' in the destination server if destination.select_variable("SQL_MODE") == "NO_BACKSLASH_ESCAPES": print("# WARNING: The SQL_MODE in the destination server is " "'NO_BACKSLASH_ESCAPES', it will be changed temporarily " "for data insertion.") src_gtid = source.supports_gtid() == 'ON' dest_gtid = destination.supports_gtid() == 'ON'if destination else False # Get list of all databases from source if --all is specified. # Ignore system databases. if options.get("all", False): # The --all option is valid only if not cloning. if not cloning: if not quiet: print "# Including all databases." 
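# Editor's note: for orientation, db_list entries throughout copy_db() are
# (source_db, destination_db) pairs, where a destination of None means
# "keep the same name" (database names illustrative):
#
#   db_list = [('sales', None),          # copy as-is
#              ('sales', 'sales_bak')]   # copy under a new name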
rows = source.get_all_databases() for row in rows: db_list.append((row[0], None)) # Keep same name else: raise UtilError("Cannot copy all databases on the same server.") elif not skip_gtid and src_gtid: # Check to see if this is a full copy (complete backup) all_dbs = source.exec_query("SHOW DATABASES") dbs = [db[0] for db in db_list] for db in all_dbs: if db[0].upper() in ["MYSQL", "INFORMATION_SCHEMA", "PERFORMANCE_SCHEMA", "SYS"]: continue if not db[0] in dbs: print _GTID_BACKUP_WARNING break # Do error checking and preliminary work: # - Check user permissions on source and destination for all databases # - Check to see if executing on same server but same db name (error) # - Build list of tables to lock for copying data (if no skipping data) # - Check storage engine compatibility for db_name in db_list: source_db = Database(source, db_name[0]) if destination is None: destination = source if db_name[1] is None: db = db_name[0] else: db = db_name[1] dest_db = Database(destination, db) # Make a dictionary of the options access_options = { 'skip_views': skip_views, 'skip_procs': skip_procs, 'skip_funcs': skip_funcs, 'skip_grants': skip_grants, 'skip_events': skip_events, 'skip_triggers': skip_triggers, } source_db.check_read_access(src_val["user"], src_val["host"], access_options) # Make a dictionary containing the list of objects from source db source_objects = { "views": source_db.get_db_objects("VIEW", columns="full"), "procs": source_db.get_db_objects("PROCEDURE", columns="full"), "funcs": source_db.get_db_objects("FUNCTION", columns="full"), "events": source_db.get_db_objects("EVENT", columns="full"), "triggers": source_db.get_db_objects("TRIGGER", columns="full"), } dest_db.check_write_access(dest_val['user'], dest_val['host'], access_options, source_objects, do_drop) # Error is source db and destination db are the same and we're cloning if destination == source and db_name[0] == db_name[1]: raise UtilError("Destination database name is same as " "source - source = %s, destination = %s" % (db_name[0], db_name[1])) # Error is source database does not exist if not source_db.exists(): raise UtilError("Source database does not exist - %s" % db_name[0]) # Check storage engines check_engine_options(destination, options.get("new_engine", None), options.get("def_engine", None), False, options.get("quiet", False)) # Get replication commands if rpl_mode specified. 
# if --rpl specified, dump replication initial commands rpl_info = None # Turn off foreign keys if they were on at the start destination.disable_foreign_key_checks(True) # Get GTID commands if not skip_gtid: gtid_info = get_gtid_commands(source) if src_gtid and not dest_gtid: print _NON_GTID_WARNING % ("destination", "source", "to") elif not src_gtid and dest_gtid: print _NON_GTID_WARNING % ("source", "destination", "from") else: gtid_info = None if src_gtid and not cloning: print _GTID_WARNING # If cloning, turn off gtid generation if gtid_info and cloning: gtid_info = None # if GTIDs enabled, write the GTID commands if gtid_info and dest_gtid: # Check GTID version for complete feature support destination.check_gtid_version() # Check the gtid_purged value too destination.check_gtid_executed() for cmd in gtid_info[0]: print "# GTID operation:", cmd destination.exec_query(cmd, {'fetch': False, 'commit': False}) if options.get("rpl_mode", None): new_opts = options.copy() new_opts['multiline'] = False new_opts['strict'] = True rpl_info = get_change_master_command(src_val, new_opts) destination.exec_query("STOP SLAVE", {'fetch': False, 'commit': False}) # Copy (create) objects. # We need to delay trigger and events to after data is loaded new_opts = options.copy() new_opts['skip_triggers'] = True new_opts['skip_events'] = True # Get the table locks unless we are cloning with lock-all if not (cloning and locking == 'lock-all'): my_lock = get_copy_lock(source, db_list, options, True) _copy_objects(source, destination, db_list, new_opts) # If we are cloning, take the write locks prior to copying data if cloning and locking == 'lock-all': my_lock = get_copy_lock(source, db_list, options, True, cloning) # Copy tables data if not skip_data and not skip_tables: # Copy tables for db_name in db_list: # Get a Database class instance db = Database(source, db_name[0], options) # Perform the copy # Note: No longer use threads, use multiprocessing instead. db.init() db.copy_data(db_name[1], options, destination, connections=1, src_con_val=src_val, dest_con_val=dest_val) # if cloning with lock-all unlock here to avoid system table lock conflicts if cloning and locking == 'lock-all': my_lock.unlock() # Create triggers for all databases if not skip_triggers: new_opts = options.copy() new_opts['skip_tables'] = True new_opts['skip_views'] = True new_opts['skip_procs'] = True new_opts['skip_funcs'] = True new_opts['skip_events'] = True new_opts['skip_grants'] = True new_opts['skip_create'] = True _copy_objects(source, destination, db_list, new_opts, False, False) # Create events for all databases if not skip_events: new_opts = options.copy() new_opts['skip_tables'] = True new_opts['skip_views'] = True new_opts['skip_procs'] = True new_opts['skip_funcs'] = True new_opts['skip_triggers'] = True new_opts['skip_grants'] = True new_opts['skip_create'] = True _copy_objects(source, destination, db_list, new_opts, False, False) if not (cloning and locking == 'lock-all'): my_lock.unlock() # if GTIDs enabled, write the GTID-related commands if gtid_info and dest_gtid: print "# GTID operation:", gtid_info[1] destination.exec_query(gtid_info[1]) if options.get("rpl_mode", None): for cmd in rpl_info[_RPL_COMMANDS]: if cmd[0] == '#' and not quiet: print cmd else: if verbose: print cmd destination.exec_query(cmd) destination.exec_query("START SLAVE;") # Turn on foreign keys if they were on at the start destination.disable_foreign_key_checks(False) if not quiet: print "#...done." 
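    # ------------------------------------------------------------------
    # Illustrative sketch (not part of the original module): how the
    # multiprocess_db_copy_task() wrapper defined above is typically
    # driven from a multiprocessing pool, with one task dictionary per
    # database. The task keys follow its docstring; the pool size is a
    # hypothetical value.
    #
    #     import multiprocessing
    #
    #     copy_db_tasks = [
    #         {'source_srv': src_val, 'dest_srv': dest_val,
    #          'db_list': [(db_name, new_name)], 'options': options}
    #         for db_name, new_name in db_list
    #     ]
    #     pool = multiprocessing.Pool(processes=2)
    #     pool.map(multiprocess_db_copy_task, copy_db_tasks)
    #     pool.close()
    #     pool.join()
    # ------------------------------------------------------------------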
return True mysql-utilities-1.6.4/mysql/utilities/command/proc.py0000644001577100752670000002115312747670311022507 0ustar pb2usercommon# # Copyright (c) 2010, 2014, Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This file contains grep processing. """ import re import sys import mysql.connector from mysql.utilities.exception import EmptyResultError, FormatError from mysql.utilities.common.format import print_list from mysql.utilities.common.ip_parser import parse_connection from mysql.utilities.common.options import obj2sql from mysql.utilities.common.server import set_ssl_opts_in_connection_info KILL_QUERY, KILL_CONNECTION, PRINT_PROCESS = range(3) ID = "ID" USER = "USER" HOST = "HOST" DB = "DB" COMMAND = "COMMAND" TIME = "TIME" STATE = "STATE" INFO = "INFO" # # TODO : Can _spec and similar methods be shared for grep.py? # def _spec(info): """Create a server specification string from an info structure. """ result = "{user}:*@{host}:{port}".format(**info) if "unix_socket" in info: result += ":" + info["unix_socket"] return result _SELECT_PROC_FRM = """ SELECT Id, User, Host, Db, Command, Time, State, Info FROM INFORMATION_SCHEMA.PROCESSLIST{condition}""" def _make_select(matches, use_regexp, conditions): """Generate a SELECT statement for matching the processes. """ oper = 'REGEXP' if use_regexp else 'LIKE' for field, pattern in matches: conditions.append(" {0} {1} {2}" "".format(field, oper, obj2sql(pattern))) if len(conditions) > 0: condition = "\nWHERE\n" + "\n AND\n".join(conditions) else: condition = "" return _SELECT_PROC_FRM.format(condition=condition) # Map to map single-letter suffixes number of seconds _SECS = {'s': 1, 'm': 60, 'h': 3600, 'd': 24 * 3600, 'w': 7 * 24 * 3600} _INCORRECT_FORMAT_MSG = "'{0}' does not have correct format" def _make_age_cond(age): """Make age condition Accept an age description return a timedelta representing the age. 
    We allow the forms: hh:mm:ss, mm:ss, 4h3m, with suffixes d (days),
    w (weeks), h (hours), m (minutes), and s (seconds).

    age[in]            Age (time)

    Returns string - a TIME comparison condition for the equivalent
                     number of seconds (e.g. '4h3m' yields
                     " TIME >= 14580")
    """
    mobj = re.match(r"([+-])?(?:(?:(\d?\d):)?(\d?\d):)?(\d?\d)\Z", age)
    if mobj:
        sign, hrs, mins, secs = mobj.groups()
        if not hrs:
            hrs = 0
        if not mins:
            mins = 0
        seconds = int(secs) + 60 * (int(mins) + 60 * int(hrs))
        oper = "<=" if sign and sign == "-" else ">="
        return ' {0} {1} {2}'.format(TIME, oper, seconds)

    mobj = re.match(r"([+-])?(\d+[dwhms])+", age)
    if mobj:
        sign = None
        if mobj.group(1):
            sign = age[0]
            age = age[1:]
        seconds = 0
        periods = [x for x in re.split(r"(\d+[dwhms])", age)]
        if len(''.join(x[0::2])) > 0:  # pylint: disable=W0631
            raise FormatError(_INCORRECT_FORMAT_MSG.format(age))
        for period in periods[1::2]:
            seconds += int(period[0:-1]) * _SECS[period[-1:]]
        oper = "<=" if sign and sign == "-" else ">="
        return ' {0} {1} {2}'.format(TIME, oper, seconds)

    raise FormatError(_INCORRECT_FORMAT_MSG.format(age))


_KILL_BODY = """
DECLARE kill_done INT;
DECLARE kill_cursor CURSOR FOR
    {select}
OPEN kill_cursor;
BEGIN
    DECLARE id BIGINT;
    DECLARE EXIT HANDLER FOR NOT FOUND SET kill_done = 1;
    kill_loop: LOOP
        FETCH kill_cursor INTO id;
        KILL {kill} id;
    END LOOP kill_loop;
END;
CLOSE kill_cursor;"""

_KILL_PROCEDURE = """
CREATE PROCEDURE {name} ()
BEGIN{body}
END"""


class ProcessGrep(object):
    """Grep processing
    """
    def __init__(self, matches, actions=None, use_regexp=False, age=None):
        """Constructor

        matches[in]        matches identified
        actions[in]        actions to perform
        use_regexp[in]     if True, use regexp for compare
                           default = False
        age[in]            age in time, if provided
                           default = None
        """
        if actions is None:
            actions = []
        conds = [_make_age_cond(age)] if age else []
        self.__select = _make_select(matches, use_regexp, conds).strip()
        self.__actions = actions

    def sql(self, only_body=False):
        """Generate a SQL command for KILL

        This method generates the SQL statement for killing the matching
        processes. The KILL logic is wrapped in a CREATE PROCEDURE
        statement unless only_body is True, in which case only the body
        of that procedure is returned.

        only_body[in]      if True, limit the result to the body of the
                           procedure
                           default = False

        Returns string - SQL statement
        """
        params = {
            'select': "\n ".join(self.__select.split("\n")),
            'kill': 'CONNECTION' if KILL_CONNECTION in self.__actions
                    else 'QUERY',
        }
        if KILL_CONNECTION in self.__actions or KILL_QUERY in self.__actions:
            sql = _KILL_BODY.format(**params)
            if not only_body:
                sql = _KILL_PROCEDURE.format(
                    name="kill_processes",
                    body="\n ".join(sql.split("\n"))
                )
            return sql
        else:
            return self.__select

    def execute(self, connections, **kwrds):
        """Execute the search for processes, queries, or connections

        This method searches for processes, queries, or connections to
        either kill or display the matches for one or more servers.
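        For example (an illustrative sketch, not taken from the original
        sources; the connection string and match pattern below are
        hypothetical), the following would kill every query from users
        matching 'app%' that has been running for more than one hour:

            grep = ProcessGrep(matches=[(USER, "app%")],
                               actions=[KILL_QUERY], age="+1h")
            grep.execute(["root:secret@localhost:3306"])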
connections[in] list of connection parameters kwrds[in] dictionary of options output file stream to display information default = sys.stdout connector connector to use default = mysql.connector format format for display default = GRID """ output = kwrds.get('output', sys.stdout) connector = kwrds.get('connector', mysql.connector) fmt = kwrds.get('format', "grid") charset = kwrds.get('charset', None) ssl_opts = kwrds.get('ssl_opts', {}) headers = ("Connection", "Id", "User", "Host", "Db", "Command", "Time", "State", "Info") entries = [] # Build SQL statement for info in connections: conn = parse_connection(info) if not conn: msg = "'%s' is not a valid connection specifier" % (info,) raise FormatError(msg) if charset: conn['charset'] = charset info = conn if connector == mysql.connector: set_ssl_opts_in_connection_info(ssl_opts, info) connection = connector.connect(**info) if not charset: # If no charset provided, get it from the # "character_set_client" server variable. cursor = connection.cursor() cursor.execute("SHOW VARIABLES LIKE 'character_set_client'") res = cursor.fetchall() connection.set_charset_collation(charset=str(res[0][1])) cursor.close() cursor = connection.cursor() cursor.execute(self.__select) print_rows = [] cols = ["Id", "User", "Host", "db", "Command", "Time", "State", "Info"] for row in cursor: if (KILL_QUERY in self.__actions) or \ (KILL_CONNECTION in self.__actions): print_rows.append(row) cursor.execute("KILL {0}".format(row[0])) if PRINT_PROCESS in self.__actions: entries.append(tuple([_spec(info)] + list(row))) if print_rows: print "# The following KILL commands were executed:" print_list(output, fmt, cols, print_rows) # If output is None, nothing is printed if len(entries) > 0 and output: entries.sort(key=lambda fifth: fifth[5]) print_list(output, fmt, headers, entries) elif PRINT_PROCESS in self.__actions: raise EmptyResultError("No matches found") mysql-utilities-1.6.4/mysql/utilities/command/rpl_sync_check.py0000644001577100752670000000674512747670311024544 0ustar pb2usercommon# # Copyright (c) 2014, Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This file contains the command to check the data consistency in a replication topology. """ from mysql.utilities.common.rpl_sync import RPLSynchronizer def check_data_consistency(master_cnx_val, slaves_cnx_val, options, data_to_include=None, data_to_exclude=None, check_srv_versions=True): """ Check the data consistency of a replication topology. This function creates a replication synchronizer checker and checks the data consistency between the given list of servers. master_cnx_val[in] Dictionary with the connection values for the master. slaves_cnx_val[in] List of the dictionaries with the connection values for each slave. options[in] Dictionary of options (discover, verbosity, rpl_timeout, checksum_timeout, interval). 
data_to_include[in] Dictionary of data (set of tables) by database to check. data_to_exclude[in] Dictionary of data (set of tables) by database to exclude from the check. check_srv_versions[in] Flag indicating if the servers version check will be performed. By default True, meaning that differences between server versions will be reported. Returns the number of issues found during the consistency check. """ # Create replication synchronizer. rpl_sync = RPLSynchronizer(master_cnx_val, slaves_cnx_val, options) if check_srv_versions: # Check server versions and report differences. rpl_sync.check_server_versions() # Check GTID support, skipping slave with GTID disabled, and report # GTID executed differences between master and slaves. rpl_sync.check_gtid_sync() # Check data consistency and return the number of issues found. return rpl_sync.check_data_sync(options, data_to_include, data_to_exclude) def check_server_versions(master_cnx_val, slaves_cnx_val, options): """ Check the server versions of a replication topology. This method creates a replication synchronizer checker and compares the server versions of the given list of servers, reporting differences between them. master_cnx_val[in] Dictionary with the connection values for the master. slaves_cnx_val[in] List of the dictionaries with the connection values for each slave. options[in] Dictionary of options (discover, verbosity). """ # Create replication synchronizer. rpl_sync = RPLSynchronizer(master_cnx_val, slaves_cnx_val, options) # Check server versions and report differences. rpl_sync.check_server_versions() mysql-utilities-1.6.4/mysql/utilities/command/serverinfo.py0000644001577100752670000006143012747670311023730 0ustar pb2usercommon# # Copyright (c) 2011, 2016, Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This file contains the reporting mechanisms for reporting disk usage. 
""" import getpass import os import shlex import subprocess import sys import tempfile import time from collections import defaultdict, namedtuple from itertools import chain from mysql.connector.errorcode import (ER_ACCESS_DENIED_ERROR, CR_CONNECTION_ERROR, CR_CONN_HOST_ERROR) from mysql.utilities.exception import UtilError from mysql.utilities.common.format import print_list from mysql.utilities.common.ip_parser import parse_connection from mysql.utilities.common.tools import get_tool_path, get_mysqld_version from mysql.utilities.common.server import (connect_servers, get_connection_dictionary, get_local_servers, Server, test_connect) log_file_tuple = namedtuple('log_file_tuple', "log_name log_file log_file_size") _LOG_FILES_VARIABLES = { 'error log': log_file_tuple('log_error', None, 'log_error_file_size'), 'general log': log_file_tuple('general_log', 'general_log_file', 'general_log_file_size'), 'slow query log': log_file_tuple('slow_query_log', 'slow_query_log_file', 'slow_query_log_file_size') } _SERVER_VARIABLES = ['version', 'datadir', 'basedir', 'plugin_dir'] _COLUMNS = ['server', 'config_file', 'binary_log', 'binary_log_pos', 'relay_log', 'relay_log_pos'] _WARNING_TEMPLATE = ("Unable to get information about '{0}' size. Please " "check if the file '{1}' exists or if you have the " "necessary Operating System permissions to access it.") # Add the values from server variables to the _COLUMNS list _COLUMNS.extend(_SERVER_VARIABLES) # Retrieve column names from the _LOG_FILES_VARIABLES, filter the # None value, sort them alphabetically and add them to the _COLUMNS list _COLUMNS.extend(sorted( val for val in chain(*_LOG_FILES_VARIABLES.values()) if val is not None) ) # Used to get O(1) performance in checking if an item is already present # in _COLUMNS _COLUMNS_SET = set(_COLUMNS) def _get_binlog(server): """Retrieve binary log and binary log position server[in] Server instance Returns tuple (binary log, binary log position) """ binlog, binlog_pos = '', '' res = server.exec_query("SHOW MASTER STATUS") if res != [] and res is not None: binlog = res[0][0] binlog_pos = res[0][1] return binlog, binlog_pos def _get_relay_log(server): """Retrieve relay log and relay log position server[in] Server instance Returns tuple (relay log, relay log position) """ relay_log, relay_log_pos = '', '' res = server.exec_query("SHOW SLAVE STATUS") if res != [] and res is not None: relay_log = res[0][7] relay_log_pos = res[0][8] return relay_log, relay_log_pos def _server_info(server_val, get_defaults=False, options=None): """Show information about a running server This method gathers information from a running server. This information is returned as a tuple to be displayed to the user in a format specified. 
The information returned includes the following: * server connection information * version number of the server * data directory path * base directory path * plugin directory path * configuration file location and name * current binary log file * current binary log position * current relay log file * current relay log position server_val[in] the server connection values or a connected server get_defaults[in] if True, get the default settings for the server options[in] options for connecting to the server Return tuple - information about server """ if options is None: options = {} # Parse source connection values source_values = parse_connection(server_val, None, options) # Connect to the server conn_options = { 'version': "5.1.30", } servers = connect_servers(source_values, None, conn_options) server = servers[0] params_dict = defaultdict(str) # Initialize list of warnings params_dict['warnings'] = [] # Identify server by string: 'host:port[:socket]'. server_id = "{0}:{1}".format(source_values['host'], source_values['port']) if source_values.get('socket', None): server_id = "{0}:{1}".format(server_id, source_values.get('socket')) params_dict['server'] = server_id # Get _SERVER_VARIABLES values from the server for server_var in _SERVER_VARIABLES: res = server.show_server_variable(server_var) if res: params_dict[server_var] = res[0][1] else: raise UtilError("Unable to determine {0} of server '{1}'" ".".format(server_var, server_id)) # Verify if the server is a local server. server_is_local = server.is_alias('localhost') # Get _LOG_FILES_VARIABLES values from the server for msg, log_tpl in _LOG_FILES_VARIABLES.iteritems(): res = server.show_server_variable(log_tpl.log_name) if res: # Check if log is turned off params_dict[log_tpl.log_name] = res[0][1] # If logs are turned off, skip checking information about the file if res[0][1] in ('', 'OFF'): continue # Logging is enabled, so we can get get information about log_file # unless it is log_error because in that case we already have it. if log_tpl.log_file is not None: # if it is not log_error log_file = server.show_server_variable( log_tpl.log_file)[0][1] params_dict[log_tpl.log_file] = log_file else: # log error, so log_file_name is already on params_dict log_file = params_dict[log_tpl.log_name] # Size can only be obtained from the files of a local server. if not server_is_local: params_dict[log_tpl.log_file_size] = 'UNAVAILABLE' # Show warning about log size unaviable. params_dict['warnings'].append("Unable to get information " "regarding variable '{0}' " "from a remote server." "".format(msg)) # If log file is stderr, we cannot get the correct size. elif log_file in ["stderr", "stdout"]: params_dict[log_tpl.log_file_size] = 'UNKNOWN' # Show warning about log unknown size. params_dict['warnings'].append("Unable to get size information" " from '{0}' for '{1}'." 
"".format(log_file, msg)) else: # Now get the information about the size of the logs try: # log_file might be a relative path, in which case we need # to prepend the datadir path to it if not os.path.isabs(log_file): log_file = os.path.join(params_dict['datadir'], log_file) params_dict[log_tpl.log_file_size] = "{0} bytes".format( os.path.getsize(log_file)) except os.error: # if we are unable to get the log_file_size params_dict[log_tpl.log_file_size] = '' warning_msg = _WARNING_TEMPLATE.format(msg, log_file) params_dict['warnings'].append(warning_msg) else: params_dict['warnings'].append("Unable to get information " "regarding variable '{0}'" ).format(msg) # if audit_log plugin is installed and enabled if server.supports_plugin('audit'): res = server.show_server_variable('audit_log_file') if res: # Audit_log variable might be a relative path to the datadir, # so it needs to be treated accordingly if not os.path.isabs(res[0][1]): params_dict['audit_log_file'] = os.path.join( params_dict['datadir'], res[0][1]) else: params_dict['audit_log_file'] = res[0][1] # Add audit_log field to the _COLUMNS List unless it is already # there if 'audit_log_file' not in _COLUMNS_SET: _COLUMNS.append('audit_log_file') _COLUMNS.append('audit_log_file_size') _COLUMNS_SET.add('audit_log_file') try: params_dict['audit_log_file_size'] = "{0} bytes".format( os.path.getsize(params_dict['audit_log_file'])) except os.error: # If we are unable to get the size of the audit_log_file params_dict['audit_log_file_size'] = '' warning_msg = _WARNING_TEMPLATE.format( "audit log", params_dict['audit_log_file'] ) params_dict['warnings'].append(warning_msg) # Build search path for config files if os.name == "posix": my_def_search = ["/etc/my.cnf", "/etc/mysql/my.cnf", os.path.join(params_dict['basedir'], "my.cnf"), "~/.my.cnf"] else: my_def_search = [r"c:\windows\my.ini", r"c:\my.ini", r"c:\my.cnf", os.path.join(os.curdir, "my.ini")] my_def_search.append(os.path.join(os.curdir, "my.cnf")) # Get server's default configuration values. defaults = [] if get_defaults: # Can only get defaults for local servers (need to access local data). if server_is_local: try: my_def_path = get_tool_path(params_dict['basedir'], "my_print_defaults", quote=True) except UtilError as err: raise UtilError("Unable to retrieve the defaults data " "(requires access to my_print_defaults): {0} " "(basedir: {1})".format(err.errmsg, params_dict['basedir']) ) out_file = tempfile.TemporaryFile() # Execute tool: /my_print_defaults mysqld cmd_list = shlex.split(my_def_path) cmd_list.append("mysqld") subprocess.call(cmd_list, stdout=out_file) out_file.seek(0) # Get defaults data from temp output file. defaults.append("\nDefaults for server {0}".format(server_id)) for line in out_file.readlines(): defaults.append(line.rstrip()) else: # Remote server; Cannot get the defaults data. 
defaults.append("\nWARNING: The utility can not get defaults from " "a remote host.") # Find config file config_file = "" for search_path in my_def_search: if os.path.exists(search_path): if len(config_file) > 0: config_file = "{0}, {1}".format(config_file, search_path) else: config_file = search_path params_dict['config_file'] = config_file # Find binary log, relay log params_dict['binary_log'], params_dict['binary_log_pos'] = _get_binlog( server) params_dict['relay_log'], params_dict['relay_log_pos'] = _get_relay_log( server) server.disconnect() return params_dict, defaults def _start_server(server_val, basedir, datadir, options=None): """Start an instance of a server in read only mode This method is used to start the server in read only mode. It will launch the server with --skip-grant-tables and --read_only options set. Caller must stop the server with _stop_server(). server_val[in] dictionary of server connection values basedir[in] the base directory for the server datadir[in] the data directory for the server options[in] dictionary of options (verbosity) """ if options is None: options = {} verbosity = options.get("verbosity", 0) start_timeout = options.get("start_timeout", 10) mysqld_path = get_tool_path(basedir, "mysqld", quote=True) print "# Server is offline." # Check server version print "# Checking server version ...", version = get_mysqld_version(mysqld_path) print "done." if version is not None and int(version[0]) >= 5: post_5_5 = int(version[1]) >= 5 post_5_6 = int(version[1]) >= 6 post_5_7_4 = int(version[1]) >= 7 and int(version[2]) > 4 else: print("# Warning: cannot get server version.") post_5_5 = False post_5_6 = False post_5_7_4 = False # Get the user executing the utility to use in the mysqld options. # Note: the option --user=user_name is mandatory to start mysqld as root. user_name = getpass.getuser() # Start the instance if verbosity > 0: print "# Starting read-only instance of the server ..." print "# --- BEGIN (server output) ---" else: print "# Starting read-only instance of the server ...", args = shlex.split(mysqld_path) args.extend([ "--no-defaults", "--skip-grant-tables", "--read_only", "--port=%(port)s" % server_val, "--basedir=" + basedir, "--datadir=" + datadir, "--user={0}".format(user_name), ]) # It the server is 5.6 or later, we must use additional parameters if post_5_5: server_args = [ "--skip-slave-start", "--default-storage-engine=MYISAM", "--server-id=0", ] if post_5_6: server_args.append("--default-tmp-storage-engine=MYISAM") if not post_5_7_4: server_args.append("--skip-innodb") args.extend(server_args) socket = server_val.get('unix_socket', None) if not socket and post_5_7_4 and os.name == "posix": socket = os.path.normpath(os.path.join(datadir, "mysql.sock")) if socket is not None: args.append("--socket={0}".format(socket)) if verbosity > 0: subprocess.Popen(args, shell=False) else: out = open(os.devnull, 'w') subprocess.Popen(args, shell=False, stdout=out, stderr=out) server_options = { 'conn_info': server_val, 'role': "read_only", } server = Server(server_options) # Try to connect to the server, waiting for the server to become ready # (retry start_timeout times and wait 1 sec between each attempt). # Note: It can take up to 10 seconds for Windows machines. i = 0 while i < start_timeout: # Reset error and wait 1 second. error = None time.sleep(1) try: server.connect() break # Server ready (connect succeed)! Exit the for loop. except UtilError as err: # Store exception to raise later (if needed). 
error = err i += 1 # Indicate end of the server output. if verbosity > 0: print "# --- END (server output) ---" # Raise last known exception (if unable to connect to the server) if error: # See: http://www.logilab.org/ticket/3207 # pylint: disable=E0702 raise error if verbosity > 0: print "# done (server started)." else: print "done." return server def _stop_server(server_val, basedir, options=None): """Stop an instance of a server started in read only mode This method is used to stop the server started in read only mode. It will launch mysqladmin to stop the server. Caller must start the server with _start_server(). server_val[in] dictionary of server connection values basedir[in] the base directory for the server options[in] dictionary of options (verbosity) """ if options is None: options = {} verbosity = options.get("verbosity", 0) socket = server_val.get("unix_socket", None) mysqladmin_path = get_tool_path(basedir, "mysqladmin", quote=True) # Stop the instance if verbosity > 0: print "# Shutting down server ..." print "# --- BEGIN (server output) ---" else: print "# Shutting down server ...", if os.name == "posix": cmd = mysqladmin_path + " shutdown -uroot " if socket is not None: cmd = cmd + " --socket=%s " % socket else: cmd = mysqladmin_path + " shutdown -uroot " + \ " --port=%(port)s" % server_val if verbosity > 0: proc = subprocess.Popen(cmd, shell=True) else: fnull = open(os.devnull, 'w') proc = subprocess.Popen(cmd, shell=True, stdout=fnull, stderr=fnull) # Wait for subprocess to finish proc.wait() if verbosity > 0: print "# --- END (server output) ---" print "# done (server stopped)." else: print "done." def _show_running_servers(start=3306, end=3333): """Display a list of running MySQL servers. start[in] starting port for Windows servers end[in] ending port for Windows servers """ print "# " processes = get_local_servers(True, start, end) if len(processes) > 0: print "# The following MySQL servers are active on this host:" for process in processes: if os.name == "posix": print "# Process id: %6d, Data path: %s" % \ (int(process[0]), process[1]) elif os.name == "nt": print "# Process id: %6d, Port: %s" % \ (int(process[0]), process[1]) else: print "# No active MySQL servers found." print "# " def show_server_info(servers, options): """Show server information for a list of servers This method will gather information about a running server. If the show_defaults option is specified, the method will also read the configuration file and return a list of the server default settings. If the format option is set, the output will be in the format specified. If the no_headers option is set, the output will not have a header row (no column names) except for format = vertical. If the basedir and start options are set, the method will attempt to start the server in read only mode to get the information. Specifying only basedir will not start the server. The extra start option is designed to make sure the user wants to start the offline server. The user may not wish to do this if there are certain error conditions and/or logs in place that may be overwritten. 
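    For example (an illustrative sketch, not taken from the original
    sources; the connection string and option values below are
    hypothetical), a caller might request a vertical report including
    the server defaults as follows:

        show_server_info(["root:secret@localhost:3306"],
                         {'format': 'vertical', 'show_defaults': True})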
servers[in] list of server connections in the form :@:: options[in] dictionary of options (no_headers, format, basedir, start, show_defaults) Returns tuple ((server information), defaults) """ no_headers = options.get("no_headers", False) fmt = options.get("format", "grid") show_defaults = options.get("show_defaults", False) basedir = options.get("basedir", None) datadir = options.get("datadir", None) start = options.get("start", False) show_servers = options.get("show_servers", 0) if show_servers: if os.name == 'nt': ports = options.get("ports", "3306:3333") start_p, end_p = ports.split(":") _show_running_servers(start_p, end_p) else: _show_running_servers() ssl_dict = {} ssl_dict['ssl_cert'] = options.get("ssl_cert", None) ssl_dict['ssl_ca'] = options.get("ssl_ca", None) ssl_dict['ssl_key'] = options.get("ssl_key", None) ssl_dict['ssl'] = options.get("ssl", None) row_dict_lst = [] warnings = [] server_val = {} for server in servers: new_server = None try: test_connect(server, throw_errors=True, ssl_dict=ssl_dict) except UtilError as util_error: conn_dict = get_connection_dictionary(server, ssl_dict=ssl_dict) server1 = Server(options={'conn_info': conn_dict}) server_is_off = False # If we got errno 2002 it means can not connect through the # given socket. if util_error.errno == CR_CONNECTION_ERROR: socket = conn_dict.get("unix_socket", "") if socket: msg = ("Unable to connect to server using socket " "'{0}'.".format(socket)) if os.path.isfile(socket): err_msg = ["{0} Socket file is not valid.".format(msg)] else: err_msg = ["{0} Socket file does not " "exist.".format(msg)] # If we got errno 2003 and we do not have # socket, instead we check if server is localhost. elif (util_error.errno == CR_CONN_HOST_ERROR and server1.is_alias("localhost")): server_is_off = True # If we got errno 1045 it means Access denied, # notify the user if a password was used or not. elif util_error.errno == ER_ACCESS_DENIED_ERROR: use_pass = 'YES' if conn_dict['passwd'] else 'NO' err_msg = ("Access denied for user '{0}'@'{1}' using " "password: {2}".format(conn_dict['user'], conn_dict['host'], use_pass)) # Use the error message from the connection attempt. else: err_msg = [util_error.errmsg] # To propose to start a cloned server for extract the info, # can not predict if the server is really off, but we can do it # in case of socket error, or if one of the related # parameter was given. if server_is_off or basedir or datadir or start: err_msg = ["Server is offline. To connect, " "you must also provide "] opts = ["basedir", "datadir", "start"] for opt in tuple(opts): try: if locals()[opt] is not None: opts.remove(opt) except KeyError: pass if opts: err_msg.append(", ".join(opts[0:-1])) if len(opts) > 1: err_msg.append(" and the ") err_msg.append(opts[-1]) err_msg.append(" option") raise UtilError("".join(err_msg)) if not start: raise UtilError("".join(err_msg)) else: try: server_val = parse_connection(server, None, options) except: raise UtilError("Source connection values invalid" " or cannot be parsed.") new_server = _start_server(server_val, basedir, datadir, options) info_dict, defaults = _server_info(server, show_defaults, options) warnings.extend(info_dict['warnings']) if info_dict: row_dict_lst.append(info_dict) if new_server: # Need to stop the server! 
new_server.disconnect() _stop_server(server_val, basedir, options) # Get the row values stored in the dictionaries rows = [[row_dict[key] for key in _COLUMNS] for row_dict in row_dict_lst] print_list(sys.stdout, fmt, _COLUMNS, rows, no_headers) if warnings: print("\n# List of Warnings: \n") for warning in warnings: print("WARNING: {0}\n".format(warning)) # Print the default configurations. if show_defaults and len(defaults) > 0: for row in defaults: print(" {0}".format(row)) mysql-utilities-1.6.4/mysql/utilities/command/failover_daemon.py0000644001577100752670000006644412747670311024712 0ustar pb2usercommon# # Copyright (c) 2013, 2015, Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This file contains the automatic failover daemon. It contains the daemon mechanism for the automatic failover feature for replication. """ import os import sys import time import logging from mysql.utilities.common.daemon import Daemon from mysql.utilities.common.tools import ping_host, execute_script from mysql.utilities.common.messages import HOST_IP_WARNING from mysql.utilities.exception import UtilRplError _GTID_LISTS = ["Transactions executed on the servers:", "Transactions purged from the servers:", "Transactions owned by another server:"] _GEN_UUID_COLS = ["host", "port", "role", "uuid"] _GEN_GTID_COLS = ["host", "port", "role", "gtid"] _DROP_FC_TABLE = "DROP TABLE IF EXISTS mysql.failover_console" _CREATE_FC_TABLE = ("CREATE TABLE IF NOT EXISTS mysql.failover_console " "(host char(255), port char(10))") _SELECT_FC_TABLE = ("SELECT * FROM mysql.failover_console WHERE host = '{0}' " "AND port = '{1}'") _INSERT_FC_TABLE = "INSERT INTO mysql.failover_console VALUES ('{0}', '{1}')" _DELETE_FC_TABLE = ("DELETE FROM mysql.failover_console WHERE host = '{0}' " "AND port = '{1}'") _FAILOVER_ERROR = ("{0}Check server for errors and run the mysqlrpladmin " "utility to perform manual failover.") _FAILOVER_ERRNO = 911 _ERRANT_TNX_ERROR = "Errant transaction(s) found on slave(s)." class FailoverDaemon(Daemon): """Automatic Failover Daemon This class implements a POSIX daemon, that logs information about the master and the replication health for the topology. 
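    For example (an illustrative sketch, not taken from the original
    sources; the setup of the RplCommands instance is elided), a caller
    might start the daemon on a POSIX system roughly as:

        daemon = FailoverDaemon(rpl)       # rpl is a RplCommands object
        daemon.start(detach_process=True)  # fork and run in background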
""" def __init__(self, rpl, umask=0, chdir="/", stdin=None, stdout=None, stderr=None): """Constructor rpl[in] a RplCommands class instance umask[in] posix umask chdir[in] working directory stdin[in] standard input object stdout[in] standard output object stderr[in] standard error object """ pidfile = rpl.options.get("pidfile", None) if pidfile is None: pidfile = "./failover_daemon.pid" super(FailoverDaemon, self).__init__(pidfile) self.rpl = rpl self.options = rpl.options self.interval = int(self.options.get("interval", 15)) self.pingtime = int(self.options.get("pingtime", 3)) self.force = self.options.get("force", False) self.mode = self.options.get("failover_mode", "auto") self.old_mode = None # Dictionary that holds the current warning messages self.warnings_dic = {} # Callback methods for reading data self.master = self.rpl.topology.master self.get_health_data = self.rpl.topology.get_health self.get_gtid_data = self.rpl.topology.get_gtid_data self.get_uuid_data = self.rpl.topology.get_server_uuids self.list_data = None self.master_gtids = [] self.report_values = [ report.lower() for report in self.options["report_values"].split(",") ] def _report(self, message, level=logging.INFO, print_msg=True): """Log message if logging is on. This method will log the message presented if the log is turned on. Specifically, if options['log_file'] is not None. It will also print the message to stdout. message[in] message to be printed level[in] level of message to log. Default = INFO print_msg[in] if True, print the message to stdout. Default = True """ # First, print the message. if print_msg and not self.rpl.quiet: print(message) # Now log message if logging turned on if self.rpl.logging: logging.log(int(level), message.strip("#").strip(" ")) def _print_warnings(self): """Print current warning messages. This method displays current warning messages if they exist. """ # Only do something if warnings exist. if self.warnings_dic: for msg in self.warnings_dic.itervalues(): print("# WARNING: {0}".format(msg)) def _format_health_data(self): """Return health data from topology. Returns tuple - (columns, rows) """ if self.get_health_data is not None: try: return self.get_health_data() except Exception as err: msg = "Cannot get health data: {0}".format(err) self._report(msg, logging.ERROR) raise UtilRplError(msg) return ([], []) def _format_uuid_data(self): """Return the server's uuids. Returns tuple - (columns, rows) """ if self.get_uuid_data is not None: try: return (_GEN_UUID_COLS, self.get_uuid_data()) except Exception as err: msg = "Cannot get UUID data: {0}".format(err) self._report(msg, logging.ERROR) raise UtilRplError(msg) return ([], []) def _format_gtid_data(self): """Return the GTID information from the topology. Returns tuple - (columns, rows) """ if self.get_gtid_data is not None: try: return (_GEN_GTID_COLS, self.get_gtid_data()) except Exception as err: msg = "Cannot get GTID data: {0}".format(err) self._report(msg, logging.ERROR) raise UtilRplError(msg) return ([], []) def _log_master_status(self): """Logs the master information This method logs the master information from SHOW MASTER STATUS. """ # If no master present, don't print anything. 
if self.master is None: return logging.info("Master Information") try: status = self.master.get_status()[0] except: msg = "Cannot get master status" self._report(msg, logging.ERROR) raise UtilRplError(msg) cols = ("Binary Log File", "Position", "Binlog_Do_DB", "Binlog_Ignore_DB") rows = (status[0] or "N/A", status[1] or "N/A", status[2] or "N/A", status[3] or "N/A") logging.info( ", ".join(["{0}: {1}".format(*item) for item in zip(cols, rows)]) ) # Display gtid executed set self.master_gtids = [] for gtid in status[4].split("\n"): if gtid: # Add each GTID to a tuple to match the required format to # print the full GRID list correctly. self.master_gtids.append((gtid.strip(","),)) try: if len(self.master_gtids) > 1: gtid_executed = "{0}[...]".format(self.master_gtids[0][0]) else: gtid_executed = self.master_gtids[0][0] except IndexError: gtid_executed = "None" logging.info("GTID Executed Set: {0}".format(gtid_executed)) @staticmethod def _log_data(title, labels, data): """Helper method to log data. title[in] title to log labels[in] list of labels data[in] list of data rows """ logging.info(title) for row in data: msg = ", ".join( ["{0}: {1}".format(*col) for col in zip(labels, row)] ) logging.info(msg) def _reconnect_master(self, pingtime=3): """Tries to reconnect to the master This method tries to reconnect to the master and if connection fails after 3 attemps, returns False. """ if self.master and self.master.is_alive(): return True is_connected = False i = 0 while i < 3: try: self.master.connect() is_connected = True break except: pass time.sleep(pingtime) i += 1 return is_connected def add_warning(self, warning_key, warning_msg): """Add a warning message to the current dictionary of warnings. warning_key[in] key associated with the warning message to add. warning_msg[in] warning message to add to the current dictionary of warnings. """ self.warnings_dic[warning_key] = warning_msg def del_warning(self, warning_key): """Remove a warning message from the current dictionary of warnings. warning_key[in] key associated with the warning message to remove. """ if warning_key in self.warnings_dic: del self.warnings_dic[warning_key] def check_instance(self): """Check registration of the console This method unregisters existing instances from slaves and attempts to register the instance on the master. If there is already an instance on the master, failover mode will be changed to 'fail'. """ # Unregister existing instances from slaves self._report("Unregistering existing instances from slaves.", logging.INFO, False) self.unregister_slaves(self.rpl.topology) # Register instance self._report("Registering instance on master.", logging.INFO, False) old_mode = self.mode failover_mode = self.register_instance(self.force) if failover_mode != old_mode: # Turn on sys.stdout sys.stdout = self.rpl.stdout_copy msg = ("Multiple instances of failover daemon found for master " "{0}:{1}.".format(self.master.host, self.master.port)) self._report(msg, logging.WARN) print("If this is an error, restart the daemon with --force.") print("Failover mode changed to 'FAIL' for this instance.") print("Daemon will start in 10 seconds.") sys.stdout.flush() i = 0 while i < 9: time.sleep(1) sys.stdout.write(".") sys.stdout.flush() i += 1 print("starting Daemon.") # Turn off sys.stdout sys.stdout = self.rpl.stdout_devnull time.sleep(1) def register_instance(self, clear=False, register=True): """Register the daemon as running on the master. 
        This method will attempt to register the daemon as running against
        the master for failover modes auto or elect. If another daemon is
        already registered, this instance becomes blocked, resulting in the
        mode changing to 'fail', and failover will not occur when this
        instance of the daemon detects failover.

        clear[in]      if True, clear the sentinel database entries on the
                       master. Default is False.
        register[in]   if True, register the daemon on the master. If
                       False, unregister the daemon on the master.
                       Default is True.

        Returns string - new mode if changed
        """
        # We cannot check disconnected masters and do not need to check if
        # we are doing a simple fail mode.
        if self.master is None or self.mode == "fail":
            return self.mode

        # Turn binary log off first
        self.master.toggle_binlog("DISABLE")

        host_port = (self.master.host, self.master.port)

        # Drop the table if specified
        if clear:
            self.master.exec_query(_DROP_FC_TABLE)

        # Register the daemon
        if register:
            res = self.master.exec_query(_CREATE_FC_TABLE)
            res = self.master.exec_query(_SELECT_FC_TABLE.format(*host_port))
            # COMMIT to close session before enabling binlog.
            self.master.commit()
            if res != []:
                # Someone beat us there. Drat.
                self.old_mode = self.mode
                self.mode = "fail"
            else:
                # We're first! Yippee.
                res = self.master.exec_query(
                    _INSERT_FC_TABLE.format(*host_port))
        # Unregister the daemon if our mode was changed
        elif self.old_mode != self.mode:
            res = self.master.exec_query(_DELETE_FC_TABLE.format(*host_port))

        # Turn binary log on
        self.master.toggle_binlog("ENABLE")

        return self.mode

    def unregister_slaves(self, topology):
        """Unregister the daemon as running on the slaves.

        This method will unregister the daemon that was previously
        registered on the slaves, for failover modes auto or elect.
        """
        if self.master is None or self.mode == "fail":
            return

        for slave_dict in topology.slaves:
            # Skip unreachable/not connected slaves.
            slave_instance = slave_dict["instance"]
            if slave_instance and slave_instance.is_alive():
                # Turn binary log off first
                slave_instance.toggle_binlog("DISABLE")
                # Drop failover instance registration table.
                slave_instance.exec_query(_DROP_FC_TABLE)
                # Turn binary log on
                slave_instance.toggle_binlog("ENABLE")

    def run(self):
        """Run automatic failover.

        This method implements the automatic failover facility. It uses
        the existing failover() method of the RplCommands class to conduct
        failover.
When the master goes down, the method can perform one of three actions: 1) failover to list of candidates first then slaves 2) failover to list of candidates only 3) fail rpl[in] instance of the RplCommands class interval[in] time in seconds to wait to check status of servers Returns bool - True = success, raises exception on error """ failover_mode = self.mode pingtime = self.options.get("pingtime", 3) exec_fail = self.options.get("exec_fail", None) post_fail = self.options.get("post_fail", None) pedantic = self.options.get("pedantic", False) # Only works for GTID_MODE=ON if not self.rpl.topology.gtid_enabled(): msg = ("Topology must support global transaction ids and have " "GTID_MODE=ON.") self._report(msg, logging.CRITICAL) raise UtilRplError(msg) # Require --master-info-repository=TABLE for all slaves if not self.rpl.topology.check_master_info_type("TABLE"): msg = ("Failover requires --master-info-repository=TABLE for " "all slaves.") self._report(msg, logging.ERROR, False) raise UtilRplError(msg) # Check for mixing IP and hostnames if not self.rpl.check_host_references(): print("# WARNING: {0}".format(HOST_IP_WARNING)) self._report(HOST_IP_WARNING, logging.WARN, False) print("#\n# Failover daemon will start in 10 seconds.") time.sleep(10) # Test failover script. If it doesn't exist, fail. no_exec_fail_msg = ("Failover check script cannot be found. Please " "check the path and filename for accuracy and " "restart the failover daemon.") if exec_fail is not None and not os.path.exists(exec_fail): self._report(no_exec_fail_msg, logging.CRITICAL, False) raise UtilRplError(no_exec_fail_msg) # Check existence of errant transactions on slaves errant_tnx = self.rpl.topology.find_errant_transactions() if errant_tnx: print("# WARNING: {0}".format(_ERRANT_TNX_ERROR)) self._report(_ERRANT_TNX_ERROR, logging.WARN, False) for host, port, tnx_set in errant_tnx: errant_msg = (" - For slave '{0}@{1}': " "{2}".format(host, port, ", ".join(tnx_set))) print("# {0}".format(errant_msg)) self._report(errant_msg, logging.WARN, False) # Raise an exception (to stop) if pedantic mode is ON if pedantic: msg = ("{0} Note: If you want to ignore this issue, please do " "not use the --pedantic option." "".format(_ERRANT_TNX_ERROR)) self._report(msg, logging.CRITICAL) raise UtilRplError(msg) self._report("Failover daemon started.", logging.INFO, False) self._report("Failover mode = {0}.".format(failover_mode), logging.INFO, False) # Main loop - loop and fire on interval. done = False first_pass = True failover = False while not done: # Use try block in case master class has gone away. try: old_host = self.rpl.master.host old_port = self.rpl.master.port except: old_host = "UNKNOWN" old_port = "UNKNOWN" # If a failover script is provided, check it else check master # using connectivity checks. if exec_fail is not None: # Execute failover check script if not os.path.exists(exec_fail): self._report(no_exec_fail_msg, logging.CRITICAL, False) raise UtilRplError(no_exec_fail_msg) else: self._report("# Spawning external script for failover " "checking.") res = execute_script(exec_fail, None, [old_host, old_port], self.rpl.verbose) if res == 0: self._report("# Failover check script completed " "Ok. Failover averted.") else: self._report("# Failover check script failed. " "Failover initiated", logging.WARN) failover = True else: # Check the master. If not alive, wait for pingtime seconds # and try again. if self.rpl.topology.master is not None and \ not self.rpl.topology.master.is_alive(): msg = ("Master may be down. 
Waiting for {0} seconds." "".format(pingtime)) self._report(msg, logging.INFO, False) time.sleep(pingtime) try: self.rpl.topology.master.connect() except: pass # Check the master again. If no connection or lost connection, # try ping. This performs the timeout threshold for detecting # a down master. If still not alive, try to reconnect and if # connection fails after 3 attempts, failover. if self.rpl.topology.master is None or \ not ping_host(self.rpl.topology.master.host, pingtime) or \ not self.rpl.topology.master.is_alive(): failover = True if self._reconnect_master(self.pingtime): failover = False # Master is now connected again if failover: self._report("Failed to reconnect to the master after " "3 attemps.", logging.INFO) if failover: self._report("Master is confirmed to be down or " "unreachable.", logging.CRITICAL, False) try: self.rpl.topology.master.disconnect() except: pass if failover_mode == "auto": self._report("Failover starting in 'auto' mode...") res = self.rpl.topology.failover(self.rpl.candidates, False) elif failover_mode == "elect": self._report("Failover starting in 'elect' mode...") res = self.rpl.topology.failover(self.rpl.candidates, True) else: msg = _FAILOVER_ERROR.format("Master has failed and " "automatic failover is " "not enabled. ") self._report(msg, logging.CRITICAL, False) # Execute post failover script self.rpl.topology.run_script(post_fail, False, [old_host, old_port]) raise UtilRplError(msg, _FAILOVER_ERRNO) if not res: msg = _FAILOVER_ERROR.format("An error was encountered " "during failover. ") self._report(msg, logging.CRITICAL, False) # Execute post failover script self.rpl.topology.run_script(post_fail, False, [old_host, old_port]) raise UtilRplError(msg) self.rpl.master = self.rpl.topology.master self.master = self.rpl.master self.rpl.topology.remove_discovered_slaves() self.rpl.topology.discover_slaves() self.list_data = None print("\nFailover daemon will restart in 5 seconds.") time.sleep(5) failover = False # Execute post failover script self.rpl.topology.run_script(post_fail, False, [old_host, old_port, self.rpl.master.host, self.rpl.master.port]) # Unregister existing instances from slaves self._report("Unregistering existing instances from slaves.", logging.INFO, False) self.unregister_slaves(self.rpl.topology) # Register instance on the new master msg = ("Registering instance on new master " "{0}:{1}.").format(self.master.host, self.master.port) self._report(msg, logging.INFO, False) failover_mode = self.register_instance() # discover slaves if option was specified at startup elif (self.options.get("discover", None) is not None and not first_pass): # Force refresh of health list if new slaves found if self.rpl.topology.discover_slaves(): self.list_data = None # Check existence of errant transactions on slaves errant_tnx = self.rpl.topology.find_errant_transactions() if errant_tnx: if pedantic: print("# WARNING: {0}".format(_ERRANT_TNX_ERROR)) self._report(_ERRANT_TNX_ERROR, logging.WARN, False) for host, port, tnx_set in errant_tnx: errant_msg = (" - For slave '{0}@{1}': " "{2}".format(host, port, ", ".join(tnx_set))) print("# {0}".format(errant_msg)) self._report(errant_msg, logging.WARN, False) # Raise an exception (to stop) if pedantic mode is ON raise UtilRplError("{0} Note: If you want to ignore this " "issue, please do not use the " "--pedantic " "option.".format(_ERRANT_TNX_ERROR)) else: if self.rpl.logging: warn_msg = ("{0} Check log for more " "details.".format(_ERRANT_TNX_ERROR)) else: warn_msg = _ERRANT_TNX_ERROR 
self.add_warning("errant_tnx", warn_msg) self._report(_ERRANT_TNX_ERROR, logging.WARN, False) for host, port, tnx_set in errant_tnx: errant_msg = (" - For slave '{0}@{1}': " "{2}".format(host, port, ", ".join(tnx_set))) self._report(errant_msg, logging.WARN, False) else: self.del_warning("errant_tnx") if self.master and self.master.is_alive(): # Log status self._print_warnings() self._log_master_status() self.list_data = [] if "health" in self.report_values: (health_labels, health_data) = self._format_health_data() if health_data: self._log_data("Health Status:", health_labels, health_data) if "gtid" in self.report_values: (gtid_labels, gtid_data) = self._format_gtid_data() for i, v in enumerate(gtid_data): if v: self._log_data("GTID Status - {0}" "".format(_GTID_LISTS[i]), gtid_labels, v) if "uuid" in self.report_values: (uuid_labels, uuid_data) = self._format_uuid_data() if uuid_data: self._log_data("UUID Status:", uuid_labels, uuid_data) # Disconnect the master while waiting for the interval to expire self.master.disconnect() # Wait for the interval to expire time.sleep(self.interval) # Reconnect to the master self._reconnect_master(self.pingtime) first_pass = False return True def start(self, detach_process=True): """Starts the daemon. Runs the automatic failover, it will start the daemon if detach_process is True. """ # Check privileges self._report("# Checking privileges.") errors = self.rpl.topology.check_privileges(self.mode != "fail") if len(errors): msg = ("User {0} on {1} does not have sufficient privileges to " "execute the {2} command.") for error in errors: self._report(msg.format(error[0], error[1], "failover"), logging.CRITICAL) raise UtilRplError("Not enough privileges to execute command.") # Check failover instances running self.check_instance() # Start the daemon return super(FailoverDaemon, self).start(detach_process) mysql-utilities-1.6.4/mysql/utilities/command/dbcompare.py0000644001577100752670000005176612747670311023515 0ustar pb2usercommon# # Copyright (c) 2011, 2016, Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This file contains the commands for checking consistency of two databases. """ from mysql.utilities.exception import UtilDBError, UtilError from mysql.utilities.common.database import Database from mysql.utilities.common.sql_transform import quote_with_backticks from mysql.utilities.common.dbcompare import (diff_objects, get_common_objects, get_create_object, print_missing_list, server_connect, check_consistency, build_diff_list, DEFAULT_SPAN_KEY_SIZE) from mysql.utilities.common.server import connect_servers _PRINT_WIDTH = 75 _ROW_FORMAT = "# {0:{1}} {2:{3}} {4:{5}} {6:{7}} {8:{9}}" _RPT_FORMAT = "{0:{1}} {2:{3}}" _ERROR_DB_DIFF = "The object definitions do not match." _ERROR_DB_MISSING = "The database {0} does not exist." _ERROR_OBJECT_LIST = "The list of objects differs among database {0} and {1}." 
_ERROR_ROW_COUNT = "Row counts are not the same among {0} and {1}.\n#" _ERROR_DB_MISSING_ON_SERVER = "The database {0} on {1} does not exist on {2}." _DEFAULT_OPTIONS = { "quiet": False, "verbosity": 0, "difftype": "differ", "run_all_tests": False, "width": 75, "no_object_check": False, "no_diff": False, "no_row_count": False, "no_data": False, "transform": False, "span_key_size": DEFAULT_SPAN_KEY_SIZE } class _CompareDBReport(object): """Print compare database report """ def __init__(self, options): """Constructor options[in] options for class width[in] Width of report quiet[in] If true, do not print commentary (default = False) """ self.width = options.get('width', _PRINT_WIDTH) - 2 # for '# ' self.quiet = options.get('quiet', False) self.type_width = 9 self.oper_width = 7 self.desc_width = self.width - self.type_width - \ (3 * self.oper_width) - 4 def print_heading(self): """Print heading for database consistency """ # Skip if quiet if self.quiet: return # Set the variable width global parameters here print _ROW_FORMAT.format(' ', self.type_width, ' ', self.desc_width, "Defn", self.oper_width, "Row", self.oper_width, "Data", self.oper_width) print _ROW_FORMAT.format("Type", self.type_width, "Object Name", self.desc_width, "Diff", self.oper_width, "Count", self.oper_width, "Check", self.oper_width) print "# %s" % ('-' * self.width), def report_object(self, obj_type, description): """Print the object type and description field obj_type[in] type of the object(s) described description[in] description of object(s) """ # Skip if quiet if self.quiet: return print "\n#", _RPT_FORMAT.format(obj_type, self.type_width, description, self.desc_width), def report_state(self, state): """Print the results of a test. state[in] state of the test """ # Skip if quiet if self.quiet: return print "{0:<{1}}".format(state, self.oper_width), @staticmethod def report_errors(errors): """Print any errors encountered. errors[in] list of strings to print """ if len(errors) > 0: print "\n#" for line in errors: print line def _check_databases(server1, server2, db1, db2, options): """Check databases server1[in] first server Server instance server2[in] second server Server instance db1[in] first database db2[in] second database options[in] options dictionary Returns tuple - Database class instances for databases """ # Check database create for differences if not options['no_diff']: # temporarily make the diff quiet to retrieve errors new_opt = {} new_opt.update(options) new_opt['quiet'] = True # do not print messages new_opt['suppress_sql'] = True # do not print SQL statements either res = diff_objects(server1, server2, db1, db2, new_opt, 'DATABASE') if res is not None: for row in res: print row print if not options['run_all_tests']: raise UtilError(_ERROR_DB_DIFF) def _check_objects(server1, server2, db1, db2, db1_conn, db2_conn, options): """Check number of objects server1[in] first server Server instance server2[in] second server Server instance db1[in] first database db2[in] second database db1_conn[in] first Database instance db2_conn[in] second Database instance options[in] options dictionary Returns list of objects in both databases """ differs = False # Check for same number of objects in_both, in_db1, in_db2 = get_common_objects(server1, server2, db1, db2, False, options) in_both.sort() if not options['no_object_check']: server1_str = "server1." + db1 if server1 == server2: server2_str = "server1." + db2 else: server2_str = "server2." 
+ db2 if len(in_db1) or len(in_db2): if options['run_all_tests']: if len(in_db1) > 0: differs = True print_missing_list(in_db1, server1_str, server2_str) print "#" if len(in_db2) > 0: differs = True print_missing_list(in_db2, server2_str, server1_str) print "#" else: differs = True raise UtilError(_ERROR_OBJECT_LIST.format(db1, db2)) # If in verbose mode, show count of object types. if options['verbosity'] > 1: objects = { 'TABLE': 0, 'VIEW': 0, 'TRIGGER': 0, 'PROCEDURE': 0, 'FUNCTION': 0, 'EVENT': 0, } for item in in_both: obj_type = item[0] objects[obj_type] += 1 print "Looking for object types table, view, trigger, procedure," + \ " function, and event." print "Object types found common to both databases:" for obj in objects: print " {0:>12} : {1}".format(obj, objects[obj]) return (in_both, differs) def _compare_objects(server1, server2, obj1, obj2, reporter, options, object_type): """Compare object definitions and produce difference server1[in] first server Server instance server2[in] second server Server instance obj1[in] first object obj2[in] second object reporter[in] database compare reporter class instance options[in] options dictionary object_type[in] type of the objects to be compared (e.g., TABLE, PROCEDURE, etc.). Returns list of errors """ errors = [] if not options['no_diff']: # For each database, compare objects # temporarily make the diff quiet to retrieve errors new_opt = {} new_opt.update(options) new_opt['quiet'] = True # do not print messages new_opt['suppress_sql'] = True # do not print SQL statements either res = diff_objects(server1, server2, obj1, obj2, new_opt, object_type) if res is not None: reporter.report_state('FAIL') errors.extend(res) if not options['run_all_tests']: raise UtilError(_ERROR_DB_DIFF) else: reporter.report_state('pass') else: reporter.report_state('SKIP') return errors def _check_row_counts(server1, server2, obj1, obj2, reporter, options): """Compare row counts for tables server1[in] first server Server instance server2[in] second server Server instance obj1[in] first object obj2[in] second object reporter[in] database compare reporter class instance options[in] options dictionary Returns list of errors """ errors = [] if not options['no_row_count']: rows1 = server1.exec_query("SELECT COUNT(*) FROM " + obj1) rows2 = server2.exec_query("SELECT COUNT(*) FROM " + obj2) if rows1 != rows2: reporter.report_state('FAIL') msg = _ERROR_ROW_COUNT.format(obj1, obj2) if not options['run_all_tests']: raise UtilError(msg) else: errors.append("# %s" % msg) else: reporter.report_state('pass') else: reporter.report_state('SKIP') return errors def _check_data_consistency(server1, server2, obj1, obj2, reporter, options): """Check data consistency server1[in] first server Server instance server2[in] second server Server instance obj1[in] first object obj2[in] second object reporter[in] database compare reporter class instance options[in] options dictionary Returns list of errors debug_msgs """ direction = options.get('changes-for', 'server1') reverse = options.get('reverse', False) errors = [] debug_msgs = [] # For each table, do row data consistency check if not options['no_data']: reporter.report_state('-') try: # Do the comparison considering the direction. 
diff_server1, diff_server2 = check_consistency( server1, server2, obj1, obj2, options, diag_msgs=debug_msgs, reporter=reporter) # if no differences, return if (diff_server1 is None and diff_server2 is None) or \ (not reverse and direction == 'server1' and diff_server1 is None) or \ (not reverse and direction == 'server2' and diff_server2 is None): return errors, debug_msgs # Build diff list new_opts = options.copy() new_opts['data_diff'] = True if direction == 'server1': diff_list = build_diff_list(diff_server1, diff_server2, diff_server1, diff_server2, 'server1', 'server2', new_opts) else: diff_list = build_diff_list(diff_server2, diff_server1, diff_server2, diff_server1, 'server2', 'server1', new_opts) if diff_list: errors = diff_list except UtilError, e: if e.errmsg.endswith("not have an usable Index or primary key."): reporter.report_state('SKIP') errors.append("# {0}".format(e.errmsg)) else: reporter.report_state('FAIL') if not options['run_all_tests']: print raise e else: errors.append(e.errmsg) else: reporter.report_state('SKIP') return errors, debug_msgs def _check_option_defaults(options): """Set the defaults for options if they are not set. This prevents users from calling the method and its subordinates with missing options. """ for opt_name in _DEFAULT_OPTIONS: if opt_name not in options: options[opt_name] = _DEFAULT_OPTIONS[opt_name] def database_compare(server1_val, server2_val, db1, db2, options): """Perform a consistency check among two databases This method performs a database consistency check among two databases which ensures the databases exist, the objects match in number and type, the row counts match for all tables, and the data for each matching tables is consistent. If any errors or differences are found, the operation stops and the difference is printed. The following steps are therefore performed: 1) check to make sure the databases exist and are the same definition 2) check to make sure the same objects exist in each database 3) for each object, ensure the object definitions match among the databases 4) for each table, ensure the row counts are the same 5) for each table, ensure the data is the same By default, the operation stops on any failure of any test. The caller can override this behavior by specifying run_all_tests = True in the options dictionary. TODO: allow the user to skip object types (e.g. --skip-triggers, et. al.) 
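
# _check_option_defaults() above backfills missing keys so the helpers can
# index options[...] without guarding every lookup; dict.setdefault gives
# the same effect. A sketch:
def sketch_apply_defaults(options, defaults):
    for name, value in defaults.items():
        options.setdefault(name, value)
    return options

# e.g. sketch_apply_defaults({'verbosity': 1},
#                            {'verbosity': 0, 'quiet': False})
# keeps the caller's verbosity and adds quiet=False.
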
server1_val[in] a dictionary containing connection information for the first server including: (user, password, host, port, socket) server2_val[in] a dictionary containing connection information for the second server including: (user, password, host, port, socket) db1[in] the first database in the compare db2[in] the second database in the compare options[in] a dictionary containing the options for the operation: (quiet, verbosity, difftype, run_all_tests) Returns bool True if all object match, False if partial match """ _check_option_defaults(options) # Connect to servers server1, server2 = server_connect(server1_val, server2_val, db1, db2, options) # Check to see if databases exist db1_conn = Database(server1, db1, options) if not db1_conn.exists(): raise UtilDBError(_ERROR_DB_MISSING.format(db1)) db2_conn = Database(server2, db2, options) if not db2_conn.exists(): raise UtilDBError(_ERROR_DB_MISSING.format(db2)) # Print a different message is server2 is not defined if not server2_val: message = "# Checking databases {0} and {1} on server1\n#" else: message = "# Checking databases {0} on server1 and {1} on server2\n#" print(message.format(db1_conn.db_name, db2_conn.db_name)) # Check for database existence and CREATE differences _check_databases(server1, server2, db1_conn.q_db_name, db2_conn.q_db_name, options) # Get common objects and report discrepancies (in_both, differs) = _check_objects(server1, server2, db1, db2, db1_conn, db2_conn, options) success = not differs reporter = _CompareDBReport(options) reporter.print_heading() # Get sql_mode value from servers server1_sql_mode = server1.select_variable("SQL_MODE") server2_sql_mode = server2.select_variable("SQL_MODE") # Remaining operations can occur in a loop one for each object. for item in in_both: error_list = [] debug_msgs = [] # Set the object type obj_type = item[0] q_obj1 = "{0}.{1}".format(quote_with_backticks(db1, server1_sql_mode), quote_with_backticks(item[1][0], server1_sql_mode)) q_obj2 = "{0}.{1}".format(quote_with_backticks(db2, server2_sql_mode), quote_with_backticks(item[1][0], server2_sql_mode)) reporter.report_object(obj_type, item[1][0]) # Check for differences in CREATE errors = _compare_objects(server1, server2, q_obj1, q_obj2, reporter, options, obj_type) error_list.extend(errors) # Check row counts if obj_type == 'TABLE': errors = _check_row_counts(server1, server2, q_obj1, q_obj2, reporter, options) if len(errors) != 0: error_list.extend(errors) else: reporter.report_state("-") # Check data consistency for tables if obj_type == 'TABLE': errors, debug_msgs = _check_data_consistency(server1, server2, q_obj1, q_obj2, reporter, options) if len(errors) != 0: error_list.extend(errors) else: reporter.report_state("-") if options['verbosity'] > 0: print get_create_object(server1, q_obj1, options, obj_type) get_create_object(server2, q_obj2, options, obj_type) if debug_msgs and options['verbosity'] > 2: reporter.report_errors(debug_msgs) reporter.report_errors(error_list) # Fail if errors are found if error_list: success = False return success def compare_all_databases(server1_val, server2_val, exclude_list, options): """Perform a consistency check among all common databases on the servers This method gets all databases from the servers, prints any missing databases and performs a consistency check among all common databases. If any errors or differences are found, the operation will print the difference and continue. This method will return None if no databases to compare. 
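
# Identifier quoting in the loop above is sensitive to each server's
# SQL_MODE: with ANSI_QUOTES active, double quotes delimit identifiers
# instead of backticks. A simplified model of quote_with_backticks()
# covering just that distinction (the real helper also escapes embedded
# quote characters):
def sketch_quote_identifier(name, sql_mode=""):
    quote = '"' if "ANSI_QUOTES" in sql_mode else "`"
    return "{0}{1}{0}".format(quote, name)

# e.g. sketch_quote_identifier("db1") == "`db1`" but
#      sketch_quote_identifier("db1", "ANSI_QUOTES") == '"db1"'.
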
""" success = True # Connect to servers conn_options = { "quiet": options.get("quiet", False), "src_name": "server1", "dest_name": "server2", } server1, server2 = connect_servers(server1_val, server2_val, conn_options) # Check if the specified servers are the same if server2 is None or server1.port == server2.port and \ server1.is_alias(server2.host): raise UtilError( "Specified servers are the same (server1={host1}:{port1} and " "server2={host2}:{port2}). Cannot compare all databases on the " "same server.".format(host1=server1.host, port1=server1.port, host2=getattr(server2, "host", server1.host), port2=getattr(server2, "port", server1.port)) ) # Get all databases, except those used in --exclude get_dbs_query = """ SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA WHERE SCHEMA_NAME != 'INFORMATION_SCHEMA' AND SCHEMA_NAME != 'PERFORMANCE_SCHEMA' AND SCHEMA_NAME != 'mysql' AND SCHEMA_NAME != 'sys' {0}""" conditions = "" if exclude_list: # Add extra where to exclude databases in exclude_list operator = 'REGEXP' if options['use_regexp'] else 'LIKE' conditions = "AND {0}".format(" AND ".join( ["SCHEMA_NAME NOT {0} '{1}'".format(operator, db) for db in exclude_list]) ) server1_dbs = set( [db[0] for db in server1.exec_query(get_dbs_query.format(conditions))] ) server2_dbs = set( [db[0] for db in server2.exec_query(get_dbs_query.format(conditions))] ) # Check missing databases if options['changes-for'] == 'server1': diff_dbs = server1_dbs.difference(server2_dbs) for db in diff_dbs: msg = _ERROR_DB_MISSING_ON_SERVER.format(db, "server1", "server2") print("# {0}".format(msg)) else: diff_dbs = server2_dbs.difference(server1_dbs) for db in diff_dbs: msg = _ERROR_DB_MISSING_ON_SERVER.format(db, "server2", "server1") print("# {0}".format(msg)) # Compare databases in common common_dbs = server1_dbs.intersection(server2_dbs) if common_dbs: print("# Comparing databases: {0}".format(", ".join(common_dbs))) else: success = None for db in common_dbs: try: res = database_compare(server1_val, server2_val, db, db, options) if not res: success = False print("\n") except UtilError as err: print("ERROR: {0}\n".format(err.errmsg)) success = False return success mysql-utilities-1.6.4/mysql/utilities/command/read_frm.py0000644001577100752670000004417512747670311023334 0ustar pb2usercommon# # Copyright (c) 2013, Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This file contains the command to read a frm file. It requires a .frm filename, and general options (verbosity, etc.). 
""" import os import re import shutil import subprocess import sys import tempfile import uuid from mysql.utilities.exception import UtilError from mysql.utilities.command import serverclone from mysql.utilities.command.serverclone import user_change_as_root from mysql.utilities.common.frm_reader import FrmReader from mysql.utilities.common.server import Server, stop_running_server from mysql.utilities.common.tools import (requires_encoding, encode, requires_decoding, decode) # The following are storage engines that cannot be read in default mode _CANNOT_READ_ENGINE = ["PARTITION", "PERFORMANCE_SCHEMA"] _SPAWN_SERVER_ERROR = ("Spawn server operation failed{0}. To diagnose, run " "the utility again and use the --verbosity option to " "view the messages from the spawned server and correct " "any errors presented then run the utility again.") def _get_frm_path(dbtablename, datadir, new_db=None): """Form the path and discover the db and name of a frm file. dbtablename[in] the database.table name in the format db:table datadir[in] the path to the data directory new_db[in] a new database name default = None == use existing db Returns tuple - (db, name, path) or raises an error if .frm file cannot be read """ # Form the path to the .frm file. There are two possibilities: # a) the user has specified a full path # b) the user has specified a db:table combination (with/without .frm) path_parts = os.path.split(dbtablename) if ':' in dbtablename and len(path_parts[0]) == 0: # here we use the datadir to form the path path_parts = dbtablename.split(":") db = path_parts[0] table = path_parts[1] if datadir is None: datadir = "" frm_path = os.path.join(datadir, table) elif len(path_parts) == 2 and ":" in path_parts[1]: db, table = path_parts[1].split(":", 1) frm_path = os.path.join(path_parts[0], table) else: # here we decipher the last folder as the database name frm_path = dbtablename db = None if len(path_parts) == 2: path, table = path_parts if path == '': db = None path = None elif len(path_parts) == 1: db = None path = None table = dbtablename if db is None and path: # find database from path folders = path.split(os.path.sep) if len(folders): db = folders[len(folders) - 1] # Check that the frm_path name has .frm. if not frm_path.lower().endswith(".frm"): frm_path += ".frm" # Strip the .frm from table if table.lower().endswith('.frm'): table = os.path.splitext(table)[0] if not os.access(frm_path, os.R_OK): raise UtilError("Cannot read .frm file from %s." % frm_path) if new_db: db = new_db return (db, table, frm_path) def _spawn_server(options): """Spawn a server to use for reading .frm files This method spawns a new server instance on the port specified by the user in the options dictionary. options[in] Options from user Returns tuple - (Server instance, new datdir) or raises exception on error """ verbosity = int(options.get("verbosity", 0)) quiet = options.get("quiet", False) new_port = options.get("port", 3310) user = options.get("user", None) start_timeout = int(options.get("start_timeout", 10)) # 1) create a directory to use for new datadir # If the user is not the same as the user running the script... 
if user_change_as_root(options): # Since Python libraries correctly restrict temporary folders to # the user who runs the script and /tmp is protected on some # platforms, we must create the folder in the current folder temp_datadir = os.path.join(os.getcwd(), str(uuid.uuid4())) os.mkdir(temp_datadir) else: temp_datadir = tempfile.mkdtemp() if verbosity > 1 and not quiet: print "# Creating a temporary datadir =", temp_datadir # 2) spawn a server pointed to temp if not quiet: if user: print("# Spawning server with --user={0}.".format(user)) print "# Starting the spawned server on port %s ..." % new_port, sys.stdout.flush() bootstrap_options = { 'new_data': temp_datadir, 'new_port': new_port, 'new_id': 101, 'root_pass': "root", 'mysqld_options': None, 'verbosity': verbosity if verbosity > 1 else 0, 'basedir': options.get("basedir"), 'delete': True, 'quiet': True if verbosity <= 1 else False, 'user': user, 'start_timeout': start_timeout, } if verbosity > 1 and not quiet: print try: serverclone.clone_server(None, bootstrap_options) except UtilError as error: if error.errmsg.startswith("Unable to communicate"): err = ". Clone server error: {0}".format(error.errmsg) proc_id = int(error.errmsg.split("=")[1].strip('.')) print("ERROR Attempting to stop failed spawned server. " " Process id = {0}.".format(proc_id)) if os.name == "posix": try: os.kill(proc_id, subprocess.signal.SIGTERM) except OSError: pass else: try: subprocess.Popen("taskkill /F /T /PID %i" % proc_id, shell=True) except: pass raise UtilError(_SPAWN_SERVER_ERROR.format(err)) else: raise if verbosity > 1 and not quiet: print "# Connecting to spawned server" conn = { "user": "root", "passwd": "root", "host": "127.0.0.1", "port": options.get("port"), } server_options = { 'conn_info': conn, 'role': "frm_reader_bootstrap", } server = Server(server_options) try: server.connect() except UtilError: raise UtilError(_SPAWN_SERVER_ERROR.format("")) if not quiet: print "done." return (server, temp_datadir) def _get_create_statement(server, temp_datadir, frm_file, version, options, quiet=False): """Get the CREATE statement for the .frm file This method attempts to read the CREATE statement by copying the .frm file, altering the storage engine in the .frm file to MEMORY and issuing a SHOW CREATE statement for the table/view. If this method returns None, the operation was successful and the CREATE statement was printed. If a string is returned, there was at least one error (which will be printed) and the .frm file was not readable. The returned frm file path can be used to tell the user to use the diagnostic mode for reading files byte-by-byte. See the method read_frm_files_diagnostic() above. server[in] Server instance temp_datadir[in] New data directory frm_file[in] Tuple containing (db, table, path) for .frm file version[in] Version string for the current server options[in] Options from user Returns string - None on success, path to frm file on error """ verbosity = int(options.get("verbosity", 0)) quiet = options.get("quiet", False) new_engine = options.get("new_engine", None) frm_dir = options.get("frm_dir", ".{0}".format(os.sep)) user = options.get('user', 'root') if not quiet: print "#\n# Reading the %s.frm file." 
% frm_file[1] try: # 1) copy the file db = frm_file[0] if not db or db == ".": db = "test" db_name = db + "_temp" new_path = os.path.normpath(os.path.join(temp_datadir, db_name)) if not os.path.exists(new_path): os.mkdir(new_path) new_frm = os.path.join(new_path, frm_file[1] + ".frm") # Check name for decoding and decode try: if requires_decoding(frm_file[1]): new_frm_file = decode(frm_file[1]) frm_file = (frm_file[0], new_frm_file, frm_file[2]) shutil.copy(frm_file[2], new_path) # Check name for encoding and encode elif requires_encoding(frm_file[1]): new_frm_file = encode(frm_file[1]) + ".frm" new_frm = os.path.join(new_path, new_frm_file) shutil.copy(frm_file[2], new_frm) else: shutil.copy(frm_file[2], new_path) except: _, e, _ = sys.exc_info() print("ERROR: {0}".format(e)) # Set permissons on copied file if user context in play if user_change_as_root(options): subprocess.call(['chown', '-R', user, new_path]) subprocess.call(['chgrp', '-R', user, new_path]) server.exec_query("CREATE DATABASE IF NOT EXISTS %s" % db_name) frm = FrmReader(db_name, frm_file[1], new_frm, options) frm_type = frm.get_type() server.exec_query("FLUSH TABLES") if frm_type == "TABLE": # 2) change engine if it is a table current_engine = frm.change_storage_engine() # Abort read if restricted engine found if current_engine[1].upper() in _CANNOT_READ_ENGINE: print ("ERROR: Cannot process tables with the %s storage " "engine. Please use the diagnostic mode to read the " "%s file." % (current_engine[1].upper(), frm_file[1])) return frm_file[2] # Check server version server_version = None if version and len(current_engine) > 1 and current_engine[2]: server_version = (int(current_engine[2][0]), int(current_engine[2][1:3]), int(current_engine[2][3:])) if verbosity > 1 and not quiet: print ("# Server version in file: %s.%s.%s" % server_version) if not server.check_version_compat(server_version[0], server_version[1], server_version[2]): versions = (server_version[0], server_version[1], server_version[2], version[0], version[1], version[2]) print ("ERROR: The server version for this " "file is too low. It requires a server version " "%s.%s.%s or higher but your server is version " "%s.%s.%s. Try using a newer server or use " "diagnostic mode." % versions) return frm_file[2] # 3) show CREATE TABLE res = server.exec_query("SHOW CREATE TABLE `%s`.`%s`" % (db_name, frm_file[1])) create_str = res[0][1] if new_engine: create_str = create_str.replace("ENGINE=MEMORY", "ENGINE=%s" % new_engine) elif not current_engine[1].upper() == "MEMORY": create_str = create_str.replace("ENGINE=MEMORY", "ENGINE=%s" % current_engine[1]) if frm_file[0] and not frm_file[0] == ".": create_str = create_str.replace("CREATE TABLE ", "CREATE TABLE `%s`." % frm_file[0]) # if requested, generate the new .frm with the altered engine if new_engine: server.exec_query("ALTER TABLE `{0}`.`{1}` " "ENGINE={2}".format(db_name, frm_file[1], new_engine)) new_frm_file = os.path.join(frm_dir, "{0}.frm".format(frm_file[1])) if os.path.exists(new_frm_file): print("#\n# WARNING: Unable to create new .frm file. " "File exists.") else: try: shutil.copyfile(new_frm, new_frm_file) print("# Copy of .frm file with new storage " "engine saved as {0}.".format(new_frm_file)) except (IOError, OSError, shutil.Error) as e: print("# WARNING: Unable to create new .frm file. 
" "Error: {0}".format(e)) elif frm_type == "VIEW": # 5) show CREATE VIEW res = server.exec_query("SHOW CREATE VIEW %s.%s" % (db_name, frm_file[1])) create_str = res[0][1] if frm_file[0]: create_str = create_str.replace("CREATE VIEW ", "CREATE VIEW `%s`." % frm_file[0]) # Now we must replace the string for storage engine! print "#\n# CREATE statement for %s:\n#\n" % frm_file[2] print create_str print if frm_type == "TABLE" and options.get("show_stats", False): frm.show_statistics() except: print ("ERROR: Failed to correctly read the .frm file. Please try " "reading the file with the --diagnostic mode.") return frm_file[2] return None def read_frm_files_diagnostic(frm_files, options): """Read a a list of frm files. This method reads a list of .frm files and displays the CREATE TABLE or CREATE VIEW statement for each. This method initiates a byte-by-byte read of the file. frm_files[in] list of the database.table names in the format db:table options[in] options for reading the .frm file """ datadir = options.get("datadir", None) show_stats = options.get("show_stats", False) for frm_file in frm_files: db, table, frm_path = _get_frm_path(frm_file, datadir, None) frm = FrmReader(db, table, frm_path, options) frm.show_create_table_statement() if show_stats: frm.show_statistics() return True def read_frm_files(file_names, options): """Read frm files using a spawned (bootstrapped) server. This method reads the list of frm files by spawning a server then copying the .frm files, changing the storage engine to memory, issuing a SHOW CREATE command, then resetting the storage engine and printing the resulting CREATE statement. file_names[in] List of files to read options[in] Options from user Returns list - list of .frm files that cannot be read. """ test_port = options.get("port", None) test_basedir = options.get("basedir", None) test_server = options.get("server", None) if not test_port or (not test_basedir and not test_server): raise UtilError("Method requires basedir or server and port options.") verbosity = int(options.get("verbosity", 0)) quiet = options.get("quiet", False) datadir = options.get("datadir", None) # 1) for each .frm, determine its type and db, table name if verbosity > 1 and not quiet: print "# Checking read access to .frm files " frm_files = [] for file_name in file_names: db, table, frm_path = _get_frm_path(file_name, datadir) if not os.access(frm_path, os.R_OK): print "ERROR: Unable to read the file %s." % frm_path + \ "You must have read access to the .frm file." frm_files.append((db, table, frm_path)) # 2) Spawn the server server, temp_datadir = _spawn_server(options) version_str = server.get_version() match = re.match(r'^(\d+\.\d+(\.\d+)*).*$', version_str.strip()) if match: version = [int(x) for x in match.group(1).split('.')] version = (version + [0])[:3] # Ensure a 3 elements list else: print ("# WARNING: Error parsing server version %s. Cannot compare " "version of .frm file." 
% version_str) version = None failed_reads = [] if not quiet: print "# Reading .frm files" try: for frm_file in frm_files: # 3) For each .frm file, get the CREATE statement frm_err = _get_create_statement(server, temp_datadir, frm_file, version, options) if frm_err: failed_reads.append(frm_err) except UtilError as error: raise UtilError(error.errmsg) finally: # 4) shutdown the spawned server if verbosity > 1 and not quiet: print "# Shutting down spawned server" print "# Removing the temporary datadir" if user_change_as_root(options): try: os.unlink(temp_datadir) except OSError: pass # ignore if we cannot delete stop_running_server(server) return failed_reads mysql-utilities-1.6.4/mysql/utilities/command/diskusage.py0000644001577100752670000010362412747670311023527 0ustar pb2usercommon# # Copyright (c) 2011, 2016 Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This file contains the reporting mechanisms for reporting disk usage. """ import locale import os import sys from mysql.utilities.exception import UtilError from mysql.utilities.common.format import print_list from mysql.utilities.common.tools import encode from mysql.utilities.common.user import User # Constants _KB = 1024.0 _MB = 1024.0 * _KB _GB = 1024.0 * _MB _TB = 1024.0 * _GB _QUERY_DATAFREE = """ SELECT DISTINCT data_free FROM INFORMATION_SCHEMA.TABLES WHERE UPPER(engine) = 'INNODB' """ _QUERY_DBSIZE = """ SELECT table_schema AS db_name, SUM(data_length + index_length) AS size FROM INFORMATION_SCHEMA.TABLES %s GROUP BY db_name """ def _print_size(prefix, total): """Print size formatted with commas and estimated to the largest XB. prefix[in] The preamble to the size. e.g. "Total XXX =" total[in] Integer value to format. """ msg = "{0}{1} bytes".format(prefix, locale.format("%d", total, grouping=True)) # Calculate largest XByte... if total > _TB: converted = total / _TB print("{0} or {1} TB".format(msg, locale.format("%.2f", converted, grouping=True))) elif total > _GB: converted = total / _GB print("{0} or {1} GB".format(msg, locale.format("%.2f", converted, grouping=True))) elif total > _MB: converted = total / _MB print("{0} or {1} MB".format(msg, locale.format("%.2f", converted, grouping=True))) elif total > _KB: converted = total / _KB print("{0} or {1} KB".format(msg, locale.format("%.2f", converted, grouping=True))) else: print(msg) def _get_formatted_max_width(rows, columns, col): """Return the max width for a numeric column. 
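
# _print_size() above scales a byte count to the largest binary unit it
# exceeds (1 KB = 1024 bytes). A compact sketch of the same thresholds,
# minus the locale-aware grouping:
def sketch_scale_bytes(total):
    for unit, size in (('TB', 1024.0 ** 4), ('GB', 1024.0 ** 3),
                       ('MB', 1024.0 ** 2), ('KB', 1024.0)):
        if total > size:
            return "{0:.2f} {1}".format(total / size, unit)
    return "{0} bytes".format(total)

# e.g. sketch_scale_bytes(5 * 1024 * 1024) == '5.00 MB'
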
list[in] The list to search col[in] Column number to search return (int) maximum width of character representation """ width = 0 if rows is None or rows == [] or col >= len(rows[0]): return width for row in rows: size = len(locale.format("%d", row[col], grouping=True)) col_size = len(columns[col]) if size > width: width = size if col_size > width: width = col_size return int(width) def _get_folder_size(folder): """Get size of folder (directory) and all its contents folder[in] Folder to calculate return (int) size of folder or 0 if empty or None if not exists or error """ try: total_size = os.path.getsize(folder) except: return None for item in os.listdir(folder): itempath = os.path.join(folder, item) if os.path.isfile(itempath): total_size += os.path.getsize(itempath) elif os.path.isdir(itempath): total_size += _get_folder_size(itempath) return total_size def _get_db_dir_size(folder): """Calculate total disk space used for a given directory. This method will sum all files in the directory except for the MyISAM files (.myd, .myi). folder[in] The folder to sum returns (int) sum of files in directory or None if not exists """ try: total_size = os.path.getsize(folder) except: return None for item in os.listdir(folder): name, ext = os.path.splitext(item) if ext.upper() not in (".MYD", ".MYI", ".IBD") and \ name.upper() not in ('SLOW_LOG', 'GENERAL_LOG'): itemfolder = os.path.join(folder, item) if os.path.isfile(itemfolder): total_size += os.path.getsize(itemfolder) elif os.path.isdir(itemfolder): total_size += _get_db_dir_size(itemfolder) return total_size def _find_tablespace_files(folder, verbosity=0): """Find all tablespace files located in the datadir. folder[in] The folder to search return (tuple) (tablespaces[], total_size) """ total = 0 tablespaces = [] # skip inaccessible files. try: for item in os.listdir(folder): itempath = os.path.join(folder, item) _, ext = os.path.splitext(item) if os.path.isfile(itempath): _, ext = os.path.splitext(item) if ext.upper() == "IBD": size = os.path.getsize(itempath) total += size if verbosity > 0: row = (item, size, 'file tablespace', '') else: row = (item, size) tablespaces.append(row) else: subdir, tot = _find_tablespace_files(itempath, verbosity) if subdir is not None: total += tot tablespaces.extend(subdir) except: return (None, None) return tablespaces, total def _build_logfile_list(server, log_name, suffix='_file'): """Build a list of all log files based on the system variable by the same name as log_name. server[in] Connected server log_name[in] Name of log (e.g. slow_query_log) suffix[in] Suffix of log variable name (e.g. slow_query_log_file) default = '_file' return (tuple) (logfiles[], path to log files, total size) """ log_path = None res = server.show_server_variable(log_name) if res != [] and res[0][1].upper() == 'OFF': print "# The %s is turned off on the server." % log_name else: res = server.show_server_variable(log_name + suffix) if res == []: raise UtilError("Cannot get %s_file setting." % log_name) log_path = res[0][1] if os.access(log_path, os.R_OK): parts = os.path.split(log_path) if len(parts) <= 1: log_file = log_path else: log_file = parts[1] log_path_size = os.path.getsize(log_path) return (log_file, log_path, int(log_path_size)) return None, 0, 0 def _get_log_information(server, log_name, suffix='_file', is_remote=False): """Get information about a specific log. This method checks the system variable of the log_name passed to see if it is turned on. 
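
# _get_folder_size() above recurses directory-by-directory; the same total
# can be computed with os.walk, which sidesteps Python's recursion limit on
# very deep data directories. A sketch (entries that disappear mid-walk are
# skipped):
import os

def sketch_folder_size(folder):
    total = os.path.getsize(folder)
    for root, dirs, files in os.walk(folder):
        for name in dirs + files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass
    return total
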
If turned on, the method returns a list of the log files and the total of the log files. server[in] Connected server log_name[in] Variable name for the log (e.g. slow_query_log) suffix[in] Suffix of log variable name (e.g. slow_query_log_file) default = '_file' is_remote[in] True is a remote server returns (tuple) (log files, total size) """ if is_remote: print("# {0} information not accessible from a remote host." "".format(log_name)) return (None, 0,) res = server.show_server_variable(log_name) if res != [] and res[0][1].upper() == 'OFF': print "# The %s is turned off on the server." % log_name else: log_file, log_path, log_size = _build_logfile_list(server, log_name, suffix) if log_file is None or log_path is None or \ not os.access(log_path, os.R_OK): print "# %s information is not accessible. " % log_name + \ "Check your permissions." return None, 0 return log_file, log_size return None, 0 def _build_log_list(folder, prefix): """Build a list of all binary log files based on the prefix for the name. Return total size of all files found. folder[in] Folder to search prefix[in] Prefix of log name (e.g. mysql-bin) return (tuple) (binlogfiles[], total size) """ total_size = 0 binlogs = [] if prefix is not None: for item in os.listdir(folder): name, _ = os.path.splitext(item) if name.upper() == prefix.upper(): itempath = os.path.join(folder, item) if os.path.isfile(itempath): size = os.path.getsize(itempath) binlogs.append((item, size)) total_size += os.path.getsize(itempath) binlogs.sort() return binlogs, total_size def _build_innodb_list(per_table, folder, datadir, specs, verbosity=0): """Build a list of all InnoDB files. This method builds a list of all InnoDB tablespace files and related files. It will search all database directories if per_table is True. Returns total size of all files found. The verbosity option controls how much data is shown: 0 : no additional information > 0 : include type and specification (for shared tablespaces) per_table[in] If True, look for individual tablespaces folder[in] Folder to search datadir[in] Data directory specs[in] List of specifications verbosity[in] Determines how much information to display return (tuple) (tablespacefiles[], total size) """ total_size = 0 tablespaces = [] # Here, we want to capture log files as well as tablespace files. if specs is not None: for item in os.listdir(folder): name, _ = os.path.splitext(item) # Check specification list for spec in specs: parts = spec.split(":") if len(parts) < 1: break if name.upper() == parts[0].upper(): itempath = os.path.join(folder, item) if os.path.isfile(itempath): size = os.path.getsize(itempath) if verbosity > 0: row = (item, size, 'shared tablespace', spec) else: row = (item, size) tablespaces.append(row) total_size += os.path.getsize(itempath) elif name[0:6].upper() == "IB_LOG": itempath = os.path.join(folder, item) if os.path.isfile(itempath): size = os.path.getsize(itempath) if verbosity > 0: row = (item, size, 'log file', '') else: row = (item, size) if row not in tablespaces: tablespaces.append(row) total_size += os.path.getsize(itempath) # Check to see if innodb_file_per_table is ON if per_table: tablespace_files, total = _find_tablespace_files(datadir, verbosity) tablespaces.extend(tablespace_files) total_size += total tablespaces.sort() return tablespaces, total_size def _build_db_list(server, rows, include_list, datadir, fmt=False, have_read=False, verbosity=0, include_empty=True, is_remote=False): """Build a list of all databases and their totals. 
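
# _build_log_list() above matches each file name, minus its numeric
# extension, against the log prefix (e.g. 'mysql-bin.000001' ->
# 'mysql-bin'). A standalone sketch of that filter:
import os

def sketch_collect_logs(folder, prefix):
    logs, total = [], 0
    for item in sorted(os.listdir(folder)):
        name, _ = os.path.splitext(item)
        path = os.path.join(folder, item)
        if name.upper() == prefix.upper() and os.path.isfile(path):
            size = os.path.getsize(path)
            logs.append((item, size))
            total += size
    return logs, total
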
This method reads a list of databases and their calculated sizes and builds a new list of the databases searching the datadir provided and adds the size of the miscellaneous files. The size of the database is calculated based on the ability of the user to read the datadir. If user has read access to the datadir, the total returned will be the calculation of the data_length+index_length from INFORMATION_SCHEMA.TABLES plus the sum of all miscellaneous files (e.g. trigger files, .frm files, etc.). If the user does not have read access to the datadir, only the calculated size is returned. If format is True, the columns and rows returned will be formatted to a constant width using locale-specific options for printing numerals. For example, US locale formats 12345 as 12,345. The verbosity option controls how much data is shown: 0 : no additional information > 0 : include data size (calculated) and size of misc files >= 2 : also include database directory actual size server[in] Connected server rows[in] A list of databases and their calculated sizes include_list[in] A list of databases included on the command line datadir[in] The data directory fmt[in] If True, format columns and rows to standard sizes have_read[in] If True, user has read access to datadir path verbosity[in] Controls how much data is shown include_empty[in] Include empty databases in list is_remote[in] True is a remote server return (tuple) (column headers, rows, total size) """ total = 0 results = [] max_col = 0 # build the list for row in rows: # If user can read the datadir, calculate actual and misc file totals if have_read and not is_remote: # Encode database name (with strange characters) to the # corresponding directory name. db_dir = encode(row[0]) dbdir_size = _get_folder_size(os.path.join(datadir, db_dir)) misc_files = _get_db_dir_size(os.path.join(datadir, db_dir)) else: dbdir_size = 0 misc_files = 0 if row[1] is None: data_size = 0 db_total = 0 else: data_size = int(row[1]) db_total = dbdir_size # Count total for all databases total += dbdir_size if have_read and not is_remote: if verbosity >= 2: # get all columns results.append((row[0], dbdir_size, data_size, misc_files, db_total)) elif verbosity > 0: results.append((row[0], data_size, misc_files, db_total)) else: results.append((row[0], db_total)) else: results.append((row[0], db_total)) if have_read and not is_remote and verbosity > 0: num_cols = min(verbosity + 2, 4) else: num_cols = 1 # Build column list and format if necessary col_list = ['db_name'] if num_cols == 4: # get all columns col_list.append('db_dir_size') col_list.append('data_size') col_list.append('misc_files') col_list.append('total') elif num_cols == 3: col_list.append('data_size') col_list.append('misc_files') col_list.append('total') else: col_list.append('total') fmt_cols = [] max_col = [0, 0, 0, 0] if fmt: fmt_cols.append(col_list[0]) for i in range(0, num_cols): max_col[i] = _get_formatted_max_width(results, col_list, i + 1) fmt_cols.append("{0:>{1}}".format(col_list[i + 1], max_col[i])) else: fmt_cols = col_list # format the list if needed fmt_rows = [] if fmt: for row in results: fmt_data = ['', '', '', '', ''] # Put in commas and justify strings for i in range(0, num_cols): fmt_data[i] = locale.format("%d", row[i + 1], grouping=True) if num_cols == 4: # get all columns fmt_rows.append((row[0], fmt_data[0], fmt_data[1], fmt_data[2], fmt_data[3])) elif num_cols == 3: fmt_rows.append((row[0], fmt_data[0], fmt_data[1], fmt_data[2])) else: fmt_rows.append((row[0], fmt_data[0])) else: fmt_rows = 
results if include_empty: dbs = server.exec_query("SHOW DATABASES") if len(fmt_rows) != len(dbs) - 1: # We have orphaned database - databases not listed in IS.TABLES exclude_list = [] for row in fmt_rows: exclude_list.append(row[0]) for db in dbs: if db[0].upper() != "INFORMATION_SCHEMA" and \ db[0] not in exclude_list and \ (include_list is None or include_list == [] or db[0] in include_list): if fmt: fmt_data = ['', '', '', '', ''] for i in range(0, num_cols): if type(row[i + 1]) == type(int): fmt_data[i] = locale.format("%s", int(row[i + 1]), grouping=True) else: fmt_data[i] = locale.format("%s", row[i + 1], grouping=True) if num_cols == 4: # get all columns fmt_rows.insert(0, (db[0], fmt_data[0], fmt_data[1], fmt_data[2], fmt_data[3])) elif num_cols == 3: fmt_rows.insert(0, (db[0], fmt_data[0], fmt_data[1], fmt_data[2])) else: fmt_rows.insert(0, (db[0], fmt_data[0])) else: if num_cols == 4: fmt_rows.insert(0, (db[0], 0, 0, 0, 0)) elif num_cols == 3: fmt_rows.insert(0, (db[0], 0, 0, 0)) else: fmt_rows.insert(0, (db[0], 0)) return (fmt_cols, fmt_rows, total) def show_database_usage(server, datadir, dblist, options): """Show database usage. Display a list of databases and their disk space usage. The method accepts a list of databases to list or None or [] for all databases. server[in] Connected server to operate against datadir[in] The datadir for the server dblist[in] List of databases options[in] Required options for operation: format, no_headers, verbosity, have_read, include_empty returns True or exception on error """ fmt = options.get("format", "grid") no_headers = options.get("no_headers", False) verbosity = options.get("verbosity", 0) have_read = options.get("have_read", False) is_remote = options.get("is_remote", False) include_empty = options.get("do_empty", True) do_all = options.get("do_all", True) quiet = options.get("quiet", False) if verbosity is None: verbosity = 0 locale.setlocale(locale.LC_ALL, '') # Check to see if we're doing all databases. if len(dblist) > 0: include_list = "(" stop = len(dblist) for i in range(0, stop): include_list += "'%s'" % dblist[i] if i < stop - 1: include_list += ", " include_list += ")" where_clause = "WHERE table_schema IN %s" % include_list where_clause += " AND table_schema != 'INFORMATION_SCHEMA'" else: where_clause = "WHERE table_schema != 'INFORMATION_SCHEMA'" res = server.exec_query(_QUERY_DBSIZE % where_clause) # Get list of databases with sizes and formatted when necessary columns, rows, db_total = _build_db_list(server, res, dblist, datadir, fmt == "grid", have_read, verbosity, include_empty or do_all, is_remote) if not quiet: print "# Database totals:" print_list(sys.stdout, fmt, columns, rows, no_headers) if not quiet: _print_size("\nTotal database disk usage = ", db_total) print return True def show_logfile_usage(server, options): """Show log file disk space usage. Display log file information if logs are turned on. server[in] Connected server to operate against datadir[in] The datadir for the server options[in] Required options for operation: format, no_headers return True or raise exception on error """ fmt = options.get("format", "grid") no_headers = options.get("no_headers", False) is_remote = options.get("is_remote", False) quiet = options.get("quiet", False) if not quiet: print "# Log information." 
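
# The --include filter above is rendered as a quoted IN (...) list appended
# to the INFORMATION_SCHEMA.TABLES query. A sketch of the same WHERE-clause
# assembly (escaping of database names is elided, as in the original):
def sketch_db_where_clause(dblist):
    if dblist:
        include = ", ".join("'{0}'".format(db) for db in dblist)
        return ("WHERE table_schema IN ({0}) "
                "AND table_schema != 'INFORMATION_SCHEMA'".format(include))
    return "WHERE table_schema != 'INFORMATION_SCHEMA'"

# e.g. sketch_db_where_clause(['db1', 'db2']) limits the size query to
# those two schemas.
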
total = 0 _LOG_NAMES = [ ('general_log', '_file'), ('slow_query_log', '_file'), ('log_error', '') ] logs = [] for log_name in _LOG_NAMES: (log, size,) = _get_log_information(server, log_name[0], log_name[1], is_remote) if log is not None: logs.append((log, size)) total += size fmt_logs = [] columns = ['log_name', 'size'] if len(logs) > 0: if fmt == 'grid': max_col = _get_formatted_max_width(logs, columns, 1) if max_col < len('size'): max_col = len('size') size = "{0:>{1}}".format('size', max_col) columns = ['log_name', size] for row in logs: # Add commas size = locale.format("%d", row[1], grouping=True) # Make justified strings size = "{0:>{1}}".format(size, max_col) fmt_logs.append((row[0], size)) else: fmt_logs = logs print_list(sys.stdout, fmt, columns, fmt_logs, no_headers) if not quiet: _print_size("\nTotal size of logs = ", total) print return True def _print_logs(logs, total, options): """Display list of log files. logs[in] List of log rows; total[in] Total logs size; options[in] Dictionary with the options used to print the log files, namely: format, no_headers and quiet. """ out_format = options.get("format", "grid") no_headers = options.get("no_headers", False) log_type = options.get("log_type", "binary log") quiet = options.get("quiet", False) columns = ['log_file'] fmt_logs = [] if out_format == 'GRID': max_col = _get_formatted_max_width(logs, ('log_file', 'size'), 1) if max_col < len('size'): max_col = len('size') size = "{0:>{1}}".format('size', max_col) columns.append(size) for row in logs: # Add commas size = locale.format("%d", row[1], grouping=True) # Make justified strings size = "{0:>{1}}".format(size, max_col) fmt_logs.append((row[0], size)) else: fmt_logs = logs columns.append('size') print_list(sys.stdout, out_format, columns, fmt_logs, no_headers) if not quiet: _print_size("\nTotal size of {0}s = ".format(log_type), total) print def show_log_usage(server, datadir, options): """Show binary or relay log disk space usage. Display binary log file information if binlog turned on if log_type = 'binary log' (default) or show relay log file information is server is a slave and relay log is engaged. server[in] Connected server to operate against datadir[in] The datadir for the server options[in] Required options for operation: format, no_headers. log_type return True or raise exception on error """ log_type = options.get("log_type", "binary log") have_read = options.get("have_read", False) is_remote = options.get("is_remote", False) quiet = options.get("quiet", False) # Check privileges to execute required queries: SUPER or REPLICATION CLIENT user_inst = User(server, "{0}@{1}".format(server.user, server.host)) has_super = user_inst.has_privilege("*", "*", "SUPER") has_rpl_client = user_inst.has_privilege("*", "*", "REPLICATION CLIENT") # Verify necessary permissions (access to filesystem) and privileges # (execute queries) to get logs usage information. if log_type == 'binary log': # Check for binlog ON first. res = server.show_server_variable('log_bin') if res and res[0][1].upper() == 'OFF': print("# Binary logging is turned off on the server.") return True # Check required privileges according to the access to the datadir. if not is_remote and have_read: # Requires SUPER or REPLICATION CLIENT to execute: # SHOW MASTER STATUS. if not has_super and not has_rpl_client: print("# {0} information not accessible. 
User must have the " "SUPER or REPLICATION CLIENT " "privilege.".format(log_type.capitalize())) return True else: # Requires SUPER for server < 5.6.6 or also REPLICATION CLIENT for # server >= 5.6.6 to execute: SHOW BINARY LOGS. if (server.check_version_compat(5, 6, 6) and not has_super and not has_rpl_client): print("# {0} information not accessible. User must have the " "SUPER or REPLICATION CLIENT " "privilege.".format(log_type.capitalize())) return True elif not has_super: print("# {0} information not accessible. User must have the " "SUPER " "privilege.".format(log_type.capitalize())) return True else: # relay log # Requires SUPER or REPLICATION CLIENT to execute SHOW SLAVE STATUS. if not has_super and not has_rpl_client: print("# {0} information not accessible. User must have the " "SUPER or REPLICATION CLIENT " "privilege.".format(log_type.capitalize())) return True # Can only retrieve usage information from the localhost filesystem. if is_remote: print("# {0} information not accessible from a remote host." "".format(log_type.capitalize())) return True elif not have_read: print("# {0} information not accessible. Check your permissions " "to {1}.".format(log_type.capitalize(), datadir)) return True # Check server status and availability of specified log file type. if log_type == 'binary log': try: res = server.exec_query("SHOW MASTER STATUS") if res: current_log = res[0][0] else: print("# Cannot access files - no binary log information") return True except: raise UtilError("Cannot get {0} information.".format(log_type)) else: try: res = server.exec_query("SHOW SLAVE STATUS") if res: current_log = res[0][7] else: print("# Server is not an active slave - no relay log " "information.") return True except: raise UtilError("Cannot get {0} information.".format(log_type)) # Enough permissions and privileges, get the usage information. if not quiet: print("# {0} information:".format(log_type.capitalize())) print("Current {0} file = {1}".format(log_type, current_log)) if log_type == 'binary log' and (is_remote or not have_read): # Retrieve binlog usage info from SHOW BINARY LOGS. try: logs = server.exec_query("SHOW BINARY LOGS") if logs: # Calculate total size. total = sum([int(item[1]) for item in logs]) else: print("# No binary logs data available.") return True except: raise UtilError("Cannot get {0} information.".format(log_type)) else: # Retrieve usage info from localhost filesystem. # Note: as of 5.6.2, users can specify location of binlog and relaylog. if server.check_version_compat(5, 6, 2): if log_type == 'binary log': res = server.show_server_variable("log_bin_basename")[0] else: res = server.show_server_variable("relay_log_basename")[0] log_path, log_prefix = os.path.split(res[1]) # In case log_path and log_prefix are '' (not defined) set them # to the default value. if not log_path: log_path = datadir if not log_prefix: log_prefix = os.path.splitext(current_log)[0] else: log_path = datadir log_prefix = os.path.splitext(current_log)[0] logs, total = _build_log_list(log_path, log_prefix) if not logs: raise UtilError("The {0}s are missing.".format(log_type)) # Print logs usage information. _print_logs(logs, total, options) return True def show_innodb_usage(server, datadir, options): """Show InnoDB tablespace disk space usage. Display InnoDB tablespace information if InnoDB turned on. 
server[in] Connected server to operate against datadir[in] The datadir for the server options[in] Required options for operation: format, no_headers return True or raise exception on error """ fmt = options.get("format", "grid") no_headers = options.get("no_headers", False) is_remote = options.get("is_remote", False) verbosity = options.get("verbosity", 0) quiet = options.get("quiet", False) # Check to see if we have innodb res = server.show_server_variable('have_innodb') if res != [] and res[0][1].upper() in ("NO", "DISABLED"): print "# InnoDB is disabled on this server." return True # Modified check for version 5.5 res = server.exec_query("USE INFORMATION_SCHEMA") res = server.exec_query("SELECT engine, support " "FROM INFORMATION_SCHEMA.ENGINES " "WHERE engine='InnoDB'") if res != [] and res[0][1].upper() == "NO": print "# InnoDB is disabled on this server." return True # Check to see if innodb_file_per_table is ON res = server.show_server_variable('innodb_file_per_table') if res != [] and res[0][1].upper() == "ON": innodb_file_per_table = True else: innodb_file_per_table = False # Get path res = server.show_server_variable('innodb_data_home_dir') if res != [] and len(res[0][1]) > 0: innodb_dir = res[0][1] else: innodb_dir = datadir if not is_remote and os.access(innodb_dir, os.R_OK): if not quiet: print "# InnoDB tablespace information:" res = server.show_server_variable('innodb_data_file_path') tablespaces = [] if res != [] and len(res[0][1]) > 0: parts = res[0][1].split(";") for part in parts: tablespaces.append(part) innodb, total = _build_innodb_list(innodb_file_per_table, innodb_dir, datadir, tablespaces, verbosity) if innodb == []: raise UtilError("InnoDB is enabled but there is a problem " "reading the tablespace files.") columns = ['innodb_file', 'size'] if verbosity > 0: columns.append('type') columns.append('specificaton') size = 'size' fmt_innodb = [] if fmt.upper() == 'GRID': max_col = _get_formatted_max_width(innodb, columns, 1) if max_col < len('size'): max_col = len('size') size = "{0:>{1}}".format('size', max_col) columns = ['innodb_file'] columns.append(size) if verbosity > 0: columns.append('type') columns.append('specificaton') for row in innodb: # Add commas size = locale.format("%d", row[1], grouping=True) # Make justified strings size = "{0:>{1}}".format(size, max_col) if verbosity > 0: fmt_innodb.append((row[0], size, row[2], row[3])) else: fmt_innodb.append((row[0], size)) else: fmt_innodb = innodb print_list(sys.stdout, fmt, columns, fmt_innodb, no_headers) if not quiet: _print_size("\nTotal size of InnoDB files = ", total) print if verbosity > 0 and not innodb_file_per_table and not quiet: for tablespace in innodb: if tablespace[1] != 'log file': parts = tablespace[3].split(":") if len(parts) > 2: size = int(tablespace[1]) / _MB print "Tablespace %s can be " % tablespace[3] + \ "extended by using %s:%sM[...]\n" % \ (parts[0], size) elif is_remote: print("# InnoDB data information not accessible from a remote host.") else: print "# InnoDB data file information is not accessible. " + \ "Check your permissions." if not innodb_file_per_table: res = server.exec_query(_QUERY_DATAFREE) if res != []: if len(res) > 1: raise UtilError("Found multiple rows for freespace.") else: size = int(res[0][0]) if not quiet: _print_size("InnoDB freespace = ", size) print return True mysql-utilities-1.6.4/mysql/utilities/command/utilitiesconsole.py0000644001577100752670000003521512747670311025146 0ustar pb2usercommon# # Copyright (c) 2011, 2015, Oracle and/or its affiliates. 
All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This file contains the utilities console mechanism. """ import os import tempfile import subprocess from mysql.utilities.exception import UtilError from mysql.utilities.common.console import Console from mysql.utilities.common.format import print_dictionary_list from mysql.utilities.common.utilities import Utilities, get_util_path # The following are additional base commands for the console. These comamnds # are in addition to the supported base commands in the Console class. Thus, # these are specific to mysqluc. # # The list includes a tuple for each command that contains the name of the # command, alias (if available) and its help text. _NEW_BASE_COMMANDS = [ {'name': 'help utilities', 'alias': '', 'text': 'Display list of all utilities supported.'}, {'name': 'help ', 'alias': '', 'text': 'Display help for a specific utility.'}, {'name': 'show errors', 'alias': '', 'text': 'Display errors captured during the execution of the utilities.'}, {'name': 'clear errors', 'alias': '', 'text': 'clear captured errors.'}, {'name': 'show last error', 'alias': '', 'text': 'Display the last error captured during the execution of the' ' utilities'} ] _UTILS_MISSING = "MySQL Utilities are either not installed or " + \ "are not accessible from this terminal." class UtilitiesConsole(Console): """ The UtilitiesConsole class creates a console for running MySQL Utilities. This class uses the Console class to encapsulate the screen handling and key captures for a command line shell. This subclass provides the custom commands (the utilities) to the console class for redirecting to the methods contained in this class for executing utilities. These include: - matching command from the shell to available utilities - matching options from the shell to the options for a given utility - showing the help for a utility """ def __init__(self, options=None): """Constructor options[in] Array of options for controlling what is included and how operations perform (e.g., verbose) """ if options is None: options = {} Console.__init__(self, _NEW_BASE_COMMANDS, options) try: self.path = get_util_path(options.get("utildir", "")) if self.path is None: raise except: raise UtilError(_UTILS_MISSING) self.utils = Utilities(options) self.errors = [] if self.quiet: self.f_out = tempfile.NamedTemporaryFile(delete=False) print("Quiet mode, saving output to {0}".format(self.f_out.name)) else: self.f_out = None def show_custom_command_help(self, arg): """Display the help for a utility This method will display a list of the available utilities if the command argument is 'utilities' or the help for a specific utility if the command argument is the name of a known utility. 
arg[in] Help command argument """ if self.quiet: return if arg and arg.lower() == 'utilities': self.utils.show_utilities() else: matches = self.utils.get_util_matches(arg) if len(matches) > 1: self.utils.show_utilities(matches) elif len(matches) == 1: self.show_utility_help(matches) else: print("\n\nCannot find utility '{0}'.\n".format(arg)) def do_custom_tab(self, prefix): """Do custom tab key processing This method performs the tab completion for a utility name. It searches the available utilties for the prefix of the utility name. If an exact match is found, it updates the command else it returns a list of matches. If the user has pressed TAB twice, it will display a list of all of the utilities available. prefix[in] Prefix of the utility name """ new_cmd = '' # blank string means no matches find_cmd = prefix if len(prefix) >= 5 and prefix[0:5] != 'mysql': find_cmd = 'mysql' + find_cmd matches = self.utils.get_util_matches(find_cmd) if self.tab_count == 2: self.utils.show_utilities(matches) self.cmd_line.display_command() self.tab_count = 0 # Do command completion here elif len(matches) == 1: new_cmd = matches[0]['name'] + ' ' start = len(prefix) if prefix[0:5] != 'mysql': start += 5 self.cmd_line.add(new_cmd[start:]) self.tab_count = 0 def do_custom_option_tab(self, command_text): """Do custom option tab key processing This method performs the tab completion for the options for a utility. It splits the command text into the utility name and requests the option prefix from the command line. If the user presses TAB twice, the method will display all of the options for the specified utility. command_text[in] Portion of command from the position of the cursor Returns string - '' if not found, the remaining portion of the match of the option if found (for example, we look for '--verb' and find '--verbose' so we return 'ose'). """ option_loc = 0 option = '' cmd_len = len(command_text) full_command = self.cmd_line.get_command() # find utility name i = full_command.find(' ') if i < 0: return # This may be an error! util_name = full_command[0:i] # get utility information utils = self.utils.get_util_matches(util_name) if len(utils) <= 0: return '' # No option found because util does not exist. # if double tab with no option specified, show all options if cmd_len == 0 and self.tab_count == 2: self.utils.show_options(utils[0]) self.cmd_line.display_command() self.tab_count = 0 return find_alias = False if len(command_text) == 0: option_loc = 0 # check for - or -- elif command_text[0:2] == '--': option_loc = 2 elif command_text[0] == '-': option_loc = 1 find_alias = True option = command_text[option_loc:] matches = self.utils.get_option_matches(utils[0], option, find_alias) if self.tab_count == 2: if len(matches) > 0: self.utils.show_options(matches) self.cmd_line.display_command() self.tab_count = 0 return # Do option completion here if len(matches) == 1: if not find_alias: opt_name = matches[0]['name'] # Check for required value if matches[0]['req_value']: opt_name += '=' else: opt_name += ' ' else: # using alias opt_name = matches[0]['alias'] + ' ' # Now, replace the old value on the command line. start = len(command_text) - option_loc self.cmd_line.add(opt_name[start:].strip(' ')) self.tab_count = 0 def show_utility_help(self, utils): """Display help for a utility. utils[in] The utility name. 
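
        Example (illustrative; 'matches' is assumed to be the
        single-element list returned by
        self.utils.get_util_matches('mysqldiskusage')):

            console.show_utility_help(matches)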
""" if self.quiet: return options = self.utils.get_options_dictionary(utils[0]) print("\n{0}\n".format(utils[0]['usage'])) print("{0} - {1}\n".format(utils[0]['name'], utils[0]['description'])) print("Options:") print_dictionary_list(['Option', 'Description'], ['long_name', 'description'], options, self.width, False) print def is_valid_custom_command(self, command_text): # pylint: disable=W0221 """Validate the custom command If the command_text is the name of a utility supported, return True else return False. command_text[in] Command from the user Returns bool - True - valid, False - invalid """ parts = command_text.split(' ') matches = self.utils.get_util_matches(parts[0]) return len(matches) >= 1 def execute_custom_command(self, command, parameters): """Execute the utility This method executes the utility with the parameters specified by the user. All output is displayed and control returns to the console class. command[in] Name of the utility to execute parameters[in] All options and parameters specified by the user """ if not command.lower().startswith('mysql'): command = 'mysql' + command # look in to the collected utilities for util_info in self.utils.util_list: if util_info["name"] == command: # Get the command used to obtain the help from the utility cmd = list(util_info["cmd"]) cmd.extend(parameters) # Add quotes for Windows if (os.name == "nt"): # If there is a space in the command, quote it! if (" " in cmd[0]): cmd[0] = '"{0}"'.format(cmd[0]) # if cmd is freeze code utility, subprocess just need the # executable part not absolute path using shell=False or # Windows will complain about the path. The base path of # mysqluc is used as location dir base of the subprocess. if '.exe' in cmd[0]: _, ut_cmd = os.path.split(cmd[0]) cmd[0] = ut_cmd.replace('"', '') # If the second part has .py in it and spaces, quote it! if len(cmd) > 1 and (" " in cmd[1]) and ('.py' in cmd[0]): cmd[1] = '"{0}"'.format(cmd[1]) if self.quiet: proc = subprocess.Popen(cmd, shell=False, stdout=self.f_out, stderr=self.f_out) else: proc = subprocess.Popen(cmd, shell=False, stderr=subprocess.PIPE) print # check the output for errors _, stderr_temp = proc.communicate() return_code = proc.returncode err_msg = ("\nThe console has detected that the utility '{0}' " "ended with an error code.\nYou can get more " "information about the error by running the console" " command 'show last error'.").format(command) if not self.quiet and return_code and stderr_temp: print(err_msg) if parameters: msg = ("\nExecution of utility: '{0} {1}' ended with " "return code '{2}' and with the following " "error message:\n" "{3}").format(command, ' '.join(parameters), return_code, stderr_temp) else: msg = ("\nExecution of utility: '{0}' ended with " "return code '{1}' and with the following " "error message:\n{2}").format(command, return_code, stderr_temp) self.errors.append(msg) elif not self.quiet and return_code: if parameters: msg = ("\nExecution of utility: '{0} {1}' ended with " "return code '{2}' but no error message was " "streamed to the standard error, please review " "the output from its execution." "").format(command, ' '.join(parameters), return_code) else: msg = ("\nExecution of utility: '{0}' ended with " "return code '{1}' but no error message was " "streamed to the standard error, please review " "the output from its execution." "").format(command, return_code) print(msg) return # if got here, is because the utility was not found. 
raise UtilError("The utility {0} is not accessible (from the path: " "{1}).".format(command, self.path)) def show_custom_options(self): """Show all of the options for the mysqluc utility. This method reads all of the options specified when mysqluc was launched and displays them to the user. If none were specified, a message is displayed instead. """ if self.quiet: return if len(self.options) == 0: print("\n\nNo options specified.\n") return # Build a new list that normalizes the options as a dictionary dictionary_list = [] for key in self.options.keys(): # Skip variables list and messages if key not in ['variables', 'welcome', 'goodbye']: value = self.options.get(key, '') item = { 'name': key, 'value': value } dictionary_list.append(item) print print print_dictionary_list(['Option', 'Value'], ['name', 'value'], dictionary_list, self.width) print mysql-utilities-1.6.4/mysql/utilities/command/audit_log.py0000644001577100752670000003272212747670311023517 0ustar pb2usercommon# # Copyright (c) 2012, 2015 Oracle and/or its affiliates. All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This file contains features to examine an audit log file, including searching and displaying the results. 
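
Example (illustrative sketch; the option names mirror those used by the
AuditLog class defined below, and the log path is an assumed value):

    options = {'log_name': '/var/lib/mysql-audit/audit.log',
               'format': 'GRID', 'verbosity': 0}
    log = AuditLog(options)
    log.open_log()
    log.parse_log()
    log.output_formatted_log()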
""" import sys from shutil import copy from mysql.utilities.exception import UtilError from mysql.utilities.common.audit_log_parser import AuditLogParser from mysql.utilities.common.format import convert_dictionary_list, print_list from mysql.utilities.common.server import Server from mysql.utilities.common.tools import show_file_statistics, remote_copy _PRINT_WIDTH = 75 _VALID_COMMAND_OPTIONS = { 'policies': ("ALL", "NONE", "LOGINS", "QUERIES", "DEFAULT"), 'sizes': (0, 4294967295) } _COMMANDS_WITH_OPTIONS = ['POLICY', 'ROTATE_ON_SIZE'] _COMMANDS_WITH_SERVER_OPT = ['POLICY', 'ROTATE_ON_SIZE', 'ROTATE'] VALID_COMMANDS_TEXT = """ Available Commands: copy - copy the audit log to a locally accessible path policy - set the audit log policy Values = {policies} rotate - perform audit log rotation rotate_on_size - set the rotate log size limit for auto rotation Values = {sizes} """.format(policies=', '.join(_VALID_COMMAND_OPTIONS['policies']), sizes=', '.join([str(v) for v in _VALID_COMMAND_OPTIONS['sizes']])) VALID_COMMANDS = ["COPY", "POLICY", "ROTATE", "ROTATE_ON_SIZE"] EVENT_TYPES = ["Audit", "Binlog Dump", "Change user", "Close stmt", "Connect Out", "Connect", "Create DB", "Daemon", "Debug", "Delayed insert", "Drop DB", "Execute", "Fetch", "Field List", "Init DB", "Kill", "Long Data", "NoAudit", "Ping", "Prepare", "Processlist", "Query", "Quit", "Refresh", "Register Slave", "Reset stmt", "Set option", "Shutdown", "Sleep", "Statistics", "Table Dump", "Time"] QUERY_TYPES = ["CREATE", "ALTER", "DROP", "TRUNCATE", "RENAME", "GRANT", "REVOKE", "SELECT", "INSERT", "UPDATE", "DELETE", "COMMIT", "SHOW", "SET", "CALL", "PREPARE", "EXECUTE", "DEALLOCATE"] def command_requires_log_name(command): """Check if the specified command requires the --audit-log-name option. command[in] command to be checked """ return command == "COPY" def command_requires_server(command): """Check if the specified command requires the --server option. command[in] command to be checked. """ return command in _COMMANDS_WITH_SERVER_OPT def command_requires_value(command): """Check the specified command requires an option (i.e. --value). command[in] command to be checked. """ return command in _COMMANDS_WITH_OPTIONS def check_command_value(command, value): """Check if the value is valid for the given command. command[in] command to which the value is concerned. value[in] value to check for the given command. """ if command in _COMMANDS_WITH_OPTIONS: # do range values if command == "ROTATE_ON_SIZE": values = _VALID_COMMAND_OPTIONS['sizes'] try: int_value = int(value) except ValueError: print "Invalid integer value: %s" % value return False if int_value < values[0] or int_value > values[1]: print "The %s command requires values in the range (%s, %s)." \ % (command, values[0], values[1]) return False elif value.upper() not in _VALID_COMMAND_OPTIONS['policies']: print "The %s command requires one of the following " % command + \ "values: %s." % ', '.join(_VALID_COMMAND_OPTIONS['policies']) return False return True class AuditLog(object): """ Class to manage and parse the audit log. The AuditLog class is used to manage and retrieve information of the audit log. It allows the execution of commands to change audit log settings, display control variables, copy and parse audit log files. """ def __init__(self, options): """Constructor options[in] dictionary of options to include width, verbosity, pedantic, quiet """ self.options = options self.log = None def open_log(self): """ Create an AuditLogParser and open the audit file. 
""" self.log = AuditLogParser(self.options) self.log.open_log() def close_log(self): """Close the previously opened audit log file. """ self.log.close_log() def parse_log(self): """ Parse the audit log file (previously opened), applying search/filtering criterion. """ self.log.parse_log() def output_formatted_log(self): """Output the parsed log entries according to the specified format. Print the entries resulting from the parsing process to the standard output in the specified format. If no entries are found (i.e., none match the defined search criterion) a notification message is print. """ log_rows = self.log.retrieve_rows() if log_rows: out_format = self.options.get("format", "GRID") if out_format == 'raw': for row in log_rows: sys.stdout.write(row) else: # Convert the results to the appropriate format cols, rows = convert_dictionary_list(log_rows) # Note: No need to sort rows, retrieved with the same order # as read (i.e., sorted by timestamp) print_list(sys.stdout, out_format, cols, rows) else: # Print message notifying that no entry was found no_entry_msg = "#\n# No entry found!\n#" print no_entry_msg def check_audit_log(self): """Verify if the audit log plugin is installed on the server. Return the message error if not, or None. """ error = None server = Server({'conn_info': self.options.get("server_vals", None)}) server.connect() # Check to see if the plug-in is installed if not server.supports_plugin("audit"): error = "The audit log plug-in is not installed on this " + \ "server or is not enabled." server.disconnect() return error def show_statistics(self): """Display statistical information about audit log including: - size, date, etc. - Audit log entries """ out_format = self.options.get("format", "GRID") log_name = self.options.get("log_name", None) # Print file statistics: print "#\n# Audit Log File Statistics:\n#" show_file_statistics(log_name, False, out_format) # Print audit log 'AUDIT' entries print "\n#\n# Audit Log Startup Entries:\n#\n" cols, rows = convert_dictionary_list(self.log.header_rows) # Note: No need to sort rows, retrieved with the same order # as read (i.e., sorted by timestamp) print_list(sys.stdout, out_format, cols, rows) def show_options(self): """ Show all audit log variables. """ server = Server({'conn_info': self.options.get("server_vals", None)}) server.connect() rows = server.show_server_variable("audit%") server.disconnect() if rows: print "#\n# Audit Log Variables and Options\n#" print_list(sys.stdout, "GRID", ['Variable_name', 'Value'], rows) print else: raise UtilError("No audit log variables found.") def _copy_log(self): """ Copy the audit log to a local destionation or from a remote server. """ # Need to see if this is a local copy or not. rlogin = self.options.get("rlogin", None) log_name = self.options.get("log_name", None) copy_location = self.options.get("copy_location", None) if not rlogin: copy(log_name, copy_location) else: user, host = rlogin.split(":", 1) remote_copy(log_name, user, host, copy_location, self.options.get("verbosity", 0)) @staticmethod def _rotate_log(server): """Rotate the log. To rotate the log, first discover the value of rotate_on_size then set rotate_on_size to the minimum allowed value (i.e. 4096) and force rotation with a manual flush. Note: the rotation will only effectively occur if the audit log file size is greater than 4096. 
""" # Get the current rotation size rotate_size = server.show_server_variable( "audit_log_rotate_on_size" )[0][1] min_rotation_size = 4096 # If needed, set rotation size to the minimum allowed value. if int(rotate_size) != min_rotation_size: server.exec_query( "SET @@GLOBAL.audit_log_rotate_on_size = " "{0}".format(min_rotation_size) ) # If needed, restore the rotation size to what it was initially. if int(rotate_size) != min_rotation_size: server.exec_query( "SET @@GLOBAL.audit_log_rotate_on_size = " "{0}".format(rotate_size) ) @staticmethod def _change_policy(server, policy_value): """ Change the audit log plugin policy. This method changes the audit log policy by setting the appropriate variables according to the MySQL server version. Note: For recent MySQL server versions (i.e. >= 5.6.20, and >= 5.7.5) the audit_log_policy is readonly and cannot be changed at runtime (only when starting the server). For those versions, the policy results from the combination of the values set for the 'audit_log_connection_policy' and 'audit_log_statement_policy' variables (not available in previous versions). server[in] Instance of the server. policy_value[in] Policy value to set, supported values: 'ALL', 'NONE', 'LOGINS', 'QUERIES', 'DEFAULT'. """ # Check server version to set appropriate variables. if ((server.check_version_compat(5, 6, 20) and # >= 5.6.20 and < 5.7 not server.check_version_compat(5, 7, 0)) or server.check_version_compat(5, 7, 5)): # >= 5.7.5 # Set the audit_log_connection_policy and # audit_log_statement_policy to yield the chosen policy. policy = policy_value.upper() set_connection_policy = ( "SET @@GLOBAL.audit_log_connection_policy = {0}" ) set_statement_policy = ( "SET @@GLOBAL.audit_log_statement_policy = {0}" ) if policy == 'QUERIES': server.exec_query(set_connection_policy.format('NONE')) server.exec_query(set_statement_policy.format('ALL')) elif policy == 'LOGINS': server.exec_query(set_connection_policy.format('ALL')) server.exec_query(set_statement_policy.format('NONE')) else: server.exec_query(set_connection_policy.format(policy_value)) server.exec_query(set_statement_policy.format(policy_value)) else: # Set the audit_log_policy for older server versions. server.exec_query("SET @@GLOBAL.audit_log_policy = " "{0}".format(policy_value)) def do_command(self): """ Check and execute the audit log command (previously set by the the options of the object constructor). """ # Check for valid command command = self.options.get("command", None) if command not in VALID_COMMANDS: raise UtilError("Invalid command.") command_value = self.options.get("value", None) # Check for valid value if needed if (command_requires_value(command) and not check_command_value(command, command_value)): raise UtilError("Please provide the correct value for the %s " "command." % command) # Copy command does not need the server if command == "COPY": self._copy_log() return True # Connect to server server = Server({'conn_info': self.options.get("server_vals", None)}) server.connect() # Now execute the command print "#\n# Executing %s command.\n#\n" % command try: if command == "POLICY": self._change_policy(server, command_value) elif command == "ROTATE": self._rotate_log(server) else: # "ROTATE_ON_SIZE": server.exec_query("SET @@GLOBAL.audit_log_rotate_on_size = %s" % command_value) finally: server.disconnect() return True mysql-utilities-1.6.4/mysql/utilities/command/diff.py0000644001577100752670000001606512747670311022462 0ustar pb2usercommon# # Copyright (c) 2011, 2016, Oracle and/or its affiliates. 
All rights reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This file contains the diff commands for finding the difference among the definitions of two databases. """ import re from mysql.utilities.exception import UtilDBError from mysql.utilities.common.pattern_matching import parse_object_name from mysql.utilities.common.database import Database from mysql.utilities.common.dbcompare import (diff_objects, get_common_objects, server_connect) from mysql.utilities.common.sql_transform import (is_quoted_with_backticks, quote_with_backticks) def object_diff(server1_val, server2_val, object1, object2, options, object_type=None): """diff the definition of two objects Find the difference among two object definitions. server1_val[in] a dictionary containing connection information for the first server including: (user, password, host, port, socket) server2_val[in] a dictionary containing connection information for the second server including: (user, password, host, port, socket) object1[in] the first object in the compare in the form: (db.name) object2[in] the second object in the compare in the form: (db.name) options[in] a dictionary containing the options for the operation: (quiet, verbosity, difftype) object_type[in] type of the objects to be compared (e.g., TABLE, PROCEDURE, etc.). By default None (not defined). Returns None = objects are the same, diff[] = tables differ """ server1, server2 = server_connect(server1_val, server2_val, object1, object2, options) # Get the object type if unknown considering that objects of different # types can be found with the same name. if not object_type: # Get object types of object1 sql_mode = server1.select_variable("SQL_MODE") db_name, obj_name = parse_object_name(object1, sql_mode) db = Database(server1, db_name, options) obj1_types = db.get_object_type(obj_name) if not obj1_types: raise UtilDBError("The object {0} does not exist.".format(object1)) # Get object types of object2 sql_mode = server2.select_variable("SQL_MODE") db_name, obj_name = parse_object_name(object2, sql_mode) db = Database(server2, db_name, options) obj2_types = db.get_object_type(obj_name) if not obj2_types: raise UtilDBError("The object {0} does not exist.".format(object2)) # Merge types found for both objects obj_types = set(obj1_types + obj2_types) # Diff objects considering all types found result = [] for obj_type in obj_types: res = diff_objects(server1, server2, object1, object2, options, obj_type) if res: result.append(res) return result if len(result) > 0 else None else: # Diff objects of known type return diff_objects(server1, server2, object1, object2, options, object_type) def database_diff(server1_val, server2_val, db1, db2, options): """Find differences among objects from two databases. This method compares the object definitions among two databases. If any differences are found, the differences are printed in the format chosen and the method returns False. 
    A True result is returned only when all object definitions match.

    The method will stop and return False on the first difference found
    unless the option force is set to True (default = False).

    server1_val[in]    a dictionary containing connection information for the
                       first server including:
                       (user, password, host, port, socket)
    server2_val[in]    a dictionary containing connection information for the
                       second server including:
                       (user, password, host, port, socket)
    db1[in]            the first database in the compare
    db2[in]            the second database in the compare
    options[in]        a dictionary containing the options for the operation:
                       (quiet, verbosity, difftype, force)

    Returns bool - True if all objects match, False if partial match
    """
    force = options.get("force", False)
    server1, server2 = server_connect(server1_val, server2_val,
                                      db1, db2, options)
    in_both, in_db1, in_db2 = get_common_objects(server1, server2,
                                                 db1, db2, True, options)
    in_both.sort()
    if (len(in_db1) > 0 or len(in_db2) > 0) and not force:
        return False

    # Get sql_mode value set on servers
    server1_sql_mode = server1.select_variable("SQL_MODE")
    server2_sql_mode = server2.select_variable("SQL_MODE")

    # Quote database names with backticks.
    q_db1 = db1 if is_quoted_with_backticks(db1, server1_sql_mode) \
        else quote_with_backticks(db1, server1_sql_mode)
    q_db2 = db2 if is_quoted_with_backticks(db2, server2_sql_mode) \
        else quote_with_backticks(db2, server2_sql_mode)

    # Assume success until a difference is found.
    success = True

    # Do the diff for the databases themselves
    result = object_diff(server1, server2, q_db1, q_db2, options, 'DATABASE')
    if result is not None:
        success = False
        if not force:
            return False

    # For each object that matches, diff the definitions
    for item in in_both:
        # Quote object name with backticks using sql_mode from server1
        q_obj_name1 = item[1][0] if \
            is_quoted_with_backticks(item[1][0], server1_sql_mode) \
            else quote_with_backticks(item[1][0], server1_sql_mode)
        # Quote object name with backticks using sql_mode from server2
        q_obj_name2 = item[1][0] if \
            is_quoted_with_backticks(item[1][0], server2_sql_mode) \
            else quote_with_backticks(item[1][0], server2_sql_mode)
        object1 = "{0}.{1}".format(q_db1, q_obj_name1)
        object2 = "{0}.{1}".format(q_db2, q_obj_name2)
        result = object_diff(server1, server2, object1, object2, options,
                             item[0])
        if result is not None:
            success = False
            if not force:
                return False

    return success
mysql-utilities-1.6.4/mysql/utilities/command/show_rpl.py0000644001577100752670000000553412747670311023406 0ustar pb2usercommon#
# Copyright (c) 2010, 2013, Oracle and/or its affiliates. All rights reserved.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; version 2 of the License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
#
"""
This file contains the show replication topology functionality.
"""

import sys

from mysql.utilities.common.topology_map import TopologyMap


def show_topology(master_vals, options=None):
    """Show the slaves/topology map for a master.

    This method finds the slaves attached to a server if it is a master.
It can also discover the replication topology if the recurse option is True (default = False). It prints a tabular list of the master(s) and slaves found. If the show_list option is True, it will also print a list of the output (default = False). master_vals[in] Master connection in form user:passwd@host:port:socket or login-path:port:socket. options[in] dictionary of options recurse If True, check each slave found for additional slaves Default = False prompt_user If True, prompt user if slave connection fails with master connection parameters Default = False num_retries Number of times to retry a failed connection attempt Default = 0 quiet if True, print only the data Default = False format Format of list Default = Grid width width of report Default = 75 max_depth maximum depth of recursive search Default = None """ if options is None: options = {} topo = TopologyMap(master_vals, options) topo.generate_topology_map(options.get('max_depth', None)) if not options.get("quiet", False) and topo.depth(): print "\n# Replication Topology Graph" if not topo.slaves_found(): print "No slaves found." topo.print_graph() print if options.get("show_list", False): from mysql.utilities.common.format import print_list # make a list from the topology topology_list = topo.get_topology_map() print_list(sys.stdout, options.get("format", "GRID"), ["Master", "Slave"], topology_list, False, True) mysql-utilities-1.6.4/mysql/utilities/exception.py0000644001577100752670000000757412747670311022137 0ustar pb2usercommon# # Copyright (c) 2010, 2014, Oracle and/or its affiliates. All rights # reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; version 2 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # """ This file contains the exceptions used by MySQL Utilities and their libraries. """ class Error(Exception): """Error """ pass class UtilError(Exception): """General errors raised by command modules to user scripts. This exception class is used to report errors from MySQL utilities command modules and are used to communicate known errors to the user. """ def __init__(self, message, errno=0): super(UtilError, self).__init__() self.args = (message, errno) self.errmsg = message self.errno = errno class UtilDBError(UtilError): """Database errors raised when the mysql database server operation fails. """ def __init__(self, message, errno=0, db=None): UtilError.__init__(self, message, errno) self.db = db class UtilRplError(UtilError): """Replication errors raised during replication operations. """ def __init__(self, message, errno=0, master=None, slave=None): UtilError.__init__(self, message, errno) self.master = master self.slave = slave class UtilRplWarn(UtilError): """Replication warnings raised during replication operations. """ def __init__(self, message, errno=0, master=None, slave=None): UtilError.__init__(self, message, errno) self.master = master self.slave = slave class UtilBinlogError(UtilError): """Errors raised during binary log operations. 
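
    Example (illustrative, assumed values):

        raise UtilBinlogError("Malformed binlog event", errno=1,
                              filename="binlog.000001", pos=4)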
""" def __init__(self, message, errno=0, filename=None, pos=0): UtilError.__init__(self.message, errno) self.file = filename self.pos = pos class UtilTestError(UtilError): """Errors during test execution of command or common module tests. This exception is used to raise and error and supply a return value for recording the test result. """ def __init__(self, message, errno=0, result=None): UtilError.__init__(self, message, errno) self.result = result class UtilDaemonError(UtilError): """POSIX daemon error. """ pass class FormatError(Error): """An entity was supplied in the wrong format.""" pass class EmptyResultError(Error): """An entity was supplied in the wrong format.""" pass class MUTLibError(Exception): """MUT errors This exception class is used to report errors from the testing subsystem. """ def __init__(self, message, options=None): super(MUTLibError, self).__init__() self.args = (message, options) self.errmsg = message self.options = options class LogParserError(UtilError): """LogParserError """ def __init__(self, message=''): super(LogParserError, self).__init__(message) class ConnectionValuesError(Exception): """Specific error raised by Server when values are not valid. This exception class is used to report errors when Cannot determine connection information type supplied by MySQL utilities """ def __init__(self, message, errno=0): super(ConnectionValuesError, self).__init__() self.args = (message, errno) self.errmsg = message self.errno = errno def __str__(self): return self.errmsg mysql-utilities-1.6.4/mysql/__init__.py0000644001577100752670000000000012747670311017636 0ustar pb2usercommonmysql-utilities-1.6.4/mysql/connector/0000755001577100752670000000000012747674052017537 5ustar pb2usercommonmysql-utilities-1.6.4/mysql/connector/custom_types.py0000644001577100752670000000320112717544565022645 0ustar pb2usercommon# MySQL Connector/Python - MySQL driver written in Python. # Copyright (c) 2014, Oracle and/or its affiliates. All rights reserved. # MySQL Connector/Python is licensed under the terms of the GPLv2 # , like most # MySQL Connectors. There are special exceptions to the terms and # conditions of the GPLv2 as it is applied to this software, see the # FOSS License Exception # . # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA """Custom Python types used by MySQL Connector/Python""" import sys class HexLiteral(str): """Class holding MySQL hex literals""" def __new__(cls, str_, charset='utf8'): if sys.version_info[0] == 2: hexed = ["%02x" % ord(i) for i in str_.encode(charset)] else: hexed = ["%02x" % i for i in str_.encode(charset)] obj = str.__new__(cls, ''.join(hexed)) obj.charset = charset obj.original = str_ return obj def __str__(self): return '0x' + self mysql-utilities-1.6.4/mysql/connector/__init__.py0000644001577100752670000001611612717544565021657 0ustar pb2usercommon# MySQL Connector/Python - MySQL driver written in Python. # Copyright (c) 2009, 2014, Oracle and/or its affiliates. 
All rights reserved. # MySQL Connector/Python is licensed under the terms of the GPLv2 # , like most # MySQL Connectors. There are special exceptions to the terms and # conditions of the GPLv2 as it is applied to this software, see the # FOSS License Exception # . # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA """ MySQL Connector/Python - MySQL driver written in Python """ try: import _mysql_connector # pylint: disable=F0401 from .connection_cext import CMySQLConnection except ImportError: HAVE_CEXT = False else: HAVE_CEXT = True from . import version from .connection import MySQLConnection from .errors import ( # pylint: disable=W0622 Error, Warning, InterfaceError, DatabaseError, NotSupportedError, DataError, IntegrityError, ProgrammingError, OperationalError, InternalError, custom_error_exception, PoolError) from .constants import FieldFlag, FieldType, CharacterSet, \ RefreshOption, ClientFlag from .dbapi import ( Date, Time, Timestamp, Binary, DateFromTicks, DateFromTicks, TimestampFromTicks, TimeFromTicks, STRING, BINARY, NUMBER, DATETIME, ROWID, apilevel, threadsafety, paramstyle) from .optionfiles import read_option_files _CONNECTION_POOLS = {} def _get_pooled_connection(**kwargs): """Return a pooled MySQL connection""" # If no pool name specified, generate one from .pooling import ( MySQLConnectionPool, generate_pool_name, CONNECTION_POOL_LOCK) try: pool_name = kwargs['pool_name'] except KeyError: pool_name = generate_pool_name(**kwargs) # Setup the pool, ensuring only 1 thread can update at a time with CONNECTION_POOL_LOCK: if pool_name not in _CONNECTION_POOLS: _CONNECTION_POOLS[pool_name] = MySQLConnectionPool(**kwargs) elif isinstance(_CONNECTION_POOLS[pool_name], MySQLConnectionPool): # pool_size must be the same check_size = _CONNECTION_POOLS[pool_name].pool_size if ('pool_size' in kwargs and kwargs['pool_size'] != check_size): raise PoolError("Size can not be changed " "for active pools.") # Return pooled connection try: return _CONNECTION_POOLS[pool_name].get_connection() except AttributeError: raise InterfaceError( "Failed getting connection from pool '{0}'".format(pool_name)) def _get_failover_connection(**kwargs): """Return a MySQL connection and try to failover if needed An InterfaceError is raise when no MySQL is available. ValueError is raised when the failover server configuration contains an illegal connection argument. Supported arguments are user, password, host, port, unix_socket and database. ValueError is also raised when the failover argument was not provided. Returns MySQLConnection instance. 
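
    Example (illustrative; host names are assumed):

        cnx = connect(user='app', password='secret',
                      failover=[{'host': 'db1.example.com', 'port': 3306},
                                {'host': 'db2.example.com', 'port': 3306}])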
""" config = kwargs.copy() try: failover = config['failover'] except KeyError: raise ValueError('failover argument not provided') del config['failover'] support_cnx_args = set( ['user', 'password', 'host', 'port', 'unix_socket', 'database', 'pool_name', 'pool_size']) # First check if we can add all use the configuration for server in failover: diff = set(server.keys()) - support_cnx_args if diff: raise ValueError( "Unsupported connection argument {0} in failover: {1}".format( 's' if len(diff) > 1 else '', ', '.join(diff))) for server in failover: new_config = config.copy() new_config.update(server) try: return connect(**new_config) except Error: # If we failed to connect, we try the next server pass raise InterfaceError("Could not failover: no MySQL server available") def connect(*args, **kwargs): """Create or get a MySQL connection object In its simpliest form, Connect() will open a connection to a MySQL server and return a MySQLConnection object. When any connection pooling arguments are given, for example pool_name or pool_size, a pool is created or a previously one is used to return a PooledMySQLConnection. Returns MySQLConnection or PooledMySQLConnection. """ # Option files if 'option_files' in kwargs: new_config = read_option_files(**kwargs) return connect(**new_config) if all(['fabric' in kwargs, 'failover' in kwargs]): raise InterfaceError("fabric and failover arguments can not be used") if 'fabric' in kwargs: if 'pool_name' in kwargs: raise AttributeError("'pool_name' argument is not supported with " " MySQL Fabric. Use 'pool_size' instead.") from .fabric import connect as fabric_connect return fabric_connect(*args, **kwargs) # Failover if 'failover' in kwargs: return _get_failover_connection(**kwargs) # Pooled connections try: from .constants import CNX_POOL_ARGS if any([key in kwargs for key in CNX_POOL_ARGS]): return _get_pooled_connection(**kwargs) except NameError: # No pooling pass use_pure = kwargs.setdefault('use_pure', True) try: del kwargs['use_pure'] except KeyError: # Just making sure 'use_pure' is not kwargs pass if HAVE_CEXT and not use_pure: return CMySQLConnection(*args, **kwargs) else: return MySQLConnection(*args, **kwargs) Connect = connect # pylint: disable=C0103 __version_info__ = version.VERSION __version__ = version.VERSION_TEXT __all__ = [ 'MySQLConnection', 'Connect', 'custom_error_exception', # Some useful constants 'FieldType', 'FieldFlag', 'ClientFlag', 'CharacterSet', 'RefreshOption', 'HAVE_CEXT', # Error handling 'Error', 'Warning', 'InterfaceError', 'DatabaseError', 'NotSupportedError', 'DataError', 'IntegrityError', 'ProgrammingError', 'OperationalError', 'InternalError', # DBAPI PEP 249 required exports 'connect', 'apilevel', 'threadsafety', 'paramstyle', 'Date', 'Time', 'Timestamp', 'Binary', 'DateFromTicks', 'DateFromTicks', 'TimestampFromTicks', 'TimeFromTicks', 'STRING', 'BINARY', 'NUMBER', 'DATETIME', 'ROWID', # C Extension 'CMySQLConnection', ] mysql-utilities-1.6.4/mysql/connector/charsets.py0000644001577100752670000003004612717544565021732 0ustar pb2usercommon# -*- coding: utf-8 -*- # MySQL Connector/Python - MySQL driver written in Python. # Copyright (c) 2013, 2015, Oracle and/or its affiliates. All rights reserved. # MySQL Connector/Python is licensed under the terms of the GPLv2 # , like most # MySQL Connectors. There are special exceptions to the terms and # conditions of the GPLv2 as it is applied to this software, see the # FOSS License Exception # . 
# # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA # This file was auto-generated. _GENERATED_ON = '2015-08-24' _MYSQL_VERSION = (5, 7, 8) """This module contains the MySQL Server Character Sets""" MYSQL_CHARACTER_SETS = [ # (character set name, collation, default) None, ("big5", "big5_chinese_ci", True), # 1 ("latin2", "latin2_czech_cs", False), # 2 ("dec8", "dec8_swedish_ci", True), # 3 ("cp850", "cp850_general_ci", True), # 4 ("latin1", "latin1_german1_ci", False), # 5 ("hp8", "hp8_english_ci", True), # 6 ("koi8r", "koi8r_general_ci", True), # 7 ("latin1", "latin1_swedish_ci", True), # 8 ("latin2", "latin2_general_ci", True), # 9 ("swe7", "swe7_swedish_ci", True), # 10 ("ascii", "ascii_general_ci", True), # 11 ("ujis", "ujis_japanese_ci", True), # 12 ("sjis", "sjis_japanese_ci", True), # 13 ("cp1251", "cp1251_bulgarian_ci", False), # 14 ("latin1", "latin1_danish_ci", False), # 15 ("hebrew", "hebrew_general_ci", True), # 16 None, ("tis620", "tis620_thai_ci", True), # 18 ("euckr", "euckr_korean_ci", True), # 19 ("latin7", "latin7_estonian_cs", False), # 20 ("latin2", "latin2_hungarian_ci", False), # 21 ("koi8u", "koi8u_general_ci", True), # 22 ("cp1251", "cp1251_ukrainian_ci", False), # 23 ("gb2312", "gb2312_chinese_ci", True), # 24 ("greek", "greek_general_ci", True), # 25 ("cp1250", "cp1250_general_ci", True), # 26 ("latin2", "latin2_croatian_ci", False), # 27 ("gbk", "gbk_chinese_ci", True), # 28 ("cp1257", "cp1257_lithuanian_ci", False), # 29 ("latin5", "latin5_turkish_ci", True), # 30 ("latin1", "latin1_german2_ci", False), # 31 ("armscii8", "armscii8_general_ci", True), # 32 ("utf8", "utf8_general_ci", True), # 33 ("cp1250", "cp1250_czech_cs", False), # 34 ("ucs2", "ucs2_general_ci", True), # 35 ("cp866", "cp866_general_ci", True), # 36 ("keybcs2", "keybcs2_general_ci", True), # 37 ("macce", "macce_general_ci", True), # 38 ("macroman", "macroman_general_ci", True), # 39 ("cp852", "cp852_general_ci", True), # 40 ("latin7", "latin7_general_ci", True), # 41 ("latin7", "latin7_general_cs", False), # 42 ("macce", "macce_bin", False), # 43 ("cp1250", "cp1250_croatian_ci", False), # 44 ("utf8mb4", "utf8mb4_general_ci", True), # 45 ("utf8mb4", "utf8mb4_bin", False), # 46 ("latin1", "latin1_bin", False), # 47 ("latin1", "latin1_general_ci", False), # 48 ("latin1", "latin1_general_cs", False), # 49 ("cp1251", "cp1251_bin", False), # 50 ("cp1251", "cp1251_general_ci", True), # 51 ("cp1251", "cp1251_general_cs", False), # 52 ("macroman", "macroman_bin", False), # 53 ("utf16", "utf16_general_ci", True), # 54 ("utf16", "utf16_bin", False), # 55 ("utf16le", "utf16le_general_ci", True), # 56 ("cp1256", "cp1256_general_ci", True), # 57 ("cp1257", "cp1257_bin", False), # 58 ("cp1257", "cp1257_general_ci", True), # 59 ("utf32", "utf32_general_ci", True), # 60 ("utf32", "utf32_bin", False), # 61 ("utf16le", "utf16le_bin", False), # 62 ("binary", "binary", True), # 63 ("armscii8", "armscii8_bin", False), # 64 ("ascii", "ascii_bin", False), 
# 65 ("cp1250", "cp1250_bin", False), # 66 ("cp1256", "cp1256_bin", False), # 67 ("cp866", "cp866_bin", False), # 68 ("dec8", "dec8_bin", False), # 69 ("greek", "greek_bin", False), # 70 ("hebrew", "hebrew_bin", False), # 71 ("hp8", "hp8_bin", False), # 72 ("keybcs2", "keybcs2_bin", False), # 73 ("koi8r", "koi8r_bin", False), # 74 ("koi8u", "koi8u_bin", False), # 75 None, ("latin2", "latin2_bin", False), # 77 ("latin5", "latin5_bin", False), # 78 ("latin7", "latin7_bin", False), # 79 ("cp850", "cp850_bin", False), # 80 ("cp852", "cp852_bin", False), # 81 ("swe7", "swe7_bin", False), # 82 ("utf8", "utf8_bin", False), # 83 ("big5", "big5_bin", False), # 84 ("euckr", "euckr_bin", False), # 85 ("gb2312", "gb2312_bin", False), # 86 ("gbk", "gbk_bin", False), # 87 ("sjis", "sjis_bin", False), # 88 ("tis620", "tis620_bin", False), # 89 ("ucs2", "ucs2_bin", False), # 90 ("ujis", "ujis_bin", False), # 91 ("geostd8", "geostd8_general_ci", True), # 92 ("geostd8", "geostd8_bin", False), # 93 ("latin1", "latin1_spanish_ci", False), # 94 ("cp932", "cp932_japanese_ci", True), # 95 ("cp932", "cp932_bin", False), # 96 ("eucjpms", "eucjpms_japanese_ci", True), # 97 ("eucjpms", "eucjpms_bin", False), # 98 ("cp1250", "cp1250_polish_ci", False), # 99 None, ("utf16", "utf16_unicode_ci", False), # 101 ("utf16", "utf16_icelandic_ci", False), # 102 ("utf16", "utf16_latvian_ci", False), # 103 ("utf16", "utf16_romanian_ci", False), # 104 ("utf16", "utf16_slovenian_ci", False), # 105 ("utf16", "utf16_polish_ci", False), # 106 ("utf16", "utf16_estonian_ci", False), # 107 ("utf16", "utf16_spanish_ci", False), # 108 ("utf16", "utf16_swedish_ci", False), # 109 ("utf16", "utf16_turkish_ci", False), # 110 ("utf16", "utf16_czech_ci", False), # 111 ("utf16", "utf16_danish_ci", False), # 112 ("utf16", "utf16_lithuanian_ci", False), # 113 ("utf16", "utf16_slovak_ci", False), # 114 ("utf16", "utf16_spanish2_ci", False), # 115 ("utf16", "utf16_roman_ci", False), # 116 ("utf16", "utf16_persian_ci", False), # 117 ("utf16", "utf16_esperanto_ci", False), # 118 ("utf16", "utf16_hungarian_ci", False), # 119 ("utf16", "utf16_sinhala_ci", False), # 120 ("utf16", "utf16_german2_ci", False), # 121 ("utf16", "utf16_croatian_ci", False), # 122 ("utf16", "utf16_unicode_520_ci", False), # 123 ("utf16", "utf16_vietnamese_ci", False), # 124 None, None, None, ("ucs2", "ucs2_unicode_ci", False), # 128 ("ucs2", "ucs2_icelandic_ci", False), # 129 ("ucs2", "ucs2_latvian_ci", False), # 130 ("ucs2", "ucs2_romanian_ci", False), # 131 ("ucs2", "ucs2_slovenian_ci", False), # 132 ("ucs2", "ucs2_polish_ci", False), # 133 ("ucs2", "ucs2_estonian_ci", False), # 134 ("ucs2", "ucs2_spanish_ci", False), # 135 ("ucs2", "ucs2_swedish_ci", False), # 136 ("ucs2", "ucs2_turkish_ci", False), # 137 ("ucs2", "ucs2_czech_ci", False), # 138 ("ucs2", "ucs2_danish_ci", False), # 139 ("ucs2", "ucs2_lithuanian_ci", False), # 140 ("ucs2", "ucs2_slovak_ci", False), # 141 ("ucs2", "ucs2_spanish2_ci", False), # 142 ("ucs2", "ucs2_roman_ci", False), # 143 ("ucs2", "ucs2_persian_ci", False), # 144 ("ucs2", "ucs2_esperanto_ci", False), # 145 ("ucs2", "ucs2_hungarian_ci", False), # 146 ("ucs2", "ucs2_sinhala_ci", False), # 147 ("ucs2", "ucs2_german2_ci", False), # 148 ("ucs2", "ucs2_croatian_ci", False), # 149 ("ucs2", "ucs2_unicode_520_ci", False), # 150 ("ucs2", "ucs2_vietnamese_ci", False), # 151 None, None, None, None, None, None, None, ("ucs2", "ucs2_general_mysql500_ci", False), # 159 ("utf32", "utf32_unicode_ci", False), # 160 ("utf32", "utf32_icelandic_ci", False), # 161 
("utf32", "utf32_latvian_ci", False), # 162 ("utf32", "utf32_romanian_ci", False), # 163 ("utf32", "utf32_slovenian_ci", False), # 164 ("utf32", "utf32_polish_ci", False), # 165 ("utf32", "utf32_estonian_ci", False), # 166 ("utf32", "utf32_spanish_ci", False), # 167 ("utf32", "utf32_swedish_ci", False), # 168 ("utf32", "utf32_turkish_ci", False), # 169 ("utf32", "utf32_czech_ci", False), # 170 ("utf32", "utf32_danish_ci", False), # 171 ("utf32", "utf32_lithuanian_ci", False), # 172 ("utf32", "utf32_slovak_ci", False), # 173 ("utf32", "utf32_spanish2_ci", False), # 174 ("utf32", "utf32_roman_ci", False), # 175 ("utf32", "utf32_persian_ci", False), # 176 ("utf32", "utf32_esperanto_ci", False), # 177 ("utf32", "utf32_hungarian_ci", False), # 178 ("utf32", "utf32_sinhala_ci", False), # 179 ("utf32", "utf32_german2_ci", False), # 180 ("utf32", "utf32_croatian_ci", False), # 181 ("utf32", "utf32_unicode_520_ci", False), # 182 ("utf32", "utf32_vietnamese_ci", False), # 183 None, None, None, None, None, None, None, None, ("utf8", "utf8_unicode_ci", False), # 192 ("utf8", "utf8_icelandic_ci", False), # 193 ("utf8", "utf8_latvian_ci", False), # 194 ("utf8", "utf8_romanian_ci", False), # 195 ("utf8", "utf8_slovenian_ci", False), # 196 ("utf8", "utf8_polish_ci", False), # 197 ("utf8", "utf8_estonian_ci", False), # 198 ("utf8", "utf8_spanish_ci", False), # 199 ("utf8", "utf8_swedish_ci", False), # 200 ("utf8", "utf8_turkish_ci", False), # 201 ("utf8", "utf8_czech_ci", False), # 202 ("utf8", "utf8_danish_ci", False), # 203 ("utf8", "utf8_lithuanian_ci", False), # 204 ("utf8", "utf8_slovak_ci", False), # 205 ("utf8", "utf8_spanish2_ci", False), # 206 ("utf8", "utf8_roman_ci", False), # 207 ("utf8", "utf8_persian_ci", False), # 208 ("utf8", "utf8_esperanto_ci", False), # 209 ("utf8", "utf8_hungarian_ci", False), # 210 ("utf8", "utf8_sinhala_ci", False), # 211 ("utf8", "utf8_german2_ci", False), # 212 ("utf8", "utf8_croatian_ci", False), # 213 ("utf8", "utf8_unicode_520_ci", False), # 214 ("utf8", "utf8_vietnamese_ci", False), # 215 None, None, None, None, None, None, None, ("utf8", "utf8_general_mysql500_ci", False), # 223 ("utf8mb4", "utf8mb4_unicode_ci", False), # 224 ("utf8mb4", "utf8mb4_icelandic_ci", False), # 225 ("utf8mb4", "utf8mb4_latvian_ci", False), # 226 ("utf8mb4", "utf8mb4_romanian_ci", False), # 227 ("utf8mb4", "utf8mb4_slovenian_ci", False), # 228 ("utf8mb4", "utf8mb4_polish_ci", False), # 229 ("utf8mb4", "utf8mb4_estonian_ci", False), # 230 ("utf8mb4", "utf8mb4_spanish_ci", False), # 231 ("utf8mb4", "utf8mb4_swedish_ci", False), # 232 ("utf8mb4", "utf8mb4_turkish_ci", False), # 233 ("utf8mb4", "utf8mb4_czech_ci", False), # 234 ("utf8mb4", "utf8mb4_danish_ci", False), # 235 ("utf8mb4", "utf8mb4_lithuanian_ci", False), # 236 ("utf8mb4", "utf8mb4_slovak_ci", False), # 237 ("utf8mb4", "utf8mb4_spanish2_ci", False), # 238 ("utf8mb4", "utf8mb4_roman_ci", False), # 239 ("utf8mb4", "utf8mb4_persian_ci", False), # 240 ("utf8mb4", "utf8mb4_esperanto_ci", False), # 241 ("utf8mb4", "utf8mb4_hungarian_ci", False), # 242 ("utf8mb4", "utf8mb4_sinhala_ci", False), # 243 ("utf8mb4", "utf8mb4_german2_ci", False), # 244 ("utf8mb4", "utf8mb4_croatian_ci", False), # 245 ("utf8mb4", "utf8mb4_unicode_520_ci", False), # 246 ("utf8mb4", "utf8mb4_vietnamese_ci", False), # 247 ("gb18030", "gb18030_chinese_ci", True), # 248 ("gb18030", "gb18030_bin", False), # 249 ("gb18030", "gb18030_unicode_520_ci", False), # 250 ] mysql-utilities-1.6.4/mysql/connector/errors.py0000644001577100752670000002376312717544565021442 
0ustar pb2usercommon# MySQL Connector/Python - MySQL driver written in Python. # Copyright (c) 2009, 2015, Oracle and/or its affiliates. All rights reserved. # MySQL Connector/Python is licensed under the terms of the GPLv2 # , like most # MySQL Connectors. There are special exceptions to the terms and # conditions of the GPLv2 as it is applied to this software, see the # FOSS License Exception # . # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA """Python exceptions """ from . import utils from .locales import get_client_error from .catch23 import PY2 # _CUSTOM_ERROR_EXCEPTIONS holds custom exceptions and is ued by the # function custom_error_exception. _ERROR_EXCEPTIONS (at bottom of module) # is similar, but hardcoded exceptions. _CUSTOM_ERROR_EXCEPTIONS = {} def custom_error_exception(error=None, exception=None): """Define custom exceptions for MySQL server errors This function defines custom exceptions for MySQL server errors and returns the current set customizations. If error is a MySQL Server error number, then you have to pass also the exception class. The error argument can also be a dictionary in which case the key is the server error number, and value the exception to be raised. If none of the arguments are given, then custom_error_exception() will simply return the current set customizations. To reset the customizations, simply supply an empty dictionary. Examples: import mysql.connector from mysql.connector import errorcode # Server error 1028 should raise a DatabaseError mysql.connector.custom_error_exception( 1028, mysql.connector.DatabaseError) # Or using a dictionary: mysql.connector.custom_error_exception({ 1028: mysql.connector.DatabaseError, 1029: mysql.connector.OperationalError, }) # Reset mysql.connector.custom_error_exception({}) Returns a dictionary. """ global _CUSTOM_ERROR_EXCEPTIONS # pylint: disable=W0603 if isinstance(error, dict) and not len(error): _CUSTOM_ERROR_EXCEPTIONS = {} return _CUSTOM_ERROR_EXCEPTIONS if not error and not exception: return _CUSTOM_ERROR_EXCEPTIONS if not isinstance(error, (int, dict)): raise ValueError( "The error argument should be either an integer or dictionary") if isinstance(error, int): error = {error: exception} for errno, exception in error.items(): if not isinstance(errno, int): raise ValueError("error number should be an integer") try: if not issubclass(exception, Exception): raise TypeError except TypeError: raise ValueError("exception should be subclass of Exception") _CUSTOM_ERROR_EXCEPTIONS[errno] = exception return _CUSTOM_ERROR_EXCEPTIONS def get_mysql_exception(errno, msg=None, sqlstate=None): """Get the exception matching the MySQL error This function will return an exception based on the SQLState. The given message will be passed on in the returned exception. The exception returned can be customized using the mysql.connector.custom_error_exception() function. 
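
    Example (illustrative; error 1146 carries SQLState '42S02', whose
    class '42' maps to ProgrammingError in _SQLSTATE_CLASS_EXCEPTION):

        exc = get_mysql_exception(
            1146, msg="Table 'test.t1' doesn't exist", sqlstate='42S02')
        # isinstance(exc, ProgrammingError) --> True
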
Returns an Exception """ try: return _CUSTOM_ERROR_EXCEPTIONS[errno]( msg=msg, errno=errno, sqlstate=sqlstate) except KeyError: # Error was not mapped to particular exception pass try: return _ERROR_EXCEPTIONS[errno]( msg=msg, errno=errno, sqlstate=sqlstate) except KeyError: # Error was not mapped to particular exception pass if not sqlstate: return DatabaseError(msg=msg, errno=errno) try: return _SQLSTATE_CLASS_EXCEPTION[sqlstate[0:2]]( msg=msg, errno=errno, sqlstate=sqlstate) except KeyError: # Return default InterfaceError return DatabaseError(msg=msg, errno=errno, sqlstate=sqlstate) def get_exception(packet): """Returns an exception object based on the MySQL error Returns an exception object based on the MySQL error in the given packet. Returns an Error-Object. """ errno = errmsg = None try: if packet[4] != 255: raise ValueError("Packet is not an error packet") except IndexError as err: return InterfaceError("Failed getting Error information (%r)" % err) sqlstate = None try: packet = packet[5:] (packet, errno) = utils.read_int(packet, 2) if packet[0] != 35: # Error without SQLState if isinstance(packet, (bytes, bytearray)): errmsg = packet.decode('utf8') else: errmsg = packet else: (packet, sqlstate) = utils.read_bytes(packet[1:], 5) sqlstate = sqlstate.decode('utf8') errmsg = packet.decode('utf8') except Exception as err: # pylint: disable=W0703 return InterfaceError("Failed getting Error information (%r)" % err) else: return get_mysql_exception(errno, errmsg, sqlstate) class Error(Exception): """Exception that is base class for all other error exceptions""" def __init__(self, msg=None, errno=None, values=None, sqlstate=None): super(Error, self).__init__() self.msg = msg self._full_msg = self.msg self.errno = errno or -1 self.sqlstate = sqlstate if not self.msg and (2000 <= self.errno < 3000): self.msg = get_client_error(self.errno) if values is not None: try: self.msg = self.msg % values except TypeError as err: self.msg = "{0} (Warning: {1})".format(self.msg, str(err)) elif not self.msg: self._full_msg = self.msg = 'Unknown error' if self.msg and self.errno != -1: fields = { 'errno': self.errno, 'msg': self.msg.encode('utf8') if PY2 else self.msg } if self.sqlstate: fmt = '{errno} ({state}): {msg}' fields['state'] = self.sqlstate else: fmt = '{errno}: {msg}' self._full_msg = fmt.format(**fields) self.args = (self.errno, self._full_msg, self.sqlstate) def __str__(self): return self._full_msg class Warning(Exception): # pylint: disable=W0622 """Exception for important warnings""" pass class InterfaceError(Error): """Exception for errors related to the interface""" pass class DatabaseError(Error): """Exception for errors related to the database""" pass class InternalError(DatabaseError): """Exception for errors internal database errors""" pass class OperationalError(DatabaseError): """Exception for errors related to the database's operation""" pass class ProgrammingError(DatabaseError): """Exception for errors programming errors""" pass class IntegrityError(DatabaseError): """Exception for errors regarding relational integrity""" pass class DataError(DatabaseError): """Exception for errors reporting problems with processed data""" pass class NotSupportedError(DatabaseError): """Exception for errors when an unsupported database feature was used""" pass class PoolError(Error): """Exception for errors relating to connection pooling""" pass class MySQLFabricError(Error): """Exception for errors relating to MySQL Fabric""" _SQLSTATE_CLASS_EXCEPTION = { '02': DataError, # no data '07': 
DatabaseError, # dynamic SQL error '08': OperationalError, # connection exception '0A': NotSupportedError, # feature not supported '21': DataError, # cardinality violation '22': DataError, # data exception '23': IntegrityError, # integrity constraint violation '24': ProgrammingError, # invalid cursor state '25': ProgrammingError, # invalid transaction state '26': ProgrammingError, # invalid SQL statement name '27': ProgrammingError, # triggered data change violation '28': ProgrammingError, # invalid authorization specification '2A': ProgrammingError, # direct SQL syntax error or access rule violation '2B': DatabaseError, # dependent privilege descriptors still exist '2C': ProgrammingError, # invalid character set name '2D': DatabaseError, # invalid transaction termination '2E': DatabaseError, # invalid connection name '33': DatabaseError, # invalid SQL descriptor name '34': ProgrammingError, # invalid cursor name '35': ProgrammingError, # invalid condition number '37': ProgrammingError, # dynamic SQL syntax error or access rule violation '3C': ProgrammingError, # ambiguous cursor name '3D': ProgrammingError, # invalid catalog name '3F': ProgrammingError, # invalid schema name '40': InternalError, # transaction rollback '42': ProgrammingError, # syntax error or access rule violation '44': InternalError, # with check option violation 'HZ': OperationalError, # remote database access 'XA': IntegrityError, '0K': OperationalError, 'HY': DatabaseError, # default when no SQLState provided by MySQL server } _ERROR_EXCEPTIONS = { 1243: ProgrammingError, 1210: ProgrammingError, 2002: InterfaceError, 2013: OperationalError, 2049: NotSupportedError, 2055: OperationalError, 2061: InterfaceError, } mysql-utilities-1.6.4/mysql/connector/cursor_cext.py0000644001577100752670000006230312717544565022457 0ustar pb2usercommon# MySQL Connector/Python - MySQL driver written in Python. # Copyright (c) 2014, 2015, Oracle and/or its affiliates. All rights reserved. # MySQL Connector/Python is licensed under the terms of the GPLv2 # , like most # MySQL Connectors. There are special exceptions to the terms and # conditions of the GPLv2 as it is applied to this software, see the # FOSS License Exception # . # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA """Cursor classes using the C Extension """ from collections import namedtuple import re import weakref from .abstracts import MySQLConnectionAbstract, MySQLCursorAbstract from .catch23 import PY2, isunicode from . import errors from .errorcode import CR_NO_RESULT_SET from .cursor import ( RE_PY_PARAM, RE_SQL_INSERT_STMT, RE_SQL_ON_DUPLICATE, RE_SQL_COMMENT, RE_SQL_INSERT_VALUES, RE_SQL_SPLIT_STMTS ) from _mysql_connector import MySQLInterfaceError # pylint: disable=F0401 class _ParamSubstitutor(object): """ Substitutes parameters into SQL statement. 
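    A minimal sketch of how this class cooperates with RE_PY_PARAM;
    the parameter values are assumed to be bytes that were already
    converted and quoted, as self._cnx.prepare_for_mysql() returns
    them:

        psub = _ParamSubstitutor([b"'abc'", b'42'])
        stmt = RE_PY_PARAM.sub(psub, b"SELECT %s, %s")
        # stmt == b"SELECT 'abc', 42"; psub.remaining == 0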
""" def __init__(self, params): self.params = params self.index = 0 def __call__(self, matchobj): index = self.index self.index += 1 try: return self.params[index] except IndexError: raise errors.ProgrammingError( "Not enough parameters for the SQL statement") @property def remaining(self): """Returns number of parameters remaining to be substituted""" return len(self.params) - self.index class CMySQLCursor(MySQLCursorAbstract): """Default cursor for interacting with MySQL using C Extension""" _raw = False _buffered = False _raw_as_string = False def __init__(self, connection): """Initialize""" MySQLCursorAbstract.__init__(self) self._insert_id = 0 self._warning_count = 0 self._warnings = None self._affected_rows = -1 self._rowcount = -1 self._nextrow = None self._executed = None self._executed_list = [] self._stored_results = [] if not isinstance(connection, MySQLConnectionAbstract): raise errors.InterfaceError(errno=2048) self._cnx = weakref.proxy(connection) def reset(self, free=True): """Reset the cursor When free is True (default) the result will be freed. """ self._rowcount = -1 self._nextrow = None self._affected_rows = -1 self._insert_id = 0 self._warning_count = 0 self._warnings = None self._warnings = None self._warning_count = 0 self._description = None self._executed = None self._executed_list = [] if free and self._cnx: self._cnx.free_result() super(CMySQLCursor, self).reset() def _fetch_warnings(self): """Fetch warnings Fetch warnings doing a SHOW WARNINGS. Can be called after getting the result. Returns a result set or None when there were no warnings. Raises errors.Error (or subclass) on errors. Returns list of tuples or None. """ warnings = [] try: # force freeing result self._cnx.consume_results() _ = self._cnx.cmd_query("SHOW WARNINGS") warnings = self._cnx.get_rows() self._cnx.consume_results() except MySQLInterfaceError as exc: raise errors.get_mysql_exception(msg=exc.msg, errno=exc.errno, sqlstate=exc.sqlstate) except Exception as err: raise errors.InterfaceError( "Failed getting warnings; {0}".format(str(err))) if warnings: return warnings return None def _handle_warnings(self): """Handle possible warnings after all results are consumed""" if self._cnx.get_warnings is True and self._warning_count: self._warnings = self._fetch_warnings() def _handle_result(self, result): """Handles the result after statement execution""" if 'columns' in result: self._description = result['columns'] self._rowcount = 0 self._handle_resultset() else: self._insert_id = result['insert_id'] self._warning_count = result['warning_count'] self._affected_rows = result['affected_rows'] self._rowcount = -1 self._handle_warnings() if self._cnx.raise_on_warnings is True and self._warnings: raise errors.get_mysql_exception(*self._warnings[0][1:3]) def _handle_resultset(self): """Handle a result set""" pass def _handle_eof(self): """Handle end of reading the result Raises an errors.Error on errors. """ self._warning_count = self._cnx.warning_count self._handle_warnings() if self._cnx.raise_on_warnings is True and self._warnings: raise errors.get_mysql_exception(*self._warnings[0][1:3]) if not self._cnx.more_results: self._cnx.free_result() def _execute_iter(self): """Generator returns MySQLCursor objects for multiple statements Deprecated: use nextset() method directly. This method is only used when multiple statements are executed by the execute() method. 
It uses zip() to make an iterator from the given query_iter (result of MySQLConnection.cmd_query_iter()) and the list of statements that were executed. """ executed_list = RE_SQL_SPLIT_STMTS.split(self._executed) i = 0 self._executed = executed_list[i] yield self while True: try: if not self.nextset(): raise StopIteration except errors.InterfaceError as exc: # Result without result set if exc.errno != CR_NO_RESULT_SET: raise i += 1 self._executed = executed_list[i].strip() yield self return def execute(self, operation, params=(), multi=False): """Execute given statement using given parameters Deprecated: The multi argument is not needed and nextset() should be used to handle multiple result sets. """ if not operation: return None if not self._cnx: raise errors.ProgrammingError("Cursor is not connected") self._cnx.handle_unread_result() stmt = '' self.reset() try: if isunicode(operation): stmt = operation.encode(self._cnx.python_charset) else: stmt = operation except (UnicodeDecodeError, UnicodeEncodeError) as err: raise errors.ProgrammingError(str(err)) if params: prepared = self._cnx.prepare_for_mysql(params) if isinstance(prepared, dict): for key, value in prepared.items(): if PY2: stmt = stmt.replace("%({0})s".format(key), value) else: stmt = stmt.replace("%({0})s".format(key).encode(), value) elif isinstance(prepared, (list, tuple)): psub = _ParamSubstitutor(prepared) stmt = RE_PY_PARAM.sub(psub, stmt) if psub.remaining != 0: raise errors.ProgrammingError( "Not all parameters were used in the SQL statement") try: result = self._cnx.cmd_query(stmt, raw=self._raw, buffered=self._buffered, raw_as_string=self._raw_as_string) except MySQLInterfaceError as exc: raise errors.get_mysql_exception(msg=exc.msg, errno=exc.errno, sqlstate=exc.sqlstate) self._executed = stmt self._handle_result(result) if multi: return self._execute_iter() return None def _batch_insert(self, operation, seq_params): """Implements multi row insert""" def remove_comments(match): """Remove comments from INSERT statements. This function is used while removing comments from INSERT statements. If the matched string is a comment not enclosed by quotes, it returns an empty string, else the string itself. """ if match.group(1): return "" else: return match.group(2) tmp = re.sub(RE_SQL_ON_DUPLICATE, '', re.sub(RE_SQL_COMMENT, remove_comments, operation)) matches = re.search(RE_SQL_INSERT_VALUES, tmp) if not matches: raise errors.InterfaceError( "Failed rewriting statement for multi-row INSERT. " "Check SQL syntax." 
) fmt = matches.group(1).encode(self._cnx.charset) values = [] try: stmt = operation.encode(self._cnx.charset) for params in seq_params: tmp = fmt prepared = self._cnx.prepare_for_mysql(params) if isinstance(prepared, dict): for key, value in prepared.items(): tmp = tmp.replace("%({0})s".format(key).encode(), value) elif isinstance(prepared, (list, tuple)): psub = _ParamSubstitutor(prepared) tmp = RE_PY_PARAM.sub(psub, tmp) if psub.remaining != 0: raise errors.ProgrammingError( "Not all parameters were used in the SQL statement") values.append(tmp) if fmt in stmt: stmt = stmt.replace(fmt, b','.join(values), 1) self._executed = stmt return stmt else: return None except (UnicodeDecodeError, UnicodeEncodeError) as err: raise errors.ProgrammingError(str(err)) except Exception as err: raise errors.InterfaceError( "Failed executing the operation; %s" % err) def executemany(self, operation, seq_params): """Execute the given operation multiple times""" if not operation or not seq_params: return None if not self._cnx: raise errors.ProgrammingError("Cursor is not connected") self._cnx.handle_unread_result() if not isinstance(seq_params, (list, tuple)): raise errors.ProgrammingError( "Parameters for query must be list or tuple.") # Optimize INSERTs by batching them if re.match(RE_SQL_INSERT_STMT, operation): if not seq_params: self._rowcount = 0 return stmt = self._batch_insert(operation, seq_params) if stmt is not None: return self.execute(stmt) rowcnt = 0 try: for params in seq_params: self.execute(operation, params) try: while True: if self._description: rowcnt += len(self._cnx.get_rows()) else: rowcnt += self._affected_rows if not self.nextset(): break except StopIteration: # No more results pass except (ValueError, TypeError) as err: raise errors.ProgrammingError( "Failed executing the operation; {0}".format(err)) self._rowcount = rowcnt @property def description(self): """Returns description of columns in a result""" return self._description @property def rowcount(self): """Returns the number of rows produced or affected""" if self._rowcount == -1: return self._affected_rows else: return self._rowcount @property def lastrowid(self): """Returns the value generated for an AUTO_INCREMENT column""" return self._insert_id def close(self): """Close the cursor The result will be freed. 
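    Illustrative sketch (cnx is assumed to be an open connection);
    a second call on an already closed cursor simply returns False:

        cur = cnx.cursor()
        cur.execute("SELECT 1")
        cur.fetchall()
        cur.close()   # handles any unread result, returns True
        cur.close()   # cursor no longer connected, returns False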
""" if not self._cnx: return False self._cnx.handle_unread_result() self._warnings = None self._cnx = None return True def callproc(self, procname, args=()): """Calls a stored procedure with the given arguments""" if not procname or not isinstance(procname, str): raise ValueError("procname must be a string") if not isinstance(args, (tuple, list)): raise ValueError("args must be a sequence") argfmt = "@_{name}_arg{index}" self._stored_results = [] results = [] try: argnames = [] argtypes = [] if args: for idx, arg in enumerate(args): argname = argfmt.format(name=procname, index=idx + 1) argnames.append(argname) if isinstance(arg, tuple): argtypes.append(" CAST({0} AS {1})".format(argname, arg[1])) self.execute("SET {0}=%s".format(argname), (arg[0],)) else: argtypes.append(argname) self.execute("SET {0}=%s".format(argname), (arg,)) call = "CALL {0}({1})".format(procname, ','.join(argnames)) result = self._cnx.cmd_query(call, raw=self._raw, raw_as_string=self._raw_as_string) results = [] while self._cnx.result_set_available: result = self._cnx.fetch_eof_columns() # pylint: disable=W0212 if self._raw: cur = CMySQLCursorBufferedRaw(self._cnx._get_self()) else: cur = CMySQLCursorBuffered(self._cnx._get_self()) cur._executed = "(a result of {0})".format(call) cur._handle_result(result) # pylint: enable=W0212 results.append(cur) self._cnx.next_result() self._stored_results = results self._handle_eof() if argnames: self.reset() select = "SELECT {0}".format(','.join(argtypes)) self.execute(select) return self.fetchone() else: return tuple() except errors.Error: raise except Exception as err: raise errors.InterfaceError( "Failed calling stored routine; {0}".format(err)) def nextset(self): """Skip to the next available result set""" if not self._cnx.next_result(): self.reset(free=True) return None self.reset(free=False) if not self._cnx.result_set_available: eof = self._cnx.fetch_eof_status() self._handle_result(eof) raise errors.InterfaceError(errno=CR_NO_RESULT_SET) self._handle_result(self._cnx.fetch_eof_columns()) return True def fetchall(self): """Returns all rows of a query result set Returns a list of tuples. """ if not self._cnx.unread_result: raise errors.InterfaceError("No result set to fetch from.") rows = self._cnx.get_rows() if self._nextrow: rows.insert(0, self._nextrow) if not rows: self._handle_eof() return [] self._rowcount += len(rows) self._handle_eof() return rows def fetchmany(self, size=1): """Returns the next set of rows of a result set""" if self._nextrow: rows = [self._nextrow] size -= 1 else: rows = [] if size and self._cnx.unread_result: rows.extend(self._cnx.get_rows(size)) if rows: self._nextrow = self._cnx.get_row() if not self._nextrow and not self._cnx.more_results: self._cnx.free_result() if not rows: self._handle_eof() return [] self._rowcount += len(rows) return rows def fetchone(self): """Returns next row of a query result set""" row = self._nextrow if not row and self._cnx.unread_result: row = self._cnx.get_row() if row: self._nextrow = self._cnx.get_row() if not self._nextrow and not self._cnx.more_results: self._cnx.free_result() else: self._handle_eof() return None self._rowcount += 1 return row def __iter__(self): """Iteration over the result set Iteration over the result set which calls self.fetchone() and returns the next row. """ return iter(self.fetchone, None) def stored_results(self): """Returns an iterator for stored results This method returns an iterator over results which are stored when callproc() is called. 
The iterator will provide MySQLCursorBuffered instances. Returns a iterator. """ for i in range(len(self._stored_results)): yield self._stored_results[i] self._stored_results = [] if PY2: def next(self): """Used for iterating over the result set.""" return self.__next__() def __next__(self): """Iteration over the result set Used for iterating over the result set. Calls self.fetchone() to get the next row. Raises StopIteration when no more rows are available. """ try: row = self.fetchone() except errors.InterfaceError: raise StopIteration if not row: raise StopIteration return row @property def column_names(self): """Returns column names This property returns the columns names as a tuple. Returns a tuple. """ if not self.description: return () return tuple([d[0] for d in self.description]) @property def statement(self): """Returns the executed statement This property returns the executed statement. When multiple statements were executed, the current statement in the iterator will be returned. """ try: return self._executed.strip().decode('utf8') except AttributeError: return self._executed.strip() @property def with_rows(self): """Returns whether the cursor could have rows returned This property returns True when column descriptions are available and possibly also rows, which will need to be fetched. Returns True or False. """ if self.description: return True return False def __str__(self): fmt = "{class_name}: {stmt}" if self._executed: try: executed = self._executed.decode('utf-8') except AttributeError: executed = self._executed if len(executed) > 40: executed = executed[:40] + '..' else: executed = '(Nothing executed yet)' return fmt.format(class_name=self.__class__.__name__, stmt=executed) class CMySQLCursorBuffered(CMySQLCursor): """Cursor using C Extension buffering results""" def __init__(self, connection): """Initialize""" super(CMySQLCursorBuffered, self).__init__(connection) self._rows = None self._next_row = 0 def _handle_resultset(self): """Handle a result set""" self._rows = self._cnx.get_rows() self._next_row = 0 self._rowcount = len(self._rows) self._handle_eof() def reset(self, free=True): """Reset the cursor to default""" self._rows = None self._next_row = 0 super(CMySQLCursorBuffered, self).reset(free=free) def _fetch_row(self): """Returns the next row in the result set Returns a tuple or None. 
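    A sketch of the behaviour this helper gives the buffered cursor:
    the full result is cached in self._rows and _next_row only walks
    that cache (cnx is assumed to be an open connection):

        cur = cnx.cursor(buffered=True)
        cur.execute("SELECT 1 UNION SELECT 2")
        cur.fetchone()   # first cached row
        cur.fetchone()   # second cached row
        cur.fetchone()   # None, the cache is exhausted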
""" row = None try: row = self._rows[self._next_row] except IndexError: return None else: self._next_row += 1 return row def fetchall(self): if self._rows is None: raise errors.InterfaceError("No result set to fetch from.") res = self._rows[self._next_row:] self._next_row = len(self._rows) return res def fetchmany(self, size=1): res = [] cnt = size or self.arraysize while cnt > 0: cnt -= 1 row = self._fetch_row() if row: res.append(row) else: break return res def fetchone(self): return self._fetch_row() class CMySQLCursorRaw(CMySQLCursor): """Cursor using C Extension return raw results""" _raw = True class CMySQLCursorBufferedRaw(CMySQLCursorBuffered): """Cursor using C Extension buffering raw results""" _raw = True class CMySQLCursorDict(CMySQLCursor): """Cursor using C Extension returning rows as dictionaries""" _raw = False def fetchone(self): """Returns all rows of a query result set """ row = super(CMySQLCursorDict, self).fetchone() if row: return dict(zip(self.column_names, row)) else: return None def fetchmany(self, size=1): """Returns next set of rows as list of dictionaries""" res = super(CMySQLCursorDict, self).fetchmany(size=size) return [dict(zip(self.column_names, row)) for row in res] def fetchall(self): """Returns all rows of a query result set as list of dictionaries""" res = super(CMySQLCursorDict, self).fetchall() return [dict(zip(self.column_names, row)) for row in res] class CMySQLCursorBufferedDict(CMySQLCursorBuffered): """Cursor using C Extension buffering and returning rows as dictionaries""" _raw = False def _fetch_row(self): row = super(CMySQLCursorBufferedDict, self)._fetch_row() if row: return dict(zip(self.column_names, row)) else: return None def fetchall(self): res = super(CMySQLCursorBufferedDict, self).fetchall() return [dict(zip(self.column_names, row)) for row in res] class CMySQLCursorNamedTuple(CMySQLCursor): """Cursor using C Extension returning rows as named tuples""" def _handle_resultset(self): """Handle a result set""" super(CMySQLCursorNamedTuple, self)._handle_resultset() # pylint: disable=W0201 self.named_tuple = namedtuple('Row', self.column_names) # pylint: enable=W0201 def fetchone(self): """Returns all rows of a query result set """ row = super(CMySQLCursorNamedTuple, self).fetchone() if row: return self.named_tuple(*row) else: return None def fetchmany(self, size=1): """Returns next set of rows as list of named tuples""" res = super(CMySQLCursorNamedTuple, self).fetchmany(size=size) return [self.named_tuple(*row) for row in res] def fetchall(self): """Returns all rows of a query result set as list of named tuples""" res = super(CMySQLCursorNamedTuple, self).fetchall() return [self.named_tuple(*row) for row in res] class CMySQLCursorBufferedNamedTuple(CMySQLCursorBuffered): """Cursor using C Extension buffering and returning rows as named tuples""" def _handle_resultset(self): super(CMySQLCursorBufferedNamedTuple, self)._handle_resultset() # pylint: disable=W0201 self.named_tuple = namedtuple('Row', self.column_names) # pylint: enable=W0201 def _fetch_row(self): row = super(CMySQLCursorBufferedNamedTuple, self)._fetch_row() if row: return self.named_tuple(*row) else: return None def fetchall(self): res = super(CMySQLCursorBufferedNamedTuple, self).fetchall() return [self.named_tuple(*row) for row in res] class CMySQLCursorPrepared(CMySQLCursor): """Cursor using Prepare Statement """ def __init__(self, connection): super(CMySQLCursorPrepared, self).__init__(connection) raise NotImplementedError( "Alternative: Use 
connection.MySQLCursorPrepared") mysql-utilities-1.6.4/mysql/connector/abstracts.py0000644001577100752670000011534312717544565022110 0ustar pb2usercommon# MySQL Connector/Python - MySQL driver written in Python. # Copyright (c) 2014, 2015, Oracle and/or its affiliates. All rights reserved. # MySQL Connector/Python is licensed under the terms of the GPLv2 # , like most # MySQL Connectors. There are special exceptions to the terms and # conditions of the GPLv2 as it is applied to this software, see the # FOSS License Exception # . # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA """Module gathering all abstract base classes""" # Issue with pylint and NotImplementedError # pylint: disable=R0921 from abc import ABCMeta, abstractmethod, abstractproperty import re import time from .catch23 import make_abc, BYTE_TYPES from .conversion import MySQLConverterBase from .constants import ClientFlag, CharacterSet, DEFAULT_CONFIGURATION from .optionfiles import MySQLOptionsParser from . import errors @make_abc(ABCMeta) class MySQLConnectionAbstract(object): """Abstract class for classes connecting to a MySQL server""" def __init__(self, **kwargs): """Initialize""" self._client_flags = ClientFlag.get_default() self._charset_id = 33 self._sql_mode = None self._time_zone = None self._autocommit = False self._server_version = None self._handshake = None self._user = '' self._password = '' self._database = '' self._host = '127.0.0.1' self._port = 3306 self._unix_socket = None self._client_host = '' self._client_port = 0 self._ssl = {} self._force_ipv6 = False self._use_unicode = True self._get_warnings = False self._raise_on_warnings = False self._connection_timeout = None self._buffered = False self._unread_result = False self._have_next_result = False self._raw = False self._in_transaction = False self._prepared_statements = None self._ssl_active = False self._auth_plugin = None self._pool_config_version = None self.converter = None self._converter_class = None self._compress = False self._consume_results = False def _get_self(self): """Return self for weakref.proxy This method is used when the original object is needed when using weakref.proxy. """ return self def _read_option_files(self, config): """ Read option files for connection parameters. Checks if connection arguments contain option file arguments, and then reads option files accordingly. 
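    Illustrative sketch (the option file path is hypothetical):

        config = {'option_files': '/etc/mysql/my.cnf',
                  'option_groups': ['client', 'connector_python']}
        config = self._read_option_files(config)
        # 'option_files' and 'option_groups' have been consumed;
        # options read from the file are merged into the returned
        # dictionary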
""" if 'option_files' in config: try: if isinstance(config['option_groups'], str): config['option_groups'] = [config['option_groups']] groups = config['option_groups'] del config['option_groups'] except KeyError: groups = ['client', 'connector_python'] if isinstance(config['option_files'], str): config['option_files'] = [config['option_files']] option_parser = MySQLOptionsParser(list(config['option_files']), keep_dashes=False) del config['option_files'] config_from_file = option_parser.get_groups_as_dict_with_priority( *groups) config_options = {} for group in groups: try: for option, value in config_from_file[group].items(): try: if option == 'socket': option = 'unix_socket' # pylint: disable=W0104 DEFAULT_CONFIGURATION[option] # pylint: enable=W0104 if (option not in config_options or config_options[option][1] <= value[1]): config_options[option] = value except KeyError: if group is 'connector_python': raise AttributeError("Unsupported argument " "'{0}'".format(option)) except KeyError: continue for option, value in config_options.items(): if option not in config: try: config[option] = eval(value[0]) # pylint: disable=W0123 except (NameError, SyntaxError): config[option] = value[0] return config @property def user(self): """User used while connecting to MySQL""" return self._user @property def server_host(self): """MySQL server IP address or name""" return self._host @property def server_port(self): "MySQL server TCP/IP port" return self._port @property def unix_socket(self): "MySQL Unix socket file location" return self._unix_socket @abstractproperty def database(self): """Get the current database""" pass @database.setter def database(self, value): """Set the current database""" self.cmd_query("USE %s" % value) @property def can_consume_results(self): """Returns whether to consume results""" return self._consume_results def config(self, **kwargs): """Configure the MySQL Connection This method allows you to configure the MySQLConnection instance. Raises on errors. 
""" config = kwargs.copy() if 'dsn' in config: raise errors.NotSupportedError("Data source name is not supported") # Read option files self._read_option_files(config) # Configure how we handle MySQL warnings try: self.get_warnings = config['get_warnings'] del config['get_warnings'] except KeyError: pass # Leave what was set or default try: self.raise_on_warnings = config['raise_on_warnings'] del config['raise_on_warnings'] except KeyError: pass # Leave what was set or default # Configure client flags try: default = ClientFlag.get_default() self.set_client_flags(config['client_flags'] or default) del config['client_flags'] except KeyError: pass # Missing client_flags-argument is OK try: if config['compress']: self._compress = True self.set_client_flags([ClientFlag.COMPRESS]) except KeyError: pass # Missing compress argument is OK try: if not config['allow_local_infile']: self.set_client_flags([-ClientFlag.LOCAL_FILES]) except KeyError: pass # Missing allow_local_infile argument is OK try: if not config['consume_results']: self._consume_results = False else: self._consume_results = True except KeyError: self._consume_results = False # Configure character set and collation if 'charset' in config or 'collation' in config: try: charset = config['charset'] del config['charset'] except KeyError: charset = None try: collation = config['collation'] del config['collation'] except KeyError: collation = None self._charset_id = CharacterSet.get_charset_info(charset, collation)[0] # Set converter class try: self.set_converter_class(config['converter_class']) except KeyError: pass # Using default converter class except TypeError: raise AttributeError("Converter class should be a subclass " "of conversion.MySQLConverterBase.") # Compatible configuration with other drivers compat_map = [ # (,) ('db', 'database'), ('passwd', 'password'), ('connect_timeout', 'connection_timeout'), ] for compat, translate in compat_map: try: if translate not in config: config[translate] = config[compat] del config[compat] except KeyError: pass # Missing compat argument is OK # Configure login information if 'user' in config or 'password' in config: try: user = config['user'] del config['user'] except KeyError: user = self._user try: password = config['password'] del config['password'] except KeyError: password = self._password self.set_login(user, password) # Check network locations try: self._port = int(config['port']) del config['port'] except KeyError: pass # Missing port argument is OK except ValueError: raise errors.InterfaceError( "TCP/IP port number should be an integer") # Other configuration set_ssl_flag = False for key, value in config.items(): try: DEFAULT_CONFIGURATION[key] except KeyError: raise AttributeError("Unsupported argument '{0}'".format(key)) # SSL Configuration if key.startswith('ssl_'): set_ssl_flag = True self._ssl.update({key.replace('ssl_', ''): value}) else: attribute = '_' + key try: setattr(self, attribute, value.strip()) except AttributeError: setattr(self, attribute, value) if set_ssl_flag: if 'verify_cert' not in self._ssl: self._ssl['verify_cert'] = \ DEFAULT_CONFIGURATION['ssl_verify_cert'] # Make sure both ssl_key/ssl_cert are set, or neither (XOR) if 'ca' not in self._ssl or self._ssl['ca'] is None: raise AttributeError( "Missing ssl_ca argument.") if bool('key' in self._ssl) != bool('cert' in self._ssl): raise AttributeError( "ssl_key and ssl_cert need to be both " "specified, or neither." 
) # Make sure key/cert are set to None elif not set(('key', 'cert')) <= set(self._ssl): self._ssl['key'] = None self._ssl['cert'] = None elif (self._ssl['key'] is None) != (self._ssl['cert'] is None): raise AttributeError( "ssl_key and ssl_cert need to be both " "set, or neither." ) self.set_client_flags([ClientFlag.SSL]) def _check_server_version(self, server_version): """Check the MySQL version This method will check the MySQL version and raise an InterfaceError when it is not supported or invalid. It will return the version as a tuple with major, minor and patch. Raises InterfaceError if invalid server version. Returns tuple """ if isinstance(server_version, BYTE_TYPES): server_version = server_version.decode() # pylint: disable=W1401 regex_ver = re.compile(r"^(\d{1,2})\.(\d{1,2})\.(\d{1,3})(.*)") # pylint: enable=W1401 match = regex_ver.match(server_version) if not match: raise errors.InterfaceError("Failed parsing MySQL version") version = tuple([int(v) for v in match.groups()[0:3]]) if 'fabric' in match.group(4).lower(): if version < (1, 5): raise errors.InterfaceError( "MySQL Fabric '{0}' is not supported".format( server_version)) elif version < (4, 1): raise errors.InterfaceError( "MySQL Version '{0}' is not supported.".format(server_version)) return version def get_server_version(self): """Get the MySQL version This method returns the MySQL server version as a tuple. If not previously connected, it will return None. Returns a tuple or None. """ return self._server_version def get_server_info(self): """Get the original MySQL version information This method returns the original MySQL server as text. If not previously connected, it will return None. Returns a string or None. """ try: return self._handshake['server_version_original'] except (TypeError, KeyError): return None @abstractproperty def in_transaction(self): """MySQL session has started a transaction""" pass def set_client_flags(self, flags): """Set the client flags The flags-argument can be either an int or a list (or tuple) of ClientFlag-values. If it is an integer, it will set client_flags to flags as is. If flags is a list (or tuple), each flag will be set or unset when it's negative. set_client_flags([ClientFlag.FOUND_ROWS,-ClientFlag.LONG_FLAG]) Raises ProgrammingError when the flags argument is not a set or an integer bigger than 0. Returns self.client_flags """ if isinstance(flags, int) and flags > 0: self._client_flags = flags elif isinstance(flags, (tuple, list)): for flag in flags: if flag < 0: self._client_flags &= ~abs(flag) else: self._client_flags |= flag else: raise errors.ProgrammingError( "set_client_flags expect integer (>0) or set") return self._client_flags def isset_client_flag(self, flag): """Check if a client flag is set""" if (self._client_flags & flag) > 0: return True return False @property def time_zone(self): """Get the current time zone""" return self.info_query("SELECT @@session.time_zone")[0] @time_zone.setter def time_zone(self, value): """Set the time zone""" self.cmd_query("SET @@session.time_zone = '{0}'".format(value)) self._time_zone = value @property def sql_mode(self): """Get the SQL mode""" return self.info_query("SELECT @@session.sql_mode")[0] @sql_mode.setter def sql_mode(self, value): """Set the SQL mode This method sets the SQL Mode for the current connection. The value argument can be either a string with comma separate mode names, or a sequence of mode names. 
It is good practice to use the constants class SQLMode: from mysql.connector.constants import SQLMode cnx.sql_mode = [SQLMode.NO_ZERO_DATE, SQLMode.REAL_AS_FLOAT] """ if isinstance(value, (list, tuple)): value = ','.join(value) self.cmd_query("SET @@session.sql_mode = '{0}'".format(value)) self._sql_mode = value @abstractmethod def info_query(self, query): """Send a query which only returns 1 row""" pass def set_login(self, username=None, password=None): """Set login information for MySQL Set the username and/or password for the user connecting to the MySQL Server. """ if username is not None: self._user = username.strip() else: self._user = '' if password is not None: self._password = password else: self._password = '' def set_unicode(self, value=True): """Toggle unicode mode Set whether we return string fields as unicode or not. Default is True. """ self._use_unicode = value if self.converter: self.converter.set_unicode(value) @property def autocommit(self): """Get whether autocommit is on or off""" value = self.info_query("SELECT @@session.autocommit")[0] return True if value == 1 else False @autocommit.setter def autocommit(self, value): """Toggle autocommit""" switch = 'ON' if value else 'OFF' self.cmd_query("SET @@session.autocommit = {0}".format(switch)) self._autocommit = value @property def get_warnings(self): """Get whether this connection retrieves warnings automatically This method returns whether this connection retrieves warnings automatically. Returns True, or False when warnings are not retrieved. """ return self._get_warnings @get_warnings.setter def get_warnings(self, value): """Set whether warnings should be automatically retrieved The toggle-argument must be a boolean. When True, cursors for this connection will retrieve information about warnings (if any). Raises ValueError on error. """ if not isinstance(value, bool): raise ValueError("Expected a boolean type") self._get_warnings = value @property def raise_on_warnings(self): """Get whether this connection raises an error on warnings This method returns whether this connection will raise errors when MySQL reports warnings. Returns True or False. """ return self._raise_on_warnings @raise_on_warnings.setter def raise_on_warnings(self, value): """Set whether warnings raise an error The toggle-argument must be a boolean. When True, cursors for this connection will raise an error when MySQL reports warnings. Raising on warnings implies retrieving warnings automatically. In other words: warnings will be set to True. If set to False, warnings will be also set to False. Raises ValueError on error. """ if not isinstance(value, bool): raise ValueError("Expected a boolean type") self._raise_on_warnings = value self._get_warnings = value @property def unread_result(self): """Get whether there is an unread result This method is used by cursors to check whether another cursor still needs to retrieve its result set. Returns True, or False when there is no unread result. """ return self._unread_result @unread_result.setter def unread_result(self, value): """Set whether there is an unread result This method is used by cursors to let other cursors know there is still a result set that needs to be retrieved. Raises ValueError on errors. """ if not isinstance(value, bool): raise ValueError("Expected a boolean type") self._unread_result = value @property def charset(self): """Returns the character set for current connection This property returns the character set name of the current connection. 
The server is queried when the connection is active. If not connected, the configured character set name is returned. Returns a string. """ return CharacterSet.get_info(self._charset_id)[0] @property def python_charset(self): """Returns the Python character set for current connection This property returns the character set name of the current connection. Note that, unlike property charset, this checks if the previously set character set is supported by Python and if not, it returns the equivalent character set that Python supports. Returns a string. """ encoding = CharacterSet.get_info(self._charset_id)[0] if encoding in ('utf8mb4', 'binary'): return 'utf8' else: return encoding def set_charset_collation(self, charset=None, collation=None): """Sets the character set and collation for the current connection This method sets the character set and collation to be used for the current connection. The charset argument can be either the name of a character set as a string, or the numerical equivalent as defined in constants.CharacterSet. When the collation is not given, the default will be looked up and used. For example, the following will set the collation for the latin1 character set to latin1_general_ci: set_charset('latin1','latin1_general_ci') """ if charset: if isinstance(charset, int): self._charset_id = charset (self._charset_id, charset_name, collation_name) = \ CharacterSet.get_charset_info(charset) elif isinstance(charset, str): (self._charset_id, charset_name, collation_name) = \ CharacterSet.get_charset_info(charset, collation) else: raise ValueError( "charset should be either integer, string or None") elif collation: (self._charset_id, charset_name, collation_name) = \ CharacterSet.get_charset_info(collation=collation) self._execute_query("SET NAMES '{0}' COLLATE '{1}'".format( charset_name, collation_name)) try: # Required for C Extension self.set_character_set_name(charset_name) # pylint: disable=E1101 except AttributeError: # Not required for pure Python connection pass if self.converter: self.converter.set_charset(charset_name) @property def collation(self): """Returns the collation for current connection This property returns the collation name of the current connection. The server is queried when the connection is active. If not connected, the configured collation name is returned. Returns a string. """ return CharacterSet.get_charset_info(self._charset_id)[2] @abstractmethod def _do_handshake(self): """Gather information of the MySQL server before authentication""" pass @abstractmethod def _open_connection(self): """Open the connection to the MySQL server""" pass def _post_connection(self): """Executes commands after connection has been established This method executes commands after the connection has been established. Some setting like autocommit, character set, and SQL mode are set using this method. """ self.set_charset_collation(self._charset_id) self.autocommit = self._autocommit if self._time_zone: self.time_zone = self._time_zone if self._sql_mode: self.sql_mode = self._sql_mode @abstractmethod def disconnect(self): """Disconnect from the MySQL server""" pass close = disconnect def connect(self, **kwargs): """Connect to the MySQL server This method sets up the connection to the MySQL server. If no arguments are given, it will use the already configured or default values. 
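    Illustrative sketch (host and credentials are placeholders):

        cnx.connect(host='127.0.0.1', port=3306, user='app',
                    password='secret', database='test')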
""" if len(kwargs) > 0: self.config(**kwargs) self.disconnect() self._open_connection() self._post_connection() def reconnect(self, attempts=1, delay=0): """Attempt to reconnect to the MySQL server The argument attempts should be the number of times a reconnect is tried. The delay argument is the number of seconds to wait between each retry. You may want to set the number of attempts higher and use delay when you expect the MySQL server to be down for maintenance or when you expect the network to be temporary unavailable. Raises InterfaceError on errors. """ counter = 0 while counter != attempts: counter = counter + 1 try: self.disconnect() self.connect() if self.is_connected(): break except Exception as err: # pylint: disable=W0703 if counter == attempts: msg = "Can not reconnect to MySQL after {0} "\ "attempt(s): {1}".format(attempts, str(err)) raise errors.InterfaceError(msg) if delay > 0: time.sleep(delay) @abstractmethod def is_connected(self): """Reports whether the connection to MySQL Server is available""" pass @abstractmethod def ping(self, reconnect=False, attempts=1, delay=0): """Check availability of the MySQL server""" pass @abstractmethod def commit(self): """Commit current transaction""" pass @abstractmethod def cursor(self, buffered=None, raw=None, prepared=None, cursor_class=None, dictionary=None, named_tuple=None): """Instantiates and returns a cursor""" pass @abstractmethod def _execute_query(self, query): """Execute a query""" pass @abstractmethod def rollback(self): """Rollback current transaction""" pass def start_transaction(self, consistent_snapshot=False, isolation_level=None, readonly=None): """Start a transaction This method explicitly starts a transaction sending the START TRANSACTION statement to the MySQL server. You can optionally set whether there should be a consistent snapshot, which isolation level you need or which access mode i.e. READ ONLY or READ WRITE. For example, to start a transaction with isolation level SERIALIZABLE, you would do the following: >>> cnx = mysql.connector.connect(..) >>> cnx.start_transaction(isolation_level='SERIALIZABLE') Raises ProgrammingError when a transaction is already in progress and when ValueError when isolation_level specifies an Unknown level. """ if self.in_transaction: raise errors.ProgrammingError("Transaction already in progress") if isolation_level: level = isolation_level.strip().replace('-', ' ').upper() levels = ['READ UNCOMMITTED', 'READ COMMITTED', 'REPEATABLE READ', 'SERIALIZABLE'] if level not in levels: raise ValueError( 'Unknown isolation level "{0}"'.format(isolation_level)) self._execute_query( "SET TRANSACTION ISOLATION LEVEL {0}".format(level)) if readonly is not None: if self._server_version < (5, 6, 5): raise ValueError( "MySQL server version {0} does not support " "this feature".format(self._server_version)) if readonly: access_mode = 'READ ONLY' else: access_mode = 'READ WRITE' self._execute_query( "SET TRANSACTION {0}".format(access_mode)) query = "START TRANSACTION" if consistent_snapshot: query += " WITH CONSISTENT SNAPSHOT" self.cmd_query(query) def reset_session(self, user_variables=None, session_variables=None): """Clears the current active session This method resets the session state, if the MySQL server is 5.7.3 or later active session will be reset without re-authenticating. For other server versions session will be reset by re-authenticating. It is possible to provide a sequence of variables and their values to be set after clearing the session. 
This is possible for both user defined variables and session variables. This method takes two arguments user_variables and session_variables which are dictionaries. Raises OperationalError if not connected, InternalError if there are unread results and InterfaceError on errors. """ if not self.is_connected(): raise errors.OperationalError("MySQL Connection not available.") try: self.cmd_reset_connection() except (errors.NotSupportedError, NotImplementedError): if self._compress: raise errors.NotSupportedError( "Reset session is not supported with compression for " "MySQL server version 5.7.2 or earlier.") else: self.cmd_change_user(self._user, self._password, self._database, self._charset_id) if user_variables or session_variables: cur = self.cursor() if user_variables: for key, value in user_variables.items(): cur.execute("SET @`{0}` = %s".format(key), (value,)) if session_variables: for key, value in session_variables.items(): cur.execute("SET SESSION `{0}` = %s".format(key), (value,)) cur.close() def set_converter_class(self, convclass): """ Set the converter class to be used. This should be a class overloading methods and members of conversion.MySQLConverter. """ if convclass and issubclass(convclass, MySQLConverterBase): charset_name = CharacterSet.get_info(self._charset_id)[0] self._converter_class = convclass self.converter = convclass(charset_name, self._use_unicode) else: raise TypeError("Converter class should be a subclass " "of conversion.MySQLConverterBase.") @abstractmethod def get_rows(self, count=None, binary=False, columns=None): """Get all rows returned by the MySQL server""" pass def cmd_init_db(self, database): """Change the current database""" raise NotImplementedError def cmd_query(self, query, raw=False, buffered=False, raw_as_string=False): """Send a query to the MySQL server""" raise NotImplementedError def cmd_query_iter(self, statements): """Send one or more statements to the MySQL server""" raise NotImplementedError def cmd_refresh(self, options): """Send the Refresh command to the MySQL server""" raise NotImplementedError def cmd_quit(self): """Close the current connection with the server""" raise NotImplementedError def cmd_shutdown(self, shutdown_type=None): """Shut down the MySQL Server""" raise NotImplementedError def cmd_statistics(self): """Send the statistics command to the MySQL Server""" raise NotImplementedError def cmd_process_info(self): """Get the process list of the MySQL Server This method is a placeholder to notify that the PROCESS_INFO command is not supported by raising the NotSupportedError. The command "SHOW PROCESSLIST" should be send using the cmd_query()-method or using the INFORMATION_SCHEMA database. Raises NotSupportedError exception """ raise errors.NotSupportedError( "Not implemented. 
Use SHOW PROCESSLIST or INFORMATION_SCHEMA") def cmd_process_kill(self, mysql_pid): """Kill a MySQL process""" raise NotImplementedError def cmd_debug(self): """Send the DEBUG command""" raise NotImplementedError def cmd_ping(self): """Send the PING command""" raise NotImplementedError def cmd_change_user(self, username='', password='', database='', charset=33): """Change the current logged in user""" raise NotImplementedError def cmd_stmt_prepare(self, statement): """Prepare a MySQL statement""" raise NotImplementedError def cmd_stmt_execute(self, statement_id, data=(), parameters=(), flags=0): """Execute a prepared MySQL statement""" raise NotImplementedError def cmd_stmt_close(self, statement_id): """Deallocate a prepared MySQL statement""" raise NotImplementedError def cmd_stmt_send_long_data(self, statement_id, param_id, data): """Send data for a column""" raise NotImplementedError def cmd_stmt_reset(self, statement_id): """Reset data for prepared statement sent as long data""" raise NotImplementedError def cmd_reset_connection(self): """Resets the session state without re-authenticating""" raise NotImplementedError @make_abc(ABCMeta) class MySQLCursorAbstract(object): """Abstract cursor class Abstract class defining cursor class with method and members required by the Python Database API Specification v2.0. """ def __init__(self): """Initialization""" self._description = None self._rowcount = -1 self._last_insert_id = None self._warnings = None self.arraysize = 1 @abstractmethod def callproc(self, procname, args=()): """Calls a stored procedure with the given arguments The arguments will be set during this session, meaning they will be called like ___arg where is an enumeration (+1) of the arguments. Coding Example: 1) Defining the Stored Routine in MySQL: CREATE PROCEDURE multiply(IN pFac1 INT, IN pFac2 INT, OUT pProd INT) BEGIN SET pProd := pFac1 * pFac2; END 2) Executing in Python: args = (5,5,0) # 0 is to hold pprod cursor.callproc('multiply', args) print(cursor.fetchone()) Does not return a value, but a result set will be available when the CALL-statement execute successfully. Raises exceptions when something is wrong. """ pass @abstractmethod def close(self): """Close the cursor.""" pass @abstractmethod def execute(self, operation, params=(), multi=False): """Executes the given operation Executes the given operation substituting any markers with the given parameters. For example, getting all rows where id is 5: cursor.execute("SELECT * FROM t1 WHERE id = %s", (5,)) The multi argument should be set to True when executing multiple statements in one operation. If not set and multiple results are found, an InterfaceError will be raised. If warnings where generated, and connection.get_warnings is True, then self._warnings will be a list containing these warnings. Returns an iterator when multi is True, otherwise None. """ pass @abstractmethod def executemany(self, operation, seqparams): """Execute the given operation multiple times The executemany() method will execute the operation iterating over the list of parameters in seq_params. Example: Inserting 3 new employees and their phone number data = [ ('Jane','555-001'), ('Joe', '555-001'), ('John', '555-003') ] stmt = "INSERT INTO employees (name, phone) VALUES ('%s','%s')" cursor.executemany(stmt, data) INSERT statements are optimized by batching the data, that is using the MySQL multiple rows syntax. Results are discarded. If they are needed, consider looping over data using the execute() method. 
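    Note that when the values are passed as a parameter sequence, the
    %s markers should not be wrapped in quotes; quoting is handled
    during parameter conversion. A minimal sketch:

        stmt = "INSERT INTO employees (name, phone) VALUES (%s, %s)"
        cursor.executemany(stmt, data)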
""" pass @abstractmethod def fetchone(self): """Returns next row of a query result set Returns a tuple or None. """ pass @abstractmethod def fetchmany(self, size=1): """Returns the next set of rows of a query result, returning a list of tuples. When no more rows are available, it returns an empty list. The number of rows returned can be specified using the size argument, which defaults to one """ pass @abstractmethod def fetchall(self): """Returns all rows of a query result set Returns a list of tuples. """ pass def nextset(self): """Not Implemented.""" pass def setinputsizes(self, sizes): """Not Implemented.""" pass def setoutputsize(self, size, column=None): """Not Implemented.""" pass def reset(self, free=True): """Reset the cursor to default""" pass @abstractproperty def description(self): """Returns description of columns in a result This property returns a list of tuples describing the columns in in a result set. A tuple is described as follows:: (column_name, type, None, None, None, None, null_ok, column_flags) # Addition to PEP-249 specs Returns a list of tuples. """ return self._description @abstractproperty def rowcount(self): """Returns the number of rows produced or affected This property returns the number of rows produced by queries such as a SELECT, or affected rows when executing DML statements like INSERT or UPDATE. Note that for non-buffered cursors it is impossible to know the number of rows produced before having fetched them all. For those, the number of rows will be -1 right after execution, and incremented when fetching rows. Returns an integer. """ return self._rowcount @abstractproperty def lastrowid(self): """Returns the value generated for an AUTO_INCREMENT column Returns the value generated for an AUTO_INCREMENT column by the previous INSERT or UPDATE statement or None when there is no such value available. Returns a long value or None. """ return self._last_insert_id def fetchwarnings(self): """Returns Warnings.""" return self._warnings mysql-utilities-1.6.4/mysql/connector/utils.py0000644001577100752670000002171112717544565021255 0ustar pb2usercommon# MySQL Connector/Python - MySQL driver written in Python. # Copyright (c) 2009, 2015, Oracle and/or its affiliates. All rights reserved. # MySQL Connector/Python is licensed under the terms of the GPLv2 # , like most # MySQL Connectors. There are special exceptions to the terms and # conditions of the GPLv2 as it is applied to this software, see the # FOSS License Exception # . # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA """Utilities """ from __future__ import print_function __MYSQL_DEBUG__ = False import struct from .catch23 import struct_unpack def intread(buf): """Unpacks the given buffer to an integer""" try: if isinstance(buf, int): return buf length = len(buf) if length == 1: return buf[0] elif length <= 4: tmp = buf + b'\x00'*(4-length) return struct_unpack(' 255: raise ValueError('int1store requires 0 <= i <= 255') else: return bytearray(struct.pack(' 65535: raise ValueError('int2store requires 0 <= i <= 65535') else: return bytearray(struct.pack(' 16777215: raise ValueError('int3store requires 0 <= i <= 16777215') else: return bytearray(struct.pack(' 4294967295: raise ValueError('int4store requires 0 <= i <= 4294967295') else: return bytearray(struct.pack(' 18446744073709551616: raise ValueError('int8store requires 0 <= i <= 2^64') else: return bytearray(struct.pack(' 18446744073709551616: raise ValueError('intstore requires 0 <= i <= 2^64') if i <= 255: formed_string = int1store elif i <= 65535: formed_string = int2store elif i <= 16777215: formed_string = int3store elif i <= 4294967295: formed_string = int4store else: formed_string = int8store return formed_string(i) def lc_int(i): """ Takes an unsigned integer and packs it as bytes, with the information of how much bytes the encoded int takes. """ if i < 0 or i > 18446744073709551616: raise ValueError('Requires 0 <= i <= 2^64') if i < 251: return bytearray(struct.pack(' +----------+------------------------- | length | a string goes here +----------+------------------------- If the string is bigger than 250, then it looks like this: <- 1b -><- 2/3/8 -> +------+-----------+------------------------- | type | length | a string goes here +------+-----------+------------------------- if type == \xfc: length is code in next 2 bytes elif type == \xfd: length is code in next 3 bytes elif type == \xfe: length is code in next 8 bytes NULL has a special value. If the buffer starts with \xfb then it's a NULL and we return None as value. Returns a tuple (trucated buffer, bytes). """ if buf[0] == 251: # \xfb # NULL value return (buf[1:], None) length = lsize = 0 fst = buf[0] if fst <= 250: # \xFA length = fst return (buf[1 + length:], buf[1:length + 1]) elif fst == 252: lsize = 2 elif fst == 253: lsize = 3 if fst == 254: lsize = 8 length = intread(buf[1:lsize + 1]) return (buf[lsize + length + 1:], buf[lsize + 1:length + lsize + 1]) def read_lc_string_list(buf): """Reads all length encoded strings from the given buffer Returns a list of bytes """ byteslst = [] sizes = {252: 2, 253: 3, 254: 8} buf_len = len(buf) pos = 0 while pos < buf_len: first = buf[pos] if first == 255: # Special case when MySQL error 1317 is returned by MySQL. # We simply return None. return None if first == 251: # NULL value byteslst.append(None) pos += 1 else: if first <= 250: length = first byteslst.append(buf[(pos + 1):length + (pos + 1)]) pos += 1 + length else: lsize = 0 try: lsize = sizes[first] except KeyError: return None length = intread(buf[(pos + 1):lsize + (pos + 1)]) byteslst.append( buf[pos + 1 + lsize:length + lsize + (pos + 1)]) pos += 1 + lsize + length return tuple(byteslst) def read_string(buf, end=None, size=None): """ Reads a string up until a character or for a given size. Returns a tuple (trucated buffer, string). 
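    Illustrative sketch (Python 3, where indexing bytes yields ints):

        rest, value = read_string(b'abc\x00tail', end=0)
        # value == b'abc', rest == b'tail'

        rest, value = read_string(b'abcdef', size=3)
        # value == b'abc', rest == b'def'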
""" if end is None and size is None: raise ValueError('read_string() needs either end or size') if end is not None: try: idx = buf.index(end) except ValueError: raise ValueError("end byte not present in buffer") return (buf[idx + 1:], buf[0:idx]) elif size is not None: return read_bytes(buf, size) raise ValueError('read_string() needs either end or size (weird)') def read_int(buf, size): """Read an integer from buffer Returns a tuple (truncated buffer, int) """ try: res = intread(buf[0:size]) except: raise return (buf[size:], res) def read_lc_int(buf): """ Takes a buffer and reads an length code string from the start. Returns a tuple with buffer less the integer and the integer read. """ if not buf: raise ValueError("Empty buffer.") lcbyte = buf[0] if lcbyte == 251: return (buf[1:], None) elif lcbyte < 251: return (buf[1:], int(lcbyte)) elif lcbyte == 252: return (buf[3:], struct_unpack(' 0: digest = _digest_buffer(abuffer[0:limit]) else: digest = _digest_buffer(abuffer) print(prefix + ': ' + digest) else: print(_digest_buffer(abuffer)) mysql-utilities-1.6.4/mysql/connector/constants.py0000644001577100752670000005475112717544565022143 0ustar pb2usercommon# MySQL Connector/Python - MySQL driver written in Python. # Copyright (c) 2009, 2014, Oracle and/or its affiliates. All rights reserved. # MySQL Connector/Python is licensed under the terms of the GPLv2 # , like most # MySQL Connectors. There are special exceptions to the terms and # conditions of the GPLv2 as it is applied to this software, see the # FOSS License Exception # . # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA """Various MySQL constants and character sets """ from .errors import ProgrammingError from .charsets import MYSQL_CHARACTER_SETS MAX_PACKET_LENGTH = 16777215 NET_BUFFER_LENGTH = 8192 MAX_MYSQL_TABLE_COLUMNS = 4096 DEFAULT_CONFIGURATION = { 'database': None, 'user': '', 'password': '', 'host': '127.0.0.1', 'port': 3306, 'unix_socket': None, 'use_unicode': True, 'charset': 'utf8', 'collation': None, 'converter_class': None, 'autocommit': False, 'time_zone': None, 'sql_mode': None, 'get_warnings': False, 'raise_on_warnings': False, 'connection_timeout': None, 'client_flags': 0, 'compress': False, 'buffered': False, 'raw': False, 'ssl_ca': None, 'ssl_cert': None, 'ssl_key': None, 'ssl_verify_cert': False, 'passwd': None, 'db': None, 'connect_timeout': None, 'dsn': None, 'force_ipv6': False, 'auth_plugin': None, 'allow_local_infile': True, 'consume_results': False, } CNX_POOL_ARGS = ('pool_name', 'pool_size', 'pool_reset_session') CNX_FABRIC_ARGS = ['fabric_host', 'fabric_username', 'fabric_password', 'fabric_port', 'fabric_connect_attempts', 'fabric_connect_delay', 'fabric_report_errors', 'fabric_ssl_ca', 'fabric_ssl_key', 'fabric_ssl_cert', 'fabric_user'] def flag_is_set(flag, flags): """Checks if the flag is set Returns boolean""" if (flags & flag) > 0: return True return False class _Constants(object): """ Base class for constants """ prefix = '' desc = {} def __new__(cls): raise TypeError("Can not instanciate from %s" % cls.__name__) @classmethod def get_desc(cls, name): """Get description of given constant""" try: return cls.desc[name][1] except: return None @classmethod def get_info(cls, num): """Get information about given constant""" for name, info in cls.desc.items(): if info[0] == num: return name return None @classmethod def get_full_info(cls): """get full information about given constant""" res = () try: res = ["%s : %s" % (k, v[1]) for k, v in cls.desc.items()] except Exception as err: # pylint: disable=W0703 res = ('No information found in constant class.%s' % err) return res class _Flags(_Constants): """Base class for classes describing flags """ @classmethod def get_bit_info(cls, value): """Get the name of all bits set Returns a list of strings.""" res = [] for name, info in cls.desc.items(): if value & info[0]: res.append(name) return res class FieldType(_Constants): """MySQL Field Types """ prefix = 'FIELD_TYPE_' DECIMAL = 0x00 TINY = 0x01 SHORT = 0x02 LONG = 0x03 FLOAT = 0x04 DOUBLE = 0x05 NULL = 0x06 TIMESTAMP = 0x07 LONGLONG = 0x08 INT24 = 0x09 DATE = 0x0a TIME = 0x0b DATETIME = 0x0c YEAR = 0x0d NEWDATE = 0x0e VARCHAR = 0x0f BIT = 0x10 NEWDECIMAL = 0xf6 ENUM = 0xf7 SET = 0xf8 TINY_BLOB = 0xf9 MEDIUM_BLOB = 0xfa LONG_BLOB = 0xfb BLOB = 0xfc VAR_STRING = 0xfd STRING = 0xfe GEOMETRY = 0xff desc = { 'DECIMAL': (0x00, 'DECIMAL'), 'TINY': (0x01, 'TINY'), 'SHORT': (0x02, 'SHORT'), 'LONG': (0x03, 'LONG'), 'FLOAT': (0x04, 'FLOAT'), 'DOUBLE': (0x05, 'DOUBLE'), 'NULL': (0x06, 'NULL'), 'TIMESTAMP': (0x07, 'TIMESTAMP'), 'LONGLONG': (0x08, 'LONGLONG'), 'INT24': (0x09, 'INT24'), 'DATE': (0x0a, 'DATE'), 'TIME': (0x0b, 'TIME'), 'DATETIME': (0x0c, 'DATETIME'), 'YEAR': (0x0d, 'YEAR'), 'NEWDATE': (0x0e, 'NEWDATE'), 'VARCHAR': (0x0f, 'VARCHAR'), 'BIT': (0x10, 'BIT'), 'NEWDECIMAL': (0xf6, 'NEWDECIMAL'), 'ENUM': (0xf7, 'ENUM'), 'SET': (0xf8, 'SET'), 'TINY_BLOB': (0xf9, 'TINY_BLOB'), 
'MEDIUM_BLOB': (0xfa, 'MEDIUM_BLOB'), 'LONG_BLOB': (0xfb, 'LONG_BLOB'), 'BLOB': (0xfc, 'BLOB'), 'VAR_STRING': (0xfd, 'VAR_STRING'), 'STRING': (0xfe, 'STRING'), 'GEOMETRY': (0xff, 'GEOMETRY'), } @classmethod def get_string_types(cls): """Get the list of all string types""" return [ cls.VARCHAR, cls.ENUM, cls.VAR_STRING, cls.STRING, ] @classmethod def get_binary_types(cls): """Get the list of all binary types""" return [ cls.TINY_BLOB, cls.MEDIUM_BLOB, cls.LONG_BLOB, cls.BLOB, ] @classmethod def get_number_types(cls): """Get the list of all number types""" return [ cls.DECIMAL, cls.NEWDECIMAL, cls.TINY, cls.SHORT, cls.LONG, cls.FLOAT, cls.DOUBLE, cls.LONGLONG, cls.INT24, cls.BIT, cls.YEAR, ] @classmethod def get_timestamp_types(cls): """Get the list of all timestamp types""" return [ cls.DATETIME, cls.TIMESTAMP, ] class FieldFlag(_Flags): """MySQL Field Flags Field flags as found in MySQL sources mysql-src/include/mysql_com.h """ _prefix = '' NOT_NULL = 1 << 0 PRI_KEY = 1 << 1 UNIQUE_KEY = 1 << 2 MULTIPLE_KEY = 1 << 3 BLOB = 1 << 4 UNSIGNED = 1 << 5 ZEROFILL = 1 << 6 BINARY = 1 << 7 ENUM = 1 << 8 AUTO_INCREMENT = 1 << 9 TIMESTAMP = 1 << 10 SET = 1 << 11 NO_DEFAULT_VALUE = 1 << 12 ON_UPDATE_NOW = 1 << 13 NUM = 1 << 14 PART_KEY = 1 << 15 GROUP = 1 << 14 # SAME AS NUM !!!!!!!???? UNIQUE = 1 << 16 BINCMP = 1 << 17 GET_FIXED_FIELDS = 1 << 18 FIELD_IN_PART_FUNC = 1 << 19 FIELD_IN_ADD_INDEX = 1 << 20 FIELD_IS_RENAMED = 1 << 21 desc = { 'NOT_NULL': (1 << 0, "Field can't be NULL"), 'PRI_KEY': (1 << 1, "Field is part of a primary key"), 'UNIQUE_KEY': (1 << 2, "Field is part of a unique key"), 'MULTIPLE_KEY': (1 << 3, "Field is part of a key"), 'BLOB': (1 << 4, "Field is a blob"), 'UNSIGNED': (1 << 5, "Field is unsigned"), 'ZEROFILL': (1 << 6, "Field is zerofill"), 'BINARY': (1 << 7, "Field is binary "), 'ENUM': (1 << 8, "field is an enum"), 'AUTO_INCREMENT': (1 << 9, "field is a autoincrement field"), 'TIMESTAMP': (1 << 10, "Field is a timestamp"), 'SET': (1 << 11, "field is a set"), 'NO_DEFAULT_VALUE': (1 << 12, "Field doesn't have default value"), 'ON_UPDATE_NOW': (1 << 13, "Field is set to NOW on UPDATE"), 'NUM': (1 << 14, "Field is num (for clients)"), 'PART_KEY': (1 << 15, "Intern; Part of some key"), 'GROUP': (1 << 14, "Intern: Group field"), # Same as NUM 'UNIQUE': (1 << 16, "Intern: Used by sql_yacc"), 'BINCMP': (1 << 17, "Intern: Used by sql_yacc"), 'GET_FIXED_FIELDS': (1 << 18, "Used to get fields in item tree"), 'FIELD_IN_PART_FUNC': (1 << 19, "Field part of partition func"), 'FIELD_IN_ADD_INDEX': (1 << 20, "Intern: Field used in ADD INDEX"), 'FIELD_IS_RENAMED': (1 << 21, "Intern: Field is being renamed"), } class ServerCmd(_Constants): """MySQL Server Commands """ _prefix = 'COM_' SLEEP = 0 QUIT = 1 INIT_DB = 2 QUERY = 3 FIELD_LIST = 4 CREATE_DB = 5 DROP_DB = 6 REFRESH = 7 SHUTDOWN = 8 STATISTICS = 9 PROCESS_INFO = 10 CONNECT = 11 PROCESS_KILL = 12 DEBUG = 13 PING = 14 TIME = 15 DELAYED_INSERT = 16 CHANGE_USER = 17 BINLOG_DUMP = 18 TABLE_DUMP = 19 CONNECT_OUT = 20 REGISTER_SLAVE = 21 STMT_PREPARE = 22 STMT_EXECUTE = 23 STMT_SEND_LONG_DATA = 24 STMT_CLOSE = 25 STMT_RESET = 26 SET_OPTION = 27 STMT_FETCH = 28 DAEMON = 29 BINLOG_DUMP_GTID = 30 RESET_CONNECTION = 31 desc = { 'SLEEP': (0, 'SLEEP'), 'QUIT': (1, 'QUIT'), 'INIT_DB': (2, 'INIT_DB'), 'QUERY': (3, 'QUERY'), 'FIELD_LIST': (4, 'FIELD_LIST'), 'CREATE_DB': (5, 'CREATE_DB'), 'DROP_DB': (6, 'DROP_DB'), 'REFRESH': (7, 'REFRESH'), 'SHUTDOWN': (8, 'SHUTDOWN'), 'STATISTICS': (9, 'STATISTICS'), 'PROCESS_INFO': (10, 'PROCESS_INFO'), 'CONNECT': 
(11, 'CONNECT'), 'PROCESS_KILL': (12, 'PROCESS_KILL'), 'DEBUG': (13, 'DEBUG'), 'PING': (14, 'PING'), 'TIME': (15, 'TIME'), 'DELAYED_INSERT': (16, 'DELAYED_INSERT'), 'CHANGE_USER': (17, 'CHANGE_USER'), 'BINLOG_DUMP': (18, 'BINLOG_DUMP'), 'TABLE_DUMP': (19, 'TABLE_DUMP'), 'CONNECT_OUT': (20, 'CONNECT_OUT'), 'REGISTER_SLAVE': (21, 'REGISTER_SLAVE'), 'STMT_PREPARE': (22, 'STMT_PREPARE'), 'STMT_EXECUTE': (23, 'STMT_EXECUTE'), 'STMT_SEND_LONG_DATA': (24, 'STMT_SEND_LONG_DATA'), 'STMT_CLOSE': (25, 'STMT_CLOSE'), 'STMT_RESET': (26, 'STMT_RESET'), 'SET_OPTION': (27, 'SET_OPTION'), 'STMT_FETCH': (28, 'STMT_FETCH'), 'DAEMON': (29, 'DAEMON'), 'BINLOG_DUMP_GTID': (30, 'BINLOG_DUMP_GTID'), 'RESET_CONNECTION': (31, 'RESET_CONNECTION'), } class ClientFlag(_Flags): """MySQL Client Flags Client options as found in the MySQL sources mysql-src/include/mysql_com.h """ LONG_PASSWD = 1 << 0 FOUND_ROWS = 1 << 1 LONG_FLAG = 1 << 2 CONNECT_WITH_DB = 1 << 3 NO_SCHEMA = 1 << 4 COMPRESS = 1 << 5 ODBC = 1 << 6 LOCAL_FILES = 1 << 7 IGNORE_SPACE = 1 << 8 PROTOCOL_41 = 1 << 9 INTERACTIVE = 1 << 10 SSL = 1 << 11 IGNORE_SIGPIPE = 1 << 12 TRANSACTIONS = 1 << 13 RESERVED = 1 << 14 SECURE_CONNECTION = 1 << 15 MULTI_STATEMENTS = 1 << 16 MULTI_RESULTS = 1 << 17 PS_MULTI_RESULTS = 1 << 18 PLUGIN_AUTH = 1 << 19 CONNECT_ARGS = 1 << 20 PLUGIN_AUTH_LENENC_CLIENT_DATA = 1 << 21 CAN_HANDLE_EXPIRED_PASSWORDS = 1 << 22 SSL_VERIFY_SERVER_CERT = 1 << 30 REMEMBER_OPTIONS = 1 << 31 desc = { 'LONG_PASSWD': (1 << 0, 'New more secure passwords'), 'FOUND_ROWS': (1 << 1, 'Found instead of affected rows'), 'LONG_FLAG': (1 << 2, 'Get all column flags'), 'CONNECT_WITH_DB': (1 << 3, 'One can specify db on connect'), 'NO_SCHEMA': (1 << 4, "Don't allow database.table.column"), 'COMPRESS': (1 << 5, 'Can use compression protocol'), 'ODBC': (1 << 6, 'ODBC client'), 'LOCAL_FILES': (1 << 7, 'Can use LOAD DATA LOCAL'), 'IGNORE_SPACE': (1 << 8, "Ignore spaces before ''"), 'PROTOCOL_41': (1 << 9, 'New 4.1 protocol'), 'INTERACTIVE': (1 << 10, 'This is an interactive client'), 'SSL': (1 << 11, 'Switch to SSL after handshake'), 'IGNORE_SIGPIPE': (1 << 12, 'IGNORE sigpipes'), 'TRANSACTIONS': (1 << 13, 'Client knows about transactions'), 'RESERVED': (1 << 14, 'Old flag for 4.1 protocol'), 'SECURE_CONNECTION': (1 << 15, 'New 4.1 authentication'), 'MULTI_STATEMENTS': (1 << 16, 'Enable/disable multi-stmt support'), 'MULTI_RESULTS': (1 << 17, 'Enable/disable multi-results'), 'SSL_VERIFY_SERVER_CERT': (1 << 30, ''), 'REMEMBER_OPTIONS': (1 << 31, ''), } default = [ LONG_PASSWD, LONG_FLAG, CONNECT_WITH_DB, PROTOCOL_41, TRANSACTIONS, SECURE_CONNECTION, MULTI_STATEMENTS, MULTI_RESULTS, LOCAL_FILES, ] @classmethod def get_default(cls): """Get the default client options set Returns a flag with all the default client options set""" flags = 0 for option in cls.default: flags |= option return flags class ServerFlag(_Flags): """MySQL Server Flags Server flags as found in the MySQL sources mysql-src/include/mysql_com.h """ _prefix = 'SERVER_' STATUS_IN_TRANS = 1 << 0 STATUS_AUTOCOMMIT = 1 << 1 MORE_RESULTS_EXISTS = 1 << 3 QUERY_NO_GOOD_INDEX_USED = 1 << 4 QUERY_NO_INDEX_USED = 1 << 5 STATUS_CURSOR_EXISTS = 1 << 6 STATUS_LAST_ROW_SENT = 1 << 7 STATUS_DB_DROPPED = 1 << 8 STATUS_NO_BACKSLASH_ESCAPES = 1 << 9 desc = { 'SERVER_STATUS_IN_TRANS': (1 << 0, 'Transaction has started'), 'SERVER_STATUS_AUTOCOMMIT': (1 << 1, 'Server in auto_commit mode'), 'SERVER_MORE_RESULTS_EXISTS': (1 << 3, 'Multi query - ' 'next query exists'), 'SERVER_QUERY_NO_GOOD_INDEX_USED': (1 << 4, ''), 
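        # Editor's note (illustrative, not in the original source): given a
        # status word of 0x0003, ServerFlag.get_bit_info(0x0003) returns the
        # two names ['SERVER_STATUS_IN_TRANS', 'SERVER_STATUS_AUTOCOMMIT']
        # (in dict order, which is arbitrary on Python 2.6/2.7).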
'SERVER_QUERY_NO_INDEX_USED': (1 << 5, ''), 'SERVER_STATUS_CURSOR_EXISTS': (1 << 6, ''), 'SERVER_STATUS_LAST_ROW_SENT': (1 << 7, ''), 'SERVER_STATUS_DB_DROPPED': (1 << 8, 'A database was dropped'), 'SERVER_STATUS_NO_BACKSLASH_ESCAPES': (1 << 9, ''), } class RefreshOption(_Constants): """MySQL Refresh command options Options used when sending the COM_REFRESH server command. """ _prefix = 'REFRESH_' GRANT = 1 << 0 LOG = 1 << 1 TABLES = 1 << 2 HOST = 1 << 3 STATUS = 1 << 4 THREADS = 1 << 5 SLAVE = 1 << 6 desc = { 'GRANT': (1 << 0, 'Refresh grant tables'), 'LOG': (1 << 1, 'Start on new log file'), 'TABLES': (1 << 2, 'close all tables'), 'HOSTS': (1 << 3, 'Flush host cache'), 'STATUS': (1 << 4, 'Flush status variables'), 'THREADS': (1 << 5, 'Flush thread cache'), 'SLAVE': (1 << 6, 'Reset master info and restart slave thread'), } class ShutdownType(_Constants): """MySQL Shutdown types Shutdown types used by the COM_SHUTDOWN server command. """ _prefix = '' SHUTDOWN_DEFAULT = 0 SHUTDOWN_WAIT_CONNECTIONS = 1 SHUTDOWN_WAIT_TRANSACTIONS = 2 SHUTDOWN_WAIT_UPDATES = 8 SHUTDOWN_WAIT_ALL_BUFFERS = 16 SHUTDOWN_WAIT_CRITICAL_BUFFERS = 17 KILL_QUERY = 254 KILL_CONNECTION = 255 desc = { 'SHUTDOWN_DEFAULT': ( SHUTDOWN_DEFAULT, "defaults to SHUTDOWN_WAIT_ALL_BUFFERS"), 'SHUTDOWN_WAIT_CONNECTIONS': ( SHUTDOWN_WAIT_CONNECTIONS, "wait for existing connections to finish"), 'SHUTDOWN_WAIT_TRANSACTIONS': ( SHUTDOWN_WAIT_TRANSACTIONS, "wait for existing trans to finish"), 'SHUTDOWN_WAIT_UPDATES': ( SHUTDOWN_WAIT_UPDATES, "wait for existing updates to finish"), 'SHUTDOWN_WAIT_ALL_BUFFERS': ( SHUTDOWN_WAIT_ALL_BUFFERS, "flush InnoDB and other storage engine buffers"), 'SHUTDOWN_WAIT_CRITICAL_BUFFERS': ( SHUTDOWN_WAIT_CRITICAL_BUFFERS, "don't flush InnoDB buffers, " "flush other storage engines' buffers"), 'KILL_QUERY': ( KILL_QUERY, "(no description)"), 'KILL_CONNECTION': ( KILL_CONNECTION, "(no description)"), } class CharacterSet(_Constants): """MySQL supported character sets and collations List of character sets with their collations supported by MySQL. This maps to the character set we get from the server within the handshake packet. The list is hardcode so we avoid a database query when getting the name of the used character set or collation. """ desc = MYSQL_CHARACTER_SETS # Multi-byte character sets which use 5c (backslash) in characters slash_charsets = (1, 13, 28, 84, 87, 88) @classmethod def get_info(cls, setid): """Retrieves character set information as tuple using an ID Retrieves character set and collation information based on the given MySQL ID. Raises ProgrammingError when character set is not supported. Returns a tuple. """ try: return cls.desc[setid][0:2] except IndexError: raise ProgrammingError( "Character set '{0}' unsupported".format(setid)) @classmethod def get_desc(cls, setid): """Retrieves character set information as string using an ID Retrieves character set and collation information based on the given MySQL ID. Returns a tuple. """ try: return "%s/%s" % cls.get_info(setid) except: raise @classmethod def get_default_collation(cls, charset): """Retrieves the default collation for given character set Raises ProgrammingError when character set is not supported. Returns list (collation, charset, index) """ if isinstance(charset, int): try: info = cls.desc[charset] return info[1], info[0], charset except: ProgrammingError("Character set ID '%s' unsupported." 
% ( charset)) for cid, info in enumerate(cls.desc): if info is None: continue if info[0] == charset and info[2] is True: return info[1], info[0], cid raise ProgrammingError("Character set '%s' unsupported." % (charset)) @classmethod def get_charset_info(cls, charset=None, collation=None): """Get character set information using charset name and/or collation Retrieves character set and collation information given character set name and/or a collation name. If charset is an integer, it will look up the character set based on the MySQL's ID. For example: get_charset_info('utf8',None) get_charset_info(collation='utf8_general_ci') get_charset_info(47) Raises ProgrammingError when character set is not supported. Returns a tuple with (id, characterset name, collation) """ if isinstance(charset, int): try: info = cls.desc[charset] return (charset, info[0], info[1]) except IndexError: ProgrammingError("Character set ID {0} unknown.".format( charset)) if charset is not None and collation is None: info = cls.get_default_collation(charset) return (info[2], info[1], info[0]) elif charset is None and collation is not None: for cid, info in enumerate(cls.desc): if info is None: continue if collation == info[1]: return (cid, info[0], info[1]) raise ProgrammingError("Collation '{0}' unknown.".format(collation)) else: for cid, info in enumerate(cls.desc): if info is None: continue if info[0] == charset and info[1] == collation: return (cid, info[0], info[1]) raise ProgrammingError("Character set '{0}' unknown.".format( charset)) @classmethod def get_supported(cls): """Retrieves a list with names of all supproted character sets Returns a tuple. """ res = [] for info in cls.desc: if info and info[0] not in res: res.append(info[0]) return tuple(res) class SQLMode(_Constants): # pylint: disable=R0921 """MySQL SQL Modes The numeric values of SQL Modes are not interesting, only the names are used when setting the SQL_MODE system variable using the MySQL SET command. See http://dev.mysql.com/doc/refman/5.6/en/server-sql-mode.html """ _prefix = 'MODE_' REAL_AS_FLOAT = 'REAL_AS_FLOAT' PIPES_AS_CONCAT = 'PIPES_AS_CONCAT' ANSI_QUOTES = 'ANSI_QUOTES' IGNORE_SPACE = 'IGNORE_SPACE' NOT_USED = 'NOT_USED' ONLY_FULL_GROUP_BY = 'ONLY_FULL_GROUP_BY' NO_UNSIGNED_SUBTRACTION = 'NO_UNSIGNED_SUBTRACTION' NO_DIR_IN_CREATE = 'NO_DIR_IN_CREATE' POSTGRESQL = 'POSTGRESQL' ORACLE = 'ORACLE' MSSQL = 'MSSQL' DB2 = 'DB2' MAXDB = 'MAXDB' NO_KEY_OPTIONS = 'NO_KEY_OPTIONS' NO_TABLE_OPTIONS = 'NO_TABLE_OPTIONS' NO_FIELD_OPTIONS = 'NO_FIELD_OPTIONS' MYSQL323 = 'MYSQL323' MYSQL40 = 'MYSQL40' ANSI = 'ANSI' NO_AUTO_VALUE_ON_ZERO = 'NO_AUTO_VALUE_ON_ZERO' NO_BACKSLASH_ESCAPES = 'NO_BACKSLASH_ESCAPES' STRICT_TRANS_TABLES = 'STRICT_TRANS_TABLES' STRICT_ALL_TABLES = 'STRICT_ALL_TABLES' NO_ZERO_IN_DATE = 'NO_ZERO_IN_DATE' NO_ZERO_DATE = 'NO_ZERO_DATE' INVALID_DATES = 'INVALID_DATES' ERROR_FOR_DIVISION_BY_ZERO = 'ERROR_FOR_DIVISION_BY_ZERO' TRADITIONAL = 'TRADITIONAL' NO_AUTO_CREATE_USER = 'NO_AUTO_CREATE_USER' HIGH_NOT_PRECEDENCE = 'HIGH_NOT_PRECEDENCE' NO_ENGINE_SUBSTITUTION = 'NO_ENGINE_SUBSTITUTION' PAD_CHAR_TO_FULL_LENGTH = 'PAD_CHAR_TO_FULL_LENGTH' @classmethod def get_desc(cls, name): raise NotImplementedError @classmethod def get_info(cls, number): raise NotImplementedError @classmethod def get_full_info(cls): """Returns a sequence of all available SQL Modes This class method returns a tuple containing all SQL Mode names. The names will be alphabetically sorted. Returns a tuple. 
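        Illustrative example (editor's note; assumes the mode names defined
        above, sorted alphabetically):

            >>> SQLMode.get_full_info()[0:3]
            ('ANSI', 'ANSI_QUOTES', 'DB2')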
""" res = [] for key in vars(cls).keys(): if not key.startswith('_') \ and not hasattr(getattr(cls, key), '__call__'): res.append(key) return tuple(sorted(res)) mysql-utilities-1.6.4/mysql/connector/django/0000755001577100752670000000000012747674052021001 5ustar pb2usercommonmysql-utilities-1.6.4/mysql/connector/django/introspection.py0000644001577100752670000003214712717544565024264 0ustar pb2usercommon# MySQL Connector/Python - MySQL driver written in Python. import re from collections import namedtuple import django if django.VERSION >= (1, 8): from django.db.backends.base.introspection import ( BaseDatabaseIntrospection, FieldInfo, TableInfo ) else: from django.db.backends import BaseDatabaseIntrospection if django.VERSION >= (1, 6): if django.VERSION < (1, 8): from django.db.backends import FieldInfo from django.utils.encoding import force_text if django.VERSION >= (1, 7): from django.utils.datastructures import OrderedSet from mysql.connector.constants import FieldType foreign_key_re = re.compile(r"\sCONSTRAINT `[^`]*` FOREIGN KEY \(`([^`]*)`\) " r"REFERENCES `([^`]*)` \(`([^`]*)`\)") if django.VERSION >= (1, 8): FieldInfo = namedtuple('FieldInfo', FieldInfo._fields + ('extra',)) class DatabaseIntrospection(BaseDatabaseIntrospection): data_types_reverse = { FieldType.BLOB: 'TextField', FieldType.DECIMAL: 'DecimalField', FieldType.NEWDECIMAL: 'DecimalField', FieldType.DATE: 'DateField', FieldType.DATETIME: 'DateTimeField', FieldType.DOUBLE: 'FloatField', FieldType.FLOAT: 'FloatField', FieldType.INT24: 'IntegerField', FieldType.LONG: 'IntegerField', FieldType.LONGLONG: 'BigIntegerField', FieldType.SHORT: ( 'IntegerField' if django.VERSION < (1, 8) else 'SmallIntegerField' ), FieldType.STRING: 'CharField', FieldType.TIME: 'TimeField', FieldType.TIMESTAMP: 'DateTimeField', FieldType.TINY: 'IntegerField', FieldType.TINY_BLOB: 'TextField', FieldType.MEDIUM_BLOB: 'TextField', FieldType.LONG_BLOB: 'TextField', FieldType.VAR_STRING: 'CharField', } def get_field_type(self, data_type, description): field_type = super(DatabaseIntrospection, self).get_field_type( data_type, description) if (field_type == 'IntegerField' and 'auto_increment' in description.extra): return 'AutoField' return field_type def get_table_list(self, cursor): """Returns a list of table names in the current database.""" cursor.execute("SHOW FULL TABLES") if django.VERSION >= (1, 8): return [ TableInfo(row[0], {'BASE TABLE': 't', 'VIEW': 'v'}.get(row[1])) for row in cursor.fetchall() ] else: return [row[0] for row in cursor.fetchall()] if django.VERSION >= (1, 8): def get_table_description(self, cursor, table_name): """ Returns a description of the table, with the DB-API cursor.description interface." 
""" # - information_schema database gives more accurate results for # some figures: # - varchar length returned by cursor.description is an internal # length, not visible length (#5725) # - precision and scale (for decimal fields) (#5014) # - auto_increment is not available in cursor.description InfoLine = namedtuple( 'InfoLine', 'col_name data_type max_len num_prec num_scale extra' ) cursor.execute(""" SELECT column_name, data_type, character_maximum_length, numeric_precision, numeric_scale, extra FROM information_schema.columns WHERE table_name = %s AND table_schema = DATABASE()""", [table_name]) field_info = dict( (line[0], InfoLine(*line)) for line in cursor.fetchall() ) cursor.execute("SELECT * FROM %s LIMIT 1" % self.connection.ops.quote_name(table_name)) to_int = lambda i: int(i) if i is not None else i fields = [] for line in cursor.description: col_name = force_text(line[0]) fields.append( FieldInfo(*((col_name,) + line[1:3] + (to_int(field_info[col_name].max_len) or line[3], to_int(field_info[col_name].num_prec) or line[4], to_int(field_info[col_name].num_scale) or line[5]) + (line[6],) + (field_info[col_name].extra,))) ) return fields else: def get_table_description(self, cursor, table_name): """ Returns a description of the table, with the DB-API cursor.description interface. """ # varchar length returned by cursor.description is an internal # length not visible length (#5725), use information_schema database # to fix this cursor.execute( "SELECT column_name, character_maximum_length " "FROM INFORMATION_SCHEMA.COLUMNS " "WHERE table_name = %s AND table_schema = DATABASE() " "AND character_maximum_length IS NOT NULL", [table_name]) length_map = dict(cursor.fetchall()) # Also getting precision and scale from # information_schema (see #5014) cursor.execute( "SELECT column_name, numeric_precision, numeric_scale FROM " "INFORMATION_SCHEMA.COLUMNS WHERE table_name = %s AND " "table_schema = DATABASE() AND data_type='decimal'", [table_name]) numeric_map = dict((line[0], tuple([int(n) for n in line[1:]])) for line in cursor.fetchall()) cursor.execute("SELECT * FROM {0} LIMIT 1".format( self.connection.ops.quote_name(table_name))) if django.VERSION >= (1, 6): return [FieldInfo(*((force_text(line[0]),) + line[1:3] + (length_map.get(line[0], line[3]),) + numeric_map.get(line[0], line[4:6]) + (line[6],))) for line in cursor.description] else: return [ line[:3] + (length_map.get(line[0], line[3]),) + line[4:] for line in cursor.description ] def _name_to_index(self, cursor, table_name): """ Returns a dictionary of {field_name: field_index} for the given table. Indexes are 0-based. """ return dict((d[0], i) for i, d in enumerate( self.get_table_description(cursor, table_name))) def get_relations(self, cursor, table_name): """ Returns a dictionary of {field_index: (field_index_other_table, other_table)} representing all relationships to the given table. Indexes are 0-based. 
""" constraints = self.get_key_columns(cursor, table_name) relations = {} if django.VERSION >= (1, 8): for my_fieldname, other_table, other_field in constraints: relations[my_fieldname] = (other_field, other_table) return relations else: my_field_dict = self._name_to_index(cursor, table_name) for my_fieldname, other_table, other_field in constraints: other_field_index = self._name_to_index( cursor, other_table)[other_field] my_field_index = my_field_dict[my_fieldname] relations[my_field_index] = (other_field_index, other_table) return relations def get_key_columns(self, cursor, table_name): """ Returns a list of (column_name, referenced_table_name, referenced_column_name) for all key columns in given table. """ key_columns = [] cursor.execute( "SELECT column_name, referenced_table_name, referenced_column_name " "FROM information_schema.key_column_usage " "WHERE table_name = %s " "AND table_schema = DATABASE() " "AND referenced_table_name IS NOT NULL " "AND referenced_column_name IS NOT NULL", [table_name]) key_columns.extend(cursor.fetchall()) return key_columns def get_indexes(self, cursor, table_name): cursor.execute("SHOW INDEX FROM {0}" "".format(self.connection.ops.quote_name(table_name))) # Do a two-pass search for indexes: on first pass check which indexes # are multicolumn, on second pass check which single-column indexes # are present. rows = list(cursor.fetchall()) multicol_indexes = set() for row in rows: if row[3] > 1: multicol_indexes.add(row[2]) indexes = {} for row in rows: if row[2] in multicol_indexes: continue if row[4] not in indexes: indexes[row[4]] = {'primary_key': False, 'unique': False} # It's possible to have the unique and PK constraints in # separate indexes. if row[2] == 'PRIMARY': indexes[row[4]]['primary_key'] = True if not row[1]: indexes[row[4]]['unique'] = True return indexes def get_primary_key_column(self, cursor, table_name): """ Returns the name of the primary key column for the given table """ # Django 1.6 for column in self.get_indexes(cursor, table_name).items(): if column[1]['primary_key']: return column[0] return None def get_storage_engine(self, cursor, table_name): """ Retrieves the storage engine for a given table. Returns the default storage engine if the table doesn't exist. """ cursor.execute( "SELECT engine " "FROM information_schema.tables " "WHERE table_name = %s", [table_name]) result = cursor.fetchone() if not result: return self.connection.features.mysql_storage_engine return result[0] def get_constraints(self, cursor, table_name): """ Retrieves any constraints or keys (unique, pk, fk, check, index) across one or more columns. 
""" # Django 1.7 constraints = {} # Get the actual constraint names and columns name_query = ( "SELECT kc.`constraint_name`, kc.`column_name`, " "kc.`referenced_table_name`, kc.`referenced_column_name` " "FROM information_schema.key_column_usage AS kc " "WHERE " "kc.table_schema = %s AND " "kc.table_name = %s" ) cursor.execute(name_query, [self.connection.settings_dict['NAME'], table_name]) for constraint, column, ref_table, ref_column in cursor.fetchall(): if constraint not in constraints: constraints[constraint] = { 'columns': OrderedSet(), 'primary_key': False, 'unique': False, 'index': False, 'check': False, 'foreign_key': ( (ref_table, ref_column) if ref_column else None, ) } constraints[constraint]['columns'].add(column) # Now get the constraint types type_query = """ SELECT c.constraint_name, c.constraint_type FROM information_schema.table_constraints AS c WHERE c.table_schema = %s AND c.table_name = %s """ cursor.execute(type_query, [self.connection.settings_dict['NAME'], table_name]) for constraint, kind in cursor.fetchall(): if kind.lower() == "primary key": constraints[constraint]['primary_key'] = True constraints[constraint]['unique'] = True elif kind.lower() == "unique": constraints[constraint]['unique'] = True # Now add in the indexes cursor.execute("SHOW INDEX FROM %s" % self.connection.ops.quote_name( table_name)) for table, non_unique, index, colseq, column in [x[:5] for x in cursor.fetchall()]: if index not in constraints: constraints[index] = { 'columns': OrderedSet(), 'primary_key': False, 'unique': False, 'index': True, 'check': False, 'foreign_key': None, } constraints[index]['index'] = True constraints[index]['columns'].add(column) # Convert the sorted sets to lists for constraint in constraints.values(): constraint['columns'] = list(constraint['columns']) return constraints mysql-utilities-1.6.4/mysql/connector/django/validation.py0000644001577100752670000000505512717544565023514 0ustar pb2usercommon# MySQL Connector/Python - MySQL driver written in Python. import django if django.VERSION >= (1, 8): from django.db.backends.base.validation import BaseDatabaseValidation else: from django.db.backends import BaseDatabaseValidation if django.VERSION < (1, 7): from django.db import models else: from django.core import checks from django.db import connection class DatabaseValidation(BaseDatabaseValidation): if django.VERSION < (1, 7): def validate_field(self, errors, opts, f): """ MySQL has the following field length restriction: No character (varchar) fields can have a length exceeding 255 characters if they have a unique index on them. """ varchar_fields = (models.CharField, models.CommaSeparatedIntegerField, models.SlugField) if isinstance(f, varchar_fields) and f.max_length > 255 and f.unique: msg = ('"%(name)s": %(cls)s cannot have a "max_length" greater ' 'than 255 when using "unique=True".') errors.add(opts, msg % {'name': f.name, 'cls': f.__class__.__name__}) else: def check_field(self, field, **kwargs): """ MySQL has the following field length restriction: No character (varchar) fields can have a length exceeding 255 characters if they have a unique index on them. """ # Django 1.7 errors = super(DatabaseValidation, self).check_field(field, **kwargs) # Ignore any related fields. if getattr(field, 'rel', None) is None: field_type = field.db_type(connection) if field_type is None: return errors if (field_type.startswith('varchar') # Look for CharFields... and field.unique # ... 
that are unique and (field.max_length is None or int(field.max_length) > 255)): errors.append( checks.Error( ('MySQL does not allow unique CharFields to have a ' 'max_length > 255.'), hint=None, obj=field, id='mysql.E001', ) ) return errors mysql-utilities-1.6.4/mysql/connector/django/__init__.py0000644001577100752670000000000012717544565023102 0ustar pb2usercommonmysql-utilities-1.6.4/mysql/connector/django/schema.py0000644001577100752670000000706212717544565022622 0ustar pb2usercommon# MySQL Connector/Python - MySQL driver written in Python. # New file added for Django 1.7 import django if django.VERSION >= (1, 8): from django.db.backends.base.schema import BaseDatabaseSchemaEditor else: from django.db.backends.schema import BaseDatabaseSchemaEditor from django.db.models import NOT_PROVIDED class DatabaseSchemaEditor(BaseDatabaseSchemaEditor): sql_rename_table = "RENAME TABLE %(old_table)s TO %(new_table)s" sql_alter_column_null = "MODIFY %(column)s %(type)s NULL" sql_alter_column_not_null = "MODIFY %(column)s %(type)s NOT NULL" sql_alter_column_type = "MODIFY %(column)s %(type)s" sql_rename_column = "ALTER TABLE %(table)s CHANGE %(old_column)s " \ "%(new_column)s %(type)s" sql_delete_unique = "ALTER TABLE %(table)s DROP INDEX %(name)s" sql_create_fk = "ALTER TABLE %(table)s ADD CONSTRAINT %(name)s FOREIGN " \ "KEY (%(column)s) REFERENCES %(to_table)s (%(to_column)s)" sql_delete_fk = "ALTER TABLE %(table)s DROP FOREIGN KEY %(name)s" sql_delete_index = "DROP INDEX %(name)s ON %(table)s" alter_string_set_null = 'MODIFY %(column)s %(type)s NULL;' alter_string_drop_null = 'MODIFY %(column)s %(type)s NOT NULL;' sql_create_pk = "ALTER TABLE %(table)s ADD CONSTRAINT %(name)s " \ "PRIMARY KEY (%(columns)s)" sql_delete_pk = "ALTER TABLE %(table)s DROP PRIMARY KEY" def quote_value(self, value): # Inner import to allow module to fail to load gracefully from mysql.connector.conversion import MySQLConverter return MySQLConverter.quote(MySQLConverter.escape(value)) def skip_default(self, field): """ MySQL doesn't accept default values for longtext and longblob and implicitly treats these columns as nullable. """ return field.db_type(self.connection) in ('longtext', 'longblob') def add_field(self, model, field): super(DatabaseSchemaEditor, self).add_field(model, field) # Simulate the effect of a one-off default. 
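        # Editor's note (illustrative, names invented): for a TextField
        # "body" added with default='x' to table app_note, MySQL ignores the
        # declared default (longtext), so the code below back-fills existing
        # rows with:
        #   UPDATE `app_note` SET `body` = %s   -- params: ['x']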
if (self.skip_default(field) and field.default not in (None, NOT_PROVIDED)): effective_default = self.effective_default(field) self.execute('UPDATE %(table)s SET %(column)s = %%s' % { 'table': self.quote_name(model._meta.db_table), 'column': self.quote_name(field.column), }, [effective_default]) def _model_indexes_sql(self, model): # New in Django 1.8 storage = self.connection.introspection.get_storage_engine( self.connection.cursor(), model._meta.db_table ) if storage == "InnoDB": for field in model._meta.local_fields: if (field.db_index and not field.unique and field.get_internal_type() == "ForeignKey"): # Temporary setting db_index to False (in memory) to # disable index creation for FKs (index automatically # created by MySQL) field.db_index = False return super(DatabaseSchemaEditor, self)._model_indexes_sql(model) def _alter_column_type_sql(self, table, old_field, new_field, new_type): # New in Django 1.8 # Keep null property of old field, if it has changed, it will be # handled separately if old_field.null: new_type += " NULL" else: new_type += " NOT NULL" return super(DatabaseSchemaEditor, self)._alter_column_type_sql( table, old_field, new_field, new_type) mysql-utilities-1.6.4/mysql/connector/django/client.py0000644001577100752670000000353212717544565022636 0ustar pb2usercommon# MySQL Connector/Python - MySQL driver written in Python. import django import subprocess if django.VERSION >= (1, 8): from django.db.backends.base.client import BaseDatabaseClient else: from django.db.backends import BaseDatabaseClient class DatabaseClient(BaseDatabaseClient): executable_name = 'mysql' @classmethod def settings_to_cmd_args(cls, settings_dict): args = [cls.executable_name] db = settings_dict['OPTIONS'].get('database', settings_dict['NAME']) user = settings_dict['OPTIONS'].get('user', settings_dict['USER']) passwd = settings_dict['OPTIONS'].get('password', settings_dict['PASSWORD']) host = settings_dict['OPTIONS'].get('host', settings_dict['HOST']) port = settings_dict['OPTIONS'].get('port', settings_dict['PORT']) defaults_file = settings_dict['OPTIONS'].get('read_default_file') # --defaults-file should always be the first option if defaults_file: args.append("--defaults-file={0}".format(defaults_file)) # We force SQL_MODE to TRADITIONAL args.append("--init-command=SET @@session.SQL_MODE=TRADITIONAL") if user: args.append("--user={0}".format(user)) if passwd: args.append("--password={0}".format(passwd)) if host: if '/' in host: args.append("--socket={0}".format(host)) else: args.append("--host={0}".format(host)) if port: args.append("--port={0}".format(port)) if db: args.append("--database={0}".format(db)) return args def runshell(self): args = DatabaseClient.settings_to_cmd_args( self.connection.settings_dict) subprocess.call(args) mysql-utilities-1.6.4/mysql/connector/django/operations.py0000644001577100752670000002702112717544565023542 0ustar pb2usercommon# MySQL Connector/Python - MySQL driver written in Python. 
# New file added for Django 1.8 from __future__ import unicode_literals import uuid import django from django.conf import settings if django.VERSION >= (1, 8): from django.db.backends.base.operations import BaseDatabaseOperations else: from django.db.backends import BaseDatabaseOperations from django.utils import six, timezone from django.utils.encoding import force_text try: from _mysql_connector import datetime_to_mysql, time_to_mysql except ImportError: HAVE_CEXT = False else: HAVE_CEXT = True class DatabaseOperations(BaseDatabaseOperations): compiler_module = "mysql.connector.django.compiler" # MySQL stores positive fields as UNSIGNED ints. if django.VERSION >= (1, 7): integer_field_ranges = dict(BaseDatabaseOperations.integer_field_ranges, PositiveSmallIntegerField=(0, 4294967295), PositiveIntegerField=( 0, 18446744073709551615),) def date_extract_sql(self, lookup_type, field_name): # http://dev.mysql.com/doc/mysql/en/date-and-time-functions.html if lookup_type == 'week_day': # DAYOFWEEK() returns an integer, 1-7, Sunday=1. # Note: WEEKDAY() returns 0-6, Monday=0. return "DAYOFWEEK({0})".format(field_name) else: return "EXTRACT({0} FROM {1})".format( lookup_type.upper(), field_name) def date_trunc_sql(self, lookup_type, field_name): """Returns SQL simulating DATE_TRUNC This function uses MySQL functions DATE_FORMAT and CAST to simulate DATE_TRUNC. The field_name is returned when lookup_type is not supported. """ fields = ['year', 'month', 'day', 'hour', 'minute', 'second'] format = ('%Y-', '%m', '-%d', ' %H:', '%i', ':%S') format_def = ('0000-', '01', '-01', ' 00:', '00', ':00') try: i = fields.index(lookup_type) + 1 except ValueError: # Wrong lookup type, just return the value from MySQL as-is sql = field_name else: format_str = ''.join([f for f in format[:i]] + [f for f in format_def[i:]]) sql = "CAST(DATE_FORMAT({0}, '{1}') AS DATETIME)".format( field_name, format_str) return sql def datetime_extract_sql(self, lookup_type, field_name, tzname): # Django 1.6 if settings.USE_TZ: field_name = "CONVERT_TZ({0}, 'UTC', %s)".format(field_name) params = [tzname] else: params = [] # http://dev.mysql.com/doc/mysql/en/date-and-time-functions.html if lookup_type == 'week_day': # DAYOFWEEK() returns an integer, 1-7, Sunday=1. # Note: WEEKDAY() returns 0-6, Monday=0. 
sql = "DAYOFWEEK({0})".format(field_name) else: sql = "EXTRACT({0} FROM {1})".format(lookup_type.upper(), field_name) return sql, params def datetime_trunc_sql(self, lookup_type, field_name, tzname): # Django 1.6 if settings.USE_TZ: field_name = "CONVERT_TZ({0}, 'UTC', %s)".format(field_name) params = [tzname] else: params = [] fields = ['year', 'month', 'day', 'hour', 'minute', 'second'] format_ = ('%Y-', '%m', '-%d', ' %H:', '%i', ':%S') format_def = ('0000-', '01', '-01', ' 00:', '00', ':00') try: i = fields.index(lookup_type) + 1 except ValueError: sql = field_name else: format_str = ''.join([f for f in format_[:i]] + [f for f in format_def[i:]]) sql = "CAST(DATE_FORMAT({0}, '{1}') AS DATETIME)".format( field_name, format_str) return sql, params if django.VERSION >= (1, 8): def date_interval_sql(self, timedelta): """Returns SQL for calculating date/time intervals """ return "INTERVAL '%d 0:0:%d:%d' DAY_MICROSECOND" % ( timedelta.days, timedelta.seconds, timedelta.microseconds), [] else: def date_interval_sql(self, sql, connector, timedelta): """Returns SQL for calculating date/time intervals """ fmt = ( "({sql} {connector} INTERVAL '{days} " "0:0:{secs}:{msecs}' DAY_MICROSECOND)" ) return fmt.format( sql=sql, connector=connector, days=timedelta.days, secs=timedelta.seconds, msecs=timedelta.microseconds ) def format_for_duration_arithmetic(self, sql): if self.connection.features.supports_microsecond_precision: return 'INTERVAL %s MICROSECOND' % sql else: return 'INTERVAL FLOOR(%s / 1000000) SECOND' % sql def drop_foreignkey_sql(self): return "DROP FOREIGN KEY" def force_no_ordering(self): """ "ORDER BY NULL" prevents MySQL from implicitly ordering by grouped columns. If no ordering would otherwise be applied, we don't want any implicit sorting going on. """ if django.VERSION >= (1, 8): return [(None, ("NULL", [], False))] else: return ["NULL"] def fulltext_search_sql(self, field_name): return 'MATCH ({0}) AGAINST (%s IN BOOLEAN MODE)'.format(field_name) def last_executed_query(self, cursor, sql, params): return force_text(cursor.statement, errors='replace') def no_limit_value(self): # 2**64 - 1, as recommended by the MySQL documentation return 18446744073709551615 def quote_name(self, name): if name.startswith("`") and name.endswith("`"): return name # Quoting once is enough. return "`{0}`".format(name) def random_function_sql(self): return 'RAND()' def sql_flush(self, style, tables, sequences, allow_cascade=False): if tables: sql = ['SET FOREIGN_KEY_CHECKS = 0;'] for table in tables: sql.append('{keyword} {table};'.format( keyword=style.SQL_KEYWORD('TRUNCATE'), table=style.SQL_FIELD(self.quote_name(table)))) sql.append('SET FOREIGN_KEY_CHECKS = 1;') sql.extend(self.sequence_reset_by_name_sql(style, sequences)) return sql else: return [] def validate_autopk_value(self, value): # MySQLism: zero in AUTO_INCREMENT field does not work. Refs #17653. if value == 0: raise ValueError('The database backend does not accept 0 as a ' 'value for AutoField.') return value def value_to_db_datetime(self, value): if value is None: return None # MySQL doesn't support tz-aware times if timezone.is_aware(value): if settings.USE_TZ: value = value.astimezone(timezone.utc).replace(tzinfo=None) else: raise ValueError( "MySQL backend does not support timezone-aware times." 
) if not self.connection.features.supports_microsecond_precision: value = value.replace(microsecond=0) if not self.connection.use_pure: return datetime_to_mysql(value) return self.connection.converter.to_mysql(value) def value_to_db_time(self, value): if value is None: return None # MySQL doesn't support tz-aware times if timezone.is_aware(value): raise ValueError("MySQL backend does not support timezone-aware " "times.") if not self.connection.use_pure: return time_to_mysql(value) return self.connection.converter.to_mysql(value) def max_name_length(self): return 64 def bulk_insert_sql(self, fields, num_values): items_sql = "({0})".format(", ".join(["%s"] * len(fields))) return "VALUES " + ", ".join([items_sql] * num_values) if django.VERSION < (1, 8): def year_lookup_bounds(self, value): # Again, no microseconds first = '{0}-01-01 00:00:00' second = '{0}-12-31 23:59:59.999999' return [first.format(value), second.format(value)] def year_lookup_bounds_for_datetime_field(self, value): # Django 1.6 # Again, no microseconds first, second = super(DatabaseOperations, self).year_lookup_bounds_for_datetime_field(value) if self.connection.mysql_version >= (5, 6, 4): return [first.replace(microsecond=0), second] else: return [first.replace(microsecond=0), second.replace(microsecond=0)] def sequence_reset_by_name_sql(self, style, sequences): # Truncate already resets the AUTO_INCREMENT field from # MySQL version 5.0.13 onwards. Refs #16961. res = [] if self.connection.mysql_version < (5, 0, 13): fmt = "{alter} {table} {{tablename}} {auto_inc} {field};".format( alter=style.SQL_KEYWORD('ALTER'), table=style.SQL_KEYWORD('TABLE'), auto_inc=style.SQL_KEYWORD('AUTO_INCREMENT'), field=style.SQL_FIELD('= 1') ) for sequence in sequences: tablename = style.SQL_TABLE(self.quote_name(sequence['table'])) res.append(fmt.format(tablename=tablename)) return res return res def savepoint_create_sql(self, sid): return "SAVEPOINT {0}".format(sid) def savepoint_commit_sql(self, sid): return "RELEASE SAVEPOINT {0}".format(sid) def savepoint_rollback_sql(self, sid): return "ROLLBACK TO SAVEPOINT {0}".format(sid) def combine_expression(self, connector, sub_expressions): """ MySQL requires special cases for ^ operators in query expressions """ if connector == '^': return 'POW(%s)' % ','.join(sub_expressions) return super(DatabaseOperations, self).combine_expression( connector, sub_expressions) def get_db_converters(self, expression): # New in Django 1.8 converters = super(DatabaseOperations, self).get_db_converters( expression) internal_type = expression.output_field.get_internal_type() if internal_type in ['BooleanField', 'NullBooleanField']: converters.append(self.convert_booleanfield_value) if internal_type == 'UUIDField': converters.append(self.convert_uuidfield_value) if internal_type == 'TextField': converters.append(self.convert_textfield_value) return converters def convert_booleanfield_value(self, value, expression, connection, context): # New in Django 1.8 if value in (0, 1): value = bool(value) return value def convert_uuidfield_value(self, value, expression, connection, context): # New in Django 1.8 if value is not None: value = uuid.UUID(value) return value def convert_textfield_value(self, value, expression, connection, context): # New in Django 1.8 if value is not None: value = force_text(value) return value mysql-utilities-1.6.4/mysql/connector/django/compiler.py0000644001577100752670000000372212717544565023173 0ustar pb2usercommon# MySQL Connector/Python - MySQL driver written in Python. 
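# Editor's note (illustrative, not in the original source): the
# resolve_columns() method below coerces raw 0/1 values for boolean
# columns, e.g. a row (1, 'abc', 0) described by fields
# (BooleanField, CharField, NullBooleanField) resolves to
# (True, 'abc', False).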
import django from django.db.models.sql import compiler from django.utils.six.moves import zip_longest class SQLCompiler(compiler.SQLCompiler): def resolve_columns(self, row, fields=()): values = [] index_extra_select = len(self.query.extra_select) bool_fields = ("BooleanField", "NullBooleanField") for value, field in zip_longest(row[index_extra_select:], fields): if (field and field.get_internal_type() in bool_fields and value in (0, 1)): value = bool(value) values.append(value) return row[:index_extra_select] + tuple(values) if django.VERSION >= (1, 8): def as_subquery_condition(self, alias, columns, compiler): qn = compiler.quote_name_unless_alias qn2 = self.connection.ops.quote_name sql, params = self.as_sql() return '(%s) IN (%s)' % (', '.join('%s.%s' % (qn(alias), qn2(column)) for column in columns), sql), params else: def as_subquery_condition(self, alias, columns, qn): # Django 1.6 qn2 = self.connection.ops.quote_name sql, params = self.as_sql() column_list = ', '.join( ['%s.%s' % (qn(alias), qn2(column)) for column in columns]) return '({0}) IN ({1})'.format(column_list, sql), params class SQLInsertCompiler(compiler.SQLInsertCompiler, SQLCompiler): pass class SQLDeleteCompiler(compiler.SQLDeleteCompiler, SQLCompiler): pass class SQLUpdateCompiler(compiler.SQLUpdateCompiler, SQLCompiler): pass class SQLAggregateCompiler(compiler.SQLAggregateCompiler, SQLCompiler): pass if django.VERSION < (1, 8): class SQLDateCompiler(compiler.SQLDateCompiler, SQLCompiler): pass if django.VERSION >= (1, 6): class SQLDateTimeCompiler(compiler.SQLDateTimeCompiler, SQLCompiler): # Django 1.6 pass mysql-utilities-1.6.4/mysql/connector/django/features.py0000644001577100752670000001035312717544565023175 0ustar pb2usercommon# MySQL Connector/Python - MySQL driver written in Python. # New file added for Django 1.8 import django if django.VERSION >= (1, 8): from django.db.backends.base.features import BaseDatabaseFeatures else: from django.db.backends import BaseDatabaseFeatures from django.utils.functional import cached_property from django.utils import six try: import pytz HAVE_PYTZ = True except ImportError: HAVE_PYTZ = False class DatabaseFeatures(BaseDatabaseFeatures): """Features specific to MySQL Microsecond precision is supported since MySQL 5.6.3 and turned on by default if this MySQL version is used. """ empty_fetchmany_value = [] update_can_self_select = False allows_group_by_pk = True related_fields_match_type = True allow_sliced_subqueries = False has_bulk_insert = True has_select_for_update = True has_select_for_update_nowait = False supports_forward_references = False supports_regex_backreferencing = False supports_date_lookup_using_string = False can_introspect_autofield = True can_introspect_binary_field = False can_introspect_small_integer_field = True supports_timezones = False requires_explicit_null_ordering_when_grouping = True allows_auto_pk_0 = False allows_primary_key_0 = False uses_savepoints = True atomic_transactions = False supports_column_check_constraints = False if django.VERSION < (1, 8): supports_long_model_names = False supports_binary_field = six.PY2 can_introspect_boolean_field = False def __init__(self, connection): super(DatabaseFeatures, self).__init__(connection) @cached_property def supports_microsecond_precision(self): if self.connection.mysql_version >= (5, 6, 3): return True return False @cached_property def mysql_storage_engine(self): """Get default storage engine of MySQL This method creates a table without ENGINE table option and inspects which engine was used. 
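        Illustratively (editor's note), on a stock MySQL 5.5+ server this
        resolves to 'InnoDB', the compiled-in default engine.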
Used by Django tests. """ tblname = 'INTROSPECT_TEST' droptable = 'DROP TABLE IF EXISTS {table}'.format(table=tblname) with self.connection.cursor() as cursor: cursor.execute(droptable) cursor.execute('CREATE TABLE {table} (X INT)'.format(table=tblname)) if self.connection.mysql_version >= (5, 0, 0): cursor.execute( "SELECT ENGINE FROM INFORMATION_SCHEMA.TABLES " "WHERE TABLE_SCHEMA = %s AND TABLE_NAME = %s", (self.connection.settings_dict['NAME'], tblname)) engine = cursor.fetchone()[0] else: # Very old MySQL servers.. cursor.execute("SHOW TABLE STATUS WHERE Name='{table}'".format( table=tblname)) engine = cursor.fetchone()[1] cursor.execute(droptable) self._cached_storage_engine = engine return engine @cached_property def _disabled_supports_transactions(self): return self.mysql_storage_engine == 'InnoDB' @cached_property def can_introspect_foreign_keys(self): """Confirm support for introspected foreign keys Only the InnoDB storage engine supports Foreigen Key (not taking into account MySQL Cluster here). """ return self.mysql_storage_engine == 'InnoDB' @cached_property def has_zoneinfo_database(self): """Tests if the time zone definitions are installed MySQL accepts full time zones names (eg. Africa/Nairobi) but rejects abbreviations (eg. EAT). When pytz isn't installed and the current time zone is LocalTimezone (the only sensible value in this context), the current time zone name will be an abbreviation. As a consequence, MySQL cannot perform time zone conversions reliably. """ # Django 1.6 if not HAVE_PYTZ: return False with self.connection.cursor() as cursor: cursor.execute("SELECT 1 FROM mysql.time_zone LIMIT 1") return cursor.fetchall() != [] def introspected_boolean_field_type(self, *args, **kwargs): # New in Django 1.8 return 'IntegerField' mysql-utilities-1.6.4/mysql/connector/django/creation.py0000644001577100752670000001321112717544565023157 0ustar pb2usercommon# MySQL Connector/Python - MySQL driver written in Python. 
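# Editor's note (illustrative, not in the original source): the data_types
# mapping below is interpolated against Field.__dict__, e.g.
# models.CharField(max_length=150) renders as varchar(150); with MySQL
# >= 5.6.4, DateTimeField/TimeField gain microsecond precision as
# datetime(6)/time(6).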
import django from django.db import models if django.VERSION >= (1, 8): from django.db.backends.base.creation import BaseDatabaseCreation else: from django.db.backends.creation import BaseDatabaseCreation if django.VERSION < (1, 7): from django.db.backends.util import truncate_name else: from django.db.backends.utils import truncate_name class DatabaseCreation(BaseDatabaseCreation): """Maps Django Field object with MySQL data types """ def __init__(self, connection): super(DatabaseCreation, self).__init__(connection) if django.VERSION < (1, 8): self.data_types = { 'AutoField': 'integer AUTO_INCREMENT', 'BinaryField': 'longblob', 'BooleanField': 'bool', 'CharField': 'varchar(%(max_length)s)', 'CommaSeparatedIntegerField': 'varchar(%(max_length)s)', 'DateField': 'date', 'DateTimeField': 'datetime', # ms support set later 'DecimalField': 'numeric(%(max_digits)s, %(decimal_places)s)', 'FileField': 'varchar(%(max_length)s)', 'FilePathField': 'varchar(%(max_length)s)', 'FloatField': 'double precision', 'IntegerField': 'integer', 'BigIntegerField': 'bigint', 'IPAddressField': 'char(15)', 'GenericIPAddressField': 'char(39)', 'NullBooleanField': 'bool', 'OneToOneField': 'integer', 'PositiveIntegerField': 'integer UNSIGNED', 'PositiveSmallIntegerField': 'smallint UNSIGNED', 'SlugField': 'varchar(%(max_length)s)', 'SmallIntegerField': 'smallint', 'TextField': 'longtext', 'TimeField': 'time', # ms support set later } # Support for microseconds if self.connection.mysql_version >= (5, 6, 4): self.data_types.update({ 'DateTimeField': 'datetime(6)', 'TimeField': 'time(6)', }) def sql_table_creation_suffix(self): suffix = [] if django.VERSION < (1, 7): if self.connection.settings_dict['TEST_CHARSET']: suffix.append('CHARACTER SET {0}'.format( self.connection.settings_dict['TEST_CHARSET'])) if self.connection.settings_dict['TEST_COLLATION']: suffix.append('COLLATE {0}'.format( self.connection.settings_dict['TEST_COLLATION'])) else: test_settings = self.connection.settings_dict['TEST'] if test_settings['CHARSET']: suffix.append('CHARACTER SET %s' % test_settings['CHARSET']) if test_settings['COLLATION']: suffix.append('COLLATE %s' % test_settings['COLLATION']) return ' '.join(suffix) if django.VERSION < (1, 6): def sql_for_inline_foreign_key_references(self, field, known_models, style): "All inline references are pending under MySQL" return [], True else: def sql_for_inline_foreign_key_references(self, model, field, known_models, style): "All inline references are pending under MySQL" return [], True def sql_for_inline_many_to_many_references(self, model, field, style): opts = model._meta qn = self.connection.ops.quote_name columndef = ' {column} {type} {options},' table_output = [ columndef.format( column=style.SQL_FIELD(qn(field.m2m_column_name())), type=style.SQL_COLTYPE(models.ForeignKey(model).db_type( connection=self.connection)), options=style.SQL_KEYWORD('NOT NULL') ), columndef.format( column=style.SQL_FIELD(qn(field.m2m_reverse_name())), type=style.SQL_COLTYPE(models.ForeignKey(field.rel.to).db_type( connection=self.connection)), options=style.SQL_KEYWORD('NOT NULL') ), ] deferred = [ (field.m2m_db_table(), field.m2m_column_name(), opts.db_table, opts.pk.column), (field.m2m_db_table(), field.m2m_reverse_name(), field.rel.to._meta.db_table, field.rel.to._meta.pk.column) ] return table_output, deferred def sql_destroy_indexes_for_fields(self, model, fields, style): # Django 1.6 if len(fields) == 1 and fields[0].db_tablespace: tablespace_sql = self.connection.ops.tablespace_sql( 
fields[0].db_tablespace) elif model._meta.db_tablespace: tablespace_sql = self.connection.ops.tablespace_sql( model._meta.db_tablespace) else: tablespace_sql = "" if tablespace_sql: tablespace_sql = " " + tablespace_sql field_names = [] qn = self.connection.ops.quote_name for f in fields: field_names.append(style.SQL_FIELD(qn(f.column))) index_name = "{0}_{1}".format(model._meta.db_table, self._digest([f.name for f in fields])) return [ style.SQL_KEYWORD("DROP INDEX") + " " + style.SQL_TABLE(qn(truncate_name(index_name, self.connection.ops.max_name_length()))) + " " + style.SQL_KEYWORD("ON") + " " + style.SQL_TABLE(qn(model._meta.db_table)) + ";", ] mysql-utilities-1.6.4/mysql/connector/django/base.py0000644001577100752670000005021612717544565022273 0ustar pb2usercommon# MySQL Connector/Python - MySQL driver written in Python. """Django database Backend using MySQL Connector/Python This Django database backend is heavily based on the MySQL backend coming with Django. Changes include: * Support for microseconds (MySQL 5.6.3 and later) * Using INFORMATION_SCHEMA where possible * Using new defaults for, for example SQL_AUTO_IS_NULL Requires and comes with MySQL Connector/Python v1.1 and later: http://dev.mysql.com/downloads/connector/python/ """ from __future__ import unicode_literals from datetime import datetime import sys import warnings import django from django.utils.functional import cached_property try: import mysql.connector from mysql.connector.conversion import MySQLConverter, MySQLConverterBase from mysql.connector.catch23 import PY2 except ImportError as err: from django.core.exceptions import ImproperlyConfigured raise ImproperlyConfigured( "Error loading mysql.connector module: {0}".format(err)) try: version = mysql.connector.__version_info__[0:3] except AttributeError: from mysql.connector.version import VERSION version = VERSION[0:3] try: from _mysql_connector import datetime_to_mysql, time_to_mysql except ImportError: HAVE_CEXT = False else: HAVE_CEXT = True if version < (1, 1): from django.core.exceptions import ImproperlyConfigured raise ImproperlyConfigured( "MySQL Connector/Python v1.1.0 or newer " "is required; you have %s" % mysql.connector.__version__) from django.db import utils if django.VERSION < (1, 7): from django.db.backends import util else: from django.db.backends import utils as backend_utils if django.VERSION >= (1, 8): from django.db.backends.base.base import BaseDatabaseWrapper else: from django.db.backends import BaseDatabaseWrapper from django.db.backends.signals import connection_created from django.utils import (six, timezone, dateparse) from django.conf import settings from mysql.connector.django.client import DatabaseClient from mysql.connector.django.creation import DatabaseCreation from mysql.connector.django.introspection import DatabaseIntrospection from mysql.connector.django.validation import DatabaseValidation from mysql.connector.django.features import DatabaseFeatures from mysql.connector.django.operations import DatabaseOperations if django.VERSION >= (1, 7): from mysql.connector.django.schema import DatabaseSchemaEditor DatabaseError = mysql.connector.DatabaseError IntegrityError = mysql.connector.IntegrityError NotSupportedError = mysql.connector.NotSupportedError def adapt_datetime_with_timezone_support(value): # Equivalent to DateTimeField.get_db_prep_value. Used only by raw SQL. if settings.USE_TZ: if timezone.is_naive(value): warnings.warn("MySQL received a naive datetime (%s)" " while time zone support is active." 
% value, RuntimeWarning) default_timezone = timezone.get_default_timezone() value = timezone.make_aware(value, default_timezone) value = value.astimezone(timezone.utc).replace(tzinfo=None) if HAVE_CEXT: return datetime_to_mysql(value) else: return value.strftime("%Y-%m-%d %H:%M:%S.%f") class DjangoMySQLConverter(MySQLConverter): """Custom converter for Django for MySQLConnection""" def _TIME_to_python(self, value, dsc=None): """Return MySQL TIME data type as datetime.time() Returns datetime.time() """ return dateparse.parse_time(value.decode('utf-8')) def _DATETIME_to_python(self, value, dsc=None): """Connector/Python always returns naive datetime.datetime Connector/Python always returns naive timestamps since MySQL has no time zone support. Since Django needs non-naive, we need to add the UTC time zone. Returns datetime.datetime() """ if not value: return None dt = MySQLConverter._DATETIME_to_python(self, value) if dt is None: return None if settings.USE_TZ and timezone.is_naive(dt): dt = dt.replace(tzinfo=timezone.utc) return dt def _safetext_to_mysql(self, value): if PY2: return self._unicode_to_mysql(value) else: return self._str_to_mysql(value) def _safebytes_to_mysql(self, value): return self._bytes_to_mysql(value) class DjangoCMySQLConverter(MySQLConverterBase): """Custom converter for Django for CMySQLConnection""" def _TIME_to_python(self, value, dsc=None): """Return MySQL TIME data type as datetime.time() Returns datetime.time() """ return dateparse.parse_time(str(value)) def _DATETIME_to_python(self, value, dsc=None): """Connector/Python always returns naive datetime.datetime Connector/Python always returns naive timestamps since MySQL has no time zone support. Since Django needs non-naive, we need to add the UTC time zone. Returns datetime.datetime() """ if not value: return None if settings.USE_TZ and timezone.is_naive(value): value = value.replace(tzinfo=timezone.utc) return value class CursorWrapper(object): """Wrapper around MySQL Connector/Python's cursor class. The cursor class is defined by the options passed to MySQL Connector/Python. If buffered option is True in those options, MySQLCursorBuffered will be used. """ codes_for_integrityerror = (1048,) def __init__(self, cursor): self.cursor = cursor def _execute_wrapper(self, method, query, args): """Wrapper around execute() and executemany()""" try: return method(query, args) except (mysql.connector.ProgrammingError) as err: six.reraise(utils.ProgrammingError, utils.ProgrammingError(err.msg), sys.exc_info()[2]) except (mysql.connector.IntegrityError) as err: six.reraise(utils.IntegrityError, utils.IntegrityError(err.msg), sys.exc_info()[2]) except mysql.connector.OperationalError as err: # Map some error codes to IntegrityError, since they seem to be # misclassified and Django would prefer the more logical place. 
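            # Editor's note (illustrative): MySQL error 1048
            # (ER_BAD_NULL_ERROR, "Column ... cannot be null") arrives as an
            # OperationalError but is re-raised as Django's IntegrityError.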
if err.args[0] in self.codes_for_integrityerror: six.reraise(utils.IntegrityError, utils.IntegrityError(err.msg), sys.exc_info()[2]) else: six.reraise(utils.DatabaseError, utils.DatabaseError(err.msg), sys.exc_info()[2]) except mysql.connector.DatabaseError as err: six.reraise(utils.DatabaseError, utils.DatabaseError(err.msg), sys.exc_info()[2]) def _adapt_execute_args_dict(self, args): if not args: return args new_args = dict(args) for key, value in args.items(): if isinstance(value, datetime): new_args[key] = adapt_datetime_with_timezone_support(value) return new_args def _adapt_execute_args(self, args): if not args: return args new_args = list(args) for i, arg in enumerate(args): if isinstance(arg, datetime): new_args[i] = adapt_datetime_with_timezone_support(arg) return tuple(new_args) def execute(self, query, args=None): """Executes the given operation This wrapper method around the execute()-method of the cursor is mainly needed to re-raise using different exceptions. """ if isinstance(args, dict): new_args = self._adapt_execute_args_dict(args) else: new_args = self._adapt_execute_args(args) return self._execute_wrapper(self.cursor.execute, query, new_args) def executemany(self, query, args): """Executes the given operation This wrapper method around the executemany()-method of the cursor is mainly needed to re-raise using different exceptions. """ return self._execute_wrapper(self.cursor.executemany, query, args) def __getattr__(self, attr): """Return attribute of wrapped cursor""" return getattr(self.cursor, attr) def __iter__(self): """Returns iterator over wrapped cursor""" return iter(self.cursor) def __enter__(self): return self def __exit__(self, exc_type, exc_value, exc_traceback): self.close() class DatabaseWrapper(BaseDatabaseWrapper): vendor = 'mysql' # This dictionary maps Field objects to their associated MySQL column # types, as strings. Column-type strings can contain format strings; they'll # be interpolated against the values of Field.__dict__ before being output. # If a column type is set to None, it won't be included in the output. 
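    # For example, a CharField(max_length=100) ends up as
    # 'varchar(100)' once %(max_length)s is interpolated from the
    # field's attributes.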
# Moved from DatabaseCreation class in Django v1.8 _data_types = { 'AutoField': 'integer AUTO_INCREMENT', 'BinaryField': 'longblob', 'BooleanField': 'bool', 'CharField': 'varchar(%(max_length)s)', 'CommaSeparatedIntegerField': 'varchar(%(max_length)s)', 'DateField': 'date', 'DateTimeField': 'datetime', 'DecimalField': 'numeric(%(max_digits)s, %(decimal_places)s)', 'DurationField': 'bigint', 'FileField': 'varchar(%(max_length)s)', 'FilePathField': 'varchar(%(max_length)s)', 'FloatField': 'double precision', 'IntegerField': 'integer', 'BigIntegerField': 'bigint', 'IPAddressField': 'char(15)', 'GenericIPAddressField': 'char(39)', 'NullBooleanField': 'bool', 'OneToOneField': 'integer', 'PositiveIntegerField': 'integer UNSIGNED', 'PositiveSmallIntegerField': 'smallint UNSIGNED', 'SlugField': 'varchar(%(max_length)s)', 'SmallIntegerField': 'smallint', 'TextField': 'longtext', 'TimeField': 'time', 'UUIDField': 'char(32)', } @cached_property def data_types(self): if self.features.supports_microsecond_precision: return dict(self._data_types, DateTimeField='datetime(6)', TimeField='time(6)') else: return self._data_types operators = { 'exact': '= %s', 'iexact': 'LIKE %s', 'contains': 'LIKE BINARY %s', 'icontains': 'LIKE %s', 'regex': 'REGEXP BINARY %s', 'iregex': 'REGEXP %s', 'gt': '> %s', 'gte': '>= %s', 'lt': '< %s', 'lte': '<= %s', 'startswith': 'LIKE BINARY %s', 'endswith': 'LIKE BINARY %s', 'istartswith': 'LIKE %s', 'iendswith': 'LIKE %s', } # The patterns below are used to generate SQL pattern lookup clauses when # the right-hand side of the lookup isn't a raw string (it might be an # expression or the result of a bilateral transformation). # In those cases, special characters for LIKE operators (e.g. \, *, _) # should be escaped on database side. # # Note: we use str.format() here for readability as '%' is used as a # wildcard for the LIKE operator. 
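    # For example, a 'contains' lookup against an expression renders as
    # LIKE BINARY CONCAT('%', <expr>, '%') after parameter substitution,
    # with pattern_esc applied to <expr> first so that backslashes and
    # the LIKE wildcards '%' and '_' match literally.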
pattern_esc = (r"REPLACE(REPLACE(REPLACE({}, '\\', '\\\\')," r" '%%', '\%%'), '_', '\_')") pattern_ops = { 'contains': "LIKE BINARY CONCAT('%%', {}, '%%')", 'icontains': "LIKE CONCAT('%%', {}, '%%')", 'startswith': "LIKE BINARY CONCAT({}, '%%')", 'istartswith': "LIKE CONCAT({}, '%%')", 'endswith': "LIKE BINARY CONCAT('%%', {})", 'iendswith': "LIKE CONCAT('%%', {})", } SchemaEditorClass = DatabaseSchemaEditor Database = mysql.connector def __init__(self, *args, **kwargs): super(DatabaseWrapper, self).__init__(*args, **kwargs) try: self._use_pure = self.settings_dict['OPTIONS']['use_pure'] except KeyError: self._use_pure = True if not self.use_pure: self.converter = DjangoCMySQLConverter() else: self.converter = DjangoMySQLConverter() self.ops = DatabaseOperations(self) self.features = DatabaseFeatures(self) self.client = DatabaseClient(self) self.creation = DatabaseCreation(self) self.introspection = DatabaseIntrospection(self) self.validation = DatabaseValidation(self) def _valid_connection(self): if self.connection: return self.connection.is_connected() return False def get_connection_params(self): # Django 1.6 kwargs = { 'charset': 'utf8', 'use_unicode': True, 'buffered': False, 'consume_results': True, } settings_dict = self.settings_dict if settings_dict['USER']: kwargs['user'] = settings_dict['USER'] if settings_dict['NAME']: kwargs['database'] = settings_dict['NAME'] if settings_dict['PASSWORD']: kwargs['passwd'] = settings_dict['PASSWORD'] if settings_dict['HOST'].startswith('/'): kwargs['unix_socket'] = settings_dict['HOST'] elif settings_dict['HOST']: kwargs['host'] = settings_dict['HOST'] if settings_dict['PORT']: kwargs['port'] = int(settings_dict['PORT']) # Raise exceptions for database warnings if DEBUG is on kwargs['raise_on_warnings'] = settings.DEBUG kwargs['client_flags'] = [ # Need potentially affected rows on UPDATE mysql.connector.constants.ClientFlag.FOUND_ROWS, ] try: kwargs.update(settings_dict['OPTIONS']) except KeyError: # OPTIONS missing is OK pass return kwargs def get_new_connection(self, conn_params): # Django 1.6 if not self.use_pure: conn_params['converter_class'] = DjangoCMySQLConverter else: conn_params['converter_class'] = DjangoMySQLConverter cnx = mysql.connector.connect(**conn_params) return cnx def init_connection_state(self): # Django 1.6 if self.mysql_version < (5, 5, 3): # See sysvar_sql_auto_is_null in MySQL Reference manual self.connection.cmd_query("SET SQL_AUTO_IS_NULL = 0") if 'AUTOCOMMIT' in self.settings_dict: try: # Django 1.6 self.set_autocommit(self.settings_dict['AUTOCOMMIT']) except AttributeError: self._set_autocommit(self.settings_dict['AUTOCOMMIT']) def create_cursor(self): # Django 1.6 cursor = self.connection.cursor() return CursorWrapper(cursor) def _connect(self): """Setup the connection with MySQL""" self.connection = self.get_new_connection(self.get_connection_params()) connection_created.send(sender=self.__class__, connection=self) self.init_connection_state() def _cursor(self): """Return a CursorWrapper object Returns a CursorWrapper """ try: # Django 1.6 return super(DatabaseWrapper, self)._cursor() except AttributeError: if not self.connection: self._connect() return self.create_cursor() def get_server_version(self): """Returns the MySQL server version of current connection Returns a tuple """ try: # Django 1.6 self.ensure_connection() except AttributeError: if not self.connection: self._connect() return self.connection.get_server_version() def disable_constraint_checking(self): """Disables foreign key checks Disables 
foreign key checks, primarily for use in adding rows with forward references. Always returns True, to indicate constraint checks need to be re-enabled. Returns True """ self.cursor().execute('SET @@session.foreign_key_checks = 0') return True def enable_constraint_checking(self): """Re-enable foreign key checks Re-enable foreign key checks after they have been disabled. """ # Override needs_rollback in case constraint_checks_disabled is # nested inside transaction.atomic. if django.VERSION >= (1, 6): self.needs_rollback, needs_rollback = False, self.needs_rollback try: self.cursor().execute('SET @@session.foreign_key_checks = 1') finally: if django.VERSION >= (1, 6): self.needs_rollback = needs_rollback def check_constraints(self, table_names=None): """Check rows in tables for invalid foreign key references Checks each table name in `table_names` for rows with invalid foreign key references. This method is intended to be used in conjunction with `disable_constraint_checking()` and `enable_constraint_checking()`, to determine if rows with invalid references were entered while constraint checks were off. Raises an IntegrityError on the first invalid foreign key reference encountered (if any) and provides detailed information about the invalid reference in the error message. Backends can override this method if they can more directly apply constraint checking (e.g. via "SET CONSTRAINTS ALL IMMEDIATE") """ ref_query = """ SELECT REFERRING.`{0}`, REFERRING.`{1}` FROM `{2}` as REFERRING LEFT JOIN `{3}` as REFERRED ON (REFERRING.`{4}` = REFERRED.`{5}`) WHERE REFERRING.`{6}` IS NOT NULL AND REFERRED.`{7}` IS NULL""" cursor = self.cursor() if table_names is None: table_names = self.introspection.table_names(cursor) for table_name in table_names: primary_key_column_name = \ self.introspection.get_primary_key_column(cursor, table_name) if not primary_key_column_name: continue key_columns = self.introspection.get_key_columns(cursor, table_name) for column_name, referenced_table_name, referenced_column_name \ in key_columns: cursor.execute(ref_query.format(primary_key_column_name, column_name, table_name, referenced_table_name, column_name, referenced_column_name, column_name, referenced_column_name)) for bad_row in cursor.fetchall(): msg = ("The row in table '{0}' with primary key '{1}' has " "an invalid foreign key: {2}.{3} contains a value " "'{4}' that does not have a corresponding value in " "{5}.{6}.".format(table_name, bad_row[0], table_name, column_name, bad_row[1], referenced_table_name, referenced_column_name)) raise utils.IntegrityError(msg) def _rollback(self): try: BaseDatabaseWrapper._rollback(self) except NotSupportedError: pass def _set_autocommit(self, autocommit): # Django 1.6 with self.wrap_database_errors: self.connection.autocommit = autocommit def schema_editor(self, *args, **kwargs): """Returns a new instance of this backend's SchemaEditor""" # Django 1.7 return DatabaseSchemaEditor(self, *args, **kwargs) def is_usable(self): # Django 1.6 return self.connection.is_connected() @cached_property def mysql_version(self): config = self.get_connection_params() temp_conn = mysql.connector.connect(**config) server_version = temp_conn.get_server_version() temp_conn.close() return server_version @property def use_pure(self): return not HAVE_CEXT or self._use_pure mysql-utilities-1.6.4/mysql/connector/fabric/0000755001577100752670000000000012747674052020765 5ustar pb2usercommonmysql-utilities-1.6.4/mysql/connector/fabric/__init__.py0000644001577100752670000000424512717544565023105 0ustar 
pb2usercommon# MySQL Connector/Python - MySQL driver written in Python. # Copyright (c) 2013, 2014, Oracle and/or its affiliates. All rights reserved. # MySQL Connector/Python is licensed under the terms of the GPLv2 # , like most # MySQL Connectors. There are special exceptions to the terms and # conditions of the GPLv2 as it is applied to this software, see the # FOSS License Exception # . # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA """MySQL Fabric support""" from collections import namedtuple # Order of field_names must match how Fabric is returning the data FabricMySQLServer = namedtuple( 'FabricMySQLServer', ['uuid', 'group', 'host', 'port', 'mode', 'status', 'weight'] ) # Order of field_names must match how Fabric is returning the data FabricShard = namedtuple( 'FabricShard', ['database', 'table', 'column', 'key', 'shard', 'shard_type', 'group', 'global_group'] ) from .connection import ( MODE_READONLY, MODE_READWRITE, STATUS_PRIMARY, STATUS_SECONDARY, SCOPE_GLOBAL, SCOPE_LOCAL, Fabric, FabricConnection, MySQLFabricConnection, FabricSet, ) def connect(**kwargs): """Create a MySQLFabricConnection object""" return MySQLFabricConnection(**kwargs) __all__ = [ 'MODE_READWRITE', 'MODE_READONLY', 'STATUS_PRIMARY', 'STATUS_SECONDARY', 'SCOPE_GLOBAL', 'SCOPE_LOCAL', 'FabricMySQLServer', 'FabricShard', 'connect', 'Fabric', 'FabricConnection', 'MySQLFabricConnection', 'FabricSet', ] mysql-utilities-1.6.4/mysql/connector/fabric/caching.py0000644001577100752670000002220612717544565022737 0ustar pb2usercommon# MySQL Connector/Python - MySQL driver written in Python. # Copyright (c) 2013, 2015, Oracle and/or its affiliates. All rights reserved. # MySQL Connector/Python is licensed under the terms of the GPLv2 # , like most # MySQL Connectors. There are special exceptions to the terms and # conditions of the GPLv2 as it is applied to this software, see the # FOSS License Exception # . # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA """Implementing caching mechanisms for MySQL Fabric""" import bisect from datetime import datetime, timedelta from hashlib import sha1 import logging import threading from . 
import FabricShard _LOGGER = logging.getLogger('myconnpy-fabric') _CACHE_TTL = 1 * 60 # 1 minute def insort_right_rev(alist, new_element, low=0, high=None): """Similar to bisect.insort_right but for reverse sorted lists This code is similar to the Python code found in Lib/bisect.py. We simply change the comparison from 'less than' to 'greater than'. """ if low < 0: raise ValueError('low must be non-negative') if high is None: high = len(alist) while low < high: middle = (low + high) // 2 if new_element > alist[middle]: high = middle else: low = middle + 1 alist.insert(low, new_element) class CacheEntry(object): """Base class for MySQL Fabric cache entries""" def __init__(self, version=None, fabric_uuid=None, ttl=_CACHE_TTL): self.version = version self.fabric_uuid = fabric_uuid self.last_updated = datetime.utcnow() self._ttl = ttl @classmethod def hash_index(cls, part1, part2=None): """Create hash for indexing""" raise NotImplementedError @property def invalid(self): """Returns True if entry is not valid any longer This property returns True when the entry is not valid any longer. The entry is valid when now > (last updated + ttl), where ttl is in seconds. """ if not self.last_updated: return False atime = self.last_updated + timedelta(seconds=self._ttl) return datetime.utcnow() > atime def reset_ttl(self): """Reset the Time to Live""" self.last_updated = datetime.utcnow() def invalidate(self): """Invalidates the cache entry""" self.last_updated = None class CacheShardTable(CacheEntry): """Cache entry for a Fabric sharded table""" def __init__(self, shard, version=None, fabric_uuid=None): if not isinstance(shard, FabricShard): raise ValueError("shard argument must be a FabricShard instance") super(CacheShardTable, self).__init__(version=version, fabric_uuid=fabric_uuid) self.partitioning = {} self._shard = shard self.keys = [] self.keys_reversed = [] if shard.key and shard.group: self.add_partition(shard.key, shard.group) def __getattr__(self, attr): return getattr(self._shard, attr) def add_partition(self, key, group): """Add sharding information for a group""" if self.shard_type == 'RANGE': key = int(key) elif self.shard_type == 'RANGE_DATETIME': try: if ':' in key: key = datetime.strptime(key, "%Y-%m-%d %H:%M:%S") else: key = datetime.strptime(key, "%Y-%m-%d").date() except: raise ValueError( "RANGE_DATETIME key could not be parsed, was: {0}".format( key )) elif self.shard_type == 'RANGE_STRING': pass elif self.shard_type == "HASH": pass else: raise ValueError("Unsupported sharding type {0}".format( self.shard_type )) self.partitioning[key] = { 'group': group, } self.reset_ttl() bisect.insort_right(self.keys, key) insort_right_rev(self.keys_reversed, key) @classmethod def hash_index(cls, part1, part2=None): """Create hash for indexing""" return sha1(part1.encode('utf-8') + part2.encode('utf-8')).hexdigest() def __repr__(self): return "{class_}({database}.{table}.{column})".format( class_=self.__class__, database=self.database, table=self.table, column=self.column ) class CacheGroup(CacheEntry): """Cache entry for a Fabric group""" def __init__(self, group_name, servers): super(CacheGroup, self).__init__(version=None, fabric_uuid=None) self.group_name = group_name self.servers = servers @classmethod def hash_index(cls, part1, part2=None): """Create hash for indexing""" return sha1(part1.encode('utf-8')).hexdigest() def __repr__(self): return "{class_}({group})".format( class_=self.__class__, group=self.group_name, ) class FabricCache(object): """Singleton class for caching Fabric data 
Only one instance of this class can exists globally. """ def __init__(self, ttl=_CACHE_TTL): self._ttl = ttl self._sharding = {} self._groups = {} self.__sharding_lock = threading.Lock() self.__groups_lock = threading.Lock() def remove_group(self, entry_hash): """Remove cache entry for group""" with self.__groups_lock: try: del self._groups[entry_hash] except KeyError: # not cached, that's OK pass else: _LOGGER.debug("Group removed from cache") def remove_shardtable(self, entry_hash): """Remove cache entry for shard""" with self.__sharding_lock: try: del self._sharding[entry_hash] except KeyError: # not cached, that's OK pass def sharding_cache_table(self, shard, version=None, fabric_uuid=None): """Cache information about a shard""" entry_hash = CacheShardTable.hash_index(shard.database, shard.table) with self.__sharding_lock: try: entry = self._sharding[entry_hash] entry.add_partition(shard.key, shard.group) except KeyError: # New cache entry entry = CacheShardTable(shard, version=version, fabric_uuid=fabric_uuid) self._sharding[entry_hash] = entry def cache_group(self, group_name, servers): """Cache information about a group""" entry_hash = CacheGroup.hash_index(group_name) with self.__groups_lock: try: entry = self._groups[entry_hash] entry.servers = servers entry.reset_ttl() _LOGGER.debug("Recaching group {0} with {1}".format( group_name, servers)) except KeyError: # New cache entry entry = CacheGroup(group_name, servers) self._groups[entry_hash] = entry _LOGGER.debug("Caching group {0} with {1}".format( group_name, servers)) def sharding_search(self, database, table): """Search cache for a shard based on database and table""" entry_hash = CacheShardTable.hash_index(database, table) entry = None try: entry = self._sharding[entry_hash] if entry.invalid: _LOGGER.debug("{0} invalidated".format(entry)) self.remove_shardtable(entry_hash) return None except KeyError: # Nothing in cache return None return entry def group_search(self, group_name): """Search cache for a group based on its name""" entry_hash = CacheGroup.hash_index(group_name) entry = None try: entry = self._groups[entry_hash] if entry.invalid: _LOGGER.debug("{0} invalidated".format(entry)) self.remove_group(entry_hash) return None except KeyError: # Nothing in cache return None return entry def __repr__(self): return "{class_}(groups={nrgroups},shards={nrshards})".format( class_=self.__class__, nrgroups=len(self._groups), nrshards=len(self._sharding) ) mysql-utilities-1.6.4/mysql/connector/fabric/connection.py0000644001577100752670000015602512717544565023511 0ustar pb2usercommon# MySQL Connector/Python - MySQL driver written in Python. # Copyright (c) 2013, 2015, Oracle and/or its affiliates. All rights reserved. # MySQL Connector/Python is licensed under the terms of the GPLv2 # , like most # MySQL Connectors. There are special exceptions to the terms and # conditions of the GPLv2 as it is applied to this software, see the # FOSS License Exception # . # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA """Implementing communication with MySQL Fabric""" import sys import datetime import time import uuid from base64 import b16decode from bisect import bisect from hashlib import md5 import logging import socket import collections # pylint: disable=F0401,E0611 try: from xmlrpclib import Fault, ServerProxy, Transport import urllib2 from httplib import BadStatusLine except ImportError: # Python v3 from xmlrpc.client import Fault, ServerProxy, Transport import urllib.request as urllib2 from http.client import BadStatusLine if sys.version_info[0] == 2: try: from httplib import HTTPSConnection except ImportError: HAVE_SSL = False else: HAVE_SSL = True else: try: from http.client import HTTPSConnection except ImportError: HAVE_SSL = False else: HAVE_SSL = True # pylint: enable=F0401,E0611 import mysql.connector from ..connection import MySQLConnection from ..conversion import MySQLConverter from ..pooling import MySQLConnectionPool from ..errors import ( Error, InterfaceError, NotSupportedError, MySQLFabricError, InternalError, DatabaseError ) from ..cursor import ( MySQLCursor, MySQLCursorBuffered, MySQLCursorRaw, MySQLCursorBufferedRaw ) from .. import errorcode from . import FabricMySQLServer, FabricShard from .caching import FabricCache from .balancing import WeightedRoundRobin from .. import version from ..catch23 import PY2, isunicode, UNICODE_TYPES RESET_CACHE_ON_ERROR = ( errorcode.CR_SERVER_LOST, errorcode.ER_OPTION_PREVENTS_STATEMENT, ) # Errors to be reported to Fabric REPORT_ERRORS = ( errorcode.CR_SERVER_LOST, errorcode.CR_SERVER_GONE_ERROR, errorcode.CR_CONN_HOST_ERROR, errorcode.CR_CONNECTION_ERROR, errorcode.CR_IPSOCK_ERROR, ) REPORT_ERRORS_EXTRA = [] DEFAULT_FABRIC_PROTOCOL = 'xmlrpc' MYSQL_FABRIC_PORT = { 'xmlrpc': 32274, 'mysql': 32275 } FABRICS = {} # For attempting to connect with Fabric _CNX_ATTEMPT_DELAY = 1 _CNX_ATTEMPT_MAX = 3 _GETCNX_ATTEMPT_DELAY = 1 _GETCNX_ATTEMPT_MAX = 3 MODE_READONLY = 1 MODE_WRITEONLY = 2 MODE_READWRITE = 3 STATUS_FAULTY = 0 STATUS_SPARE = 1 STATUS_SECONDARY = 2 STATUS_PRIMARY = 3 SCOPE_GLOBAL = 'GLOBAL' SCOPE_LOCAL = 'LOCAL' _SERVER_STATUS_FAULTY = 'FAULTY' _CNX_PROPERTIES = { # name: ((valid_types), description, default) 'group': ((str,), "Name of group of servers", None), 'key': (tuple([int, str, datetime.datetime, datetime.date] + list(UNICODE_TYPES)), "Sharding key", None), 'tables': ((tuple, list), "List of tables in query", None), 'mode': ((int,), "Read-Only, Write-Only or Read-Write", MODE_READWRITE), 'shard': ((str,), "Identity of the shard for direct connection", None), 'mapping': ((str,), "", None), 'scope': ((str,), "GLOBAL for accessing Global Group, or LOCAL", SCOPE_LOCAL), 'attempts': ((int,), "Attempts for getting connection", _GETCNX_ATTEMPT_MAX), 'attempt_delay': ((int,), "Seconds to wait between each attempt", _GETCNX_ATTEMPT_DELAY), } _LOGGER = logging.getLogger('myconnpy-fabric') class MySQLRPCProtocol(object): """Class using MySQL protocol to query Fabric. 
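    Commands are sent to Fabric as stored procedure calls, built up as
    "CALL <group>.<command>(<params>)" and executed over a regular
    MySQL connection (see the execute() method below).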
""" def __init__(self, fabric, host, port, connect_attempts, connect_delay): self.converter = MySQLConverter() self.handler = FabricMySQLConnection(fabric, host, port, connect_attempts, connect_delay) self.handler.connect() def _process_params_dict(self, params): """Process query parameters given as dictionary""" try: res = [] for key, value in list(params.items()): conv = value conv = self.converter.to_mysql(conv) conv = self.converter.escape(conv) conv = self.converter.quote(conv) res.append("{0}={1}".format(key, str(conv))) except Exception as err: raise mysql.connector.errors.ProgrammingError( "Failed processing pyformat-parameters; %s" % err) else: return res def _process_params(self, params): """Process query parameters.""" try: res = params res = [self.converter.to_mysql(i) for i in res] res = [self.converter.escape(i) for i in res] res = [self.converter.quote(i) for i in res] res = [str(i) for i in res] except Exception as err: raise mysql.connector.errors.ProgrammingError( "Failed processing format-parameters; %s" % err) else: return tuple(res) def _execute_cmd(self, stmt, params=None): """Executes the given query Returns a list containing response from Fabric """ if not params: params = () cur = self.handler.connection.cursor(dictionary=True) results = [] for res in cur.execute(stmt, params, multi=True): results.append(res.fetchall()) return results def create_params(self, *args, **kwargs): """Process arguments to create query parameters. """ params = [] if args: args = self._process_params(args) params.extend(args) if kwargs: kwargs = self._process_params_dict(kwargs) params.extend(kwargs) params = ', '.join(params) return params def execute(self, group, command, *args, **kwargs): """Executes the given command with MySQL protocol Executes the given command with the given parameters. Returns an iterator to navigate to navigate through the result set returned by Fabric """ params = self.create_params(*args, **kwargs) cmd = "CALL {0}.{1}({2})".format(group, command, params) fab_set = None try: data = self._execute_cmd(cmd) fab_set = FabricMySQLSet(data) except (Fault, socket.error, InterfaceError) as exc: msg = "Executing {group}.{command} failed: {error}".format( group=group, command=command, error=str(exc)) raise InterfaceError(msg) return fab_set class XMLRPCProtocol(object): """Class using XML-RPC protocol to query Fabric. """ def __init__(self, fabric, host, port, connect_attempts, connect_delay): self.handler = FabricXMLRPCConnection(fabric, host, port, connect_attempts, connect_delay) self.handler.connect() def execute(self, group, command, *args, **kwargs): """Executes the given command with XML-RPC protocol Executes the given command with the given parameters Returns an iterator to navigate to navigate through the result set returned by Fabric """ try: grp = getattr(self.handler.proxy, group) cmd = getattr(grp, command) except AttributeError as exc: raise ValueError("{group}.{command} not available ({err})".format( group=group, command=command, err=str(exc))) fab_set = None try: data = cmd(*args, **kwargs) fab_set = FabricSet(data) except (Fault, socket.error, InterfaceError) as exc: msg = "Executing {group}.{command} failed: {error}".format( group=group, command=command, error=str(exc)) raise InterfaceError(msg) return fab_set class FabricMySQLResponse(object): """Class used to parse a response got from Fabric with MySQL protocol. 
""" def __init__(self, data): info = data[0][0] (fabric_uuid_str, ttl, error) = (info['fabric_uuid'], info['ttl'], info['message']) if error: raise InterfaceError(error) self.fabric_uuid_str = fabric_uuid_str self.ttl = ttl self.coded_rows = data[1] class FabricMySQLSet(FabricMySQLResponse): """Iterator to navigate through the result set returned from Fabric with MySQL Protocol. """ def __init__(self, data): """Initialize the FabricSet object. """ super(FabricMySQLSet, self).__init__(data) self.__names = self.coded_rows[0].keys() self.__rows = self.coded_rows self.__result = collections.namedtuple('ResultSet', self.__names) def rowcount(self): """The number of rows in the result set. """ return len(self.__rows) def rows(self): """Iterate over the rows of the result set. Each row is a named tuple. """ for row in self.__rows: yield self.__result(**row) def row(self, index): """Indexing method for a row. Each row is a named tuple. """ return self.__result(**self.__rows[index]) class FabricResponse(object): """Class used to parse a response got from Fabric. """ SUPPORTED_VERSION = 1 def __init__(self, data): """Initialize the FabricResponse object """ (format_version, fabric_uuid_str, ttl, error, rows) = data if error: raise InterfaceError(error) if format_version != FabricResponse.SUPPORTED_VERSION: raise InterfaceError( "Supported protocol has version {sversion}. Got a response " "from MySQL Fabric with version {gversion}.".format( sversion=FabricResponse.SUPPORTED_VERSION, gversion=format_version) ) self.format_version = format_version self.fabric_uuid_str = fabric_uuid_str self.ttl = ttl self.coded_rows = rows class FabricSet(FabricResponse): """Iterator to navigate through the result set returned from Fabric """ def __init__(self, data): """Initialize the FabricSet object. """ super(FabricSet, self).__init__(data) assert len(self.coded_rows) == 1 self.__names = self.coded_rows[0]['info']['names'] self.__rows = self.coded_rows[0]['rows'] assert all(len(self.__names) == len(row) for row in self.__rows) or \ len(self.__rows) == 0 self.__result = collections.namedtuple('ResultSet', self.__names) def rowcount(self): """The number of rows in the result set. """ return len(self.__rows) def rows(self): """Iterate over the rows of the result set. Each row is a named tuple. """ for row in self.__rows: yield self.__result(*row) def row(self, index): """Indexing method for a row. Each row is a named tuple. """ return self.__result(*self.__rows[index]) def extra_failure_report(error_codes): """Add MySQL error to be reported to Fabric This function adds error_codes to the error list to be reported to Fabric. To reset the custom error reporting list, pass None or empty list. The error_codes argument can be either a MySQL error code defined in the errorcode module, or list of error codes. Raises AttributeError when code is not an int. """ global REPORT_ERRORS_EXTRA # pylint: disable=W0603 if not error_codes: REPORT_ERRORS_EXTRA = [] if not isinstance(error_codes, (list, tuple)): error_codes = [error_codes] for code in error_codes: if not isinstance(code, int) or not (code >= 1000 and code < 3000): raise AttributeError("Unknown or invalid error code.") REPORT_ERRORS_EXTRA.append(code) def _fabric_xmlrpc_uri(host, port): """Create an XMLRPC URI for connecting to Fabric This method will create a URI using the host and TCP/IP port suitable for connecting to a MySQL Fabric instance. Returns a URI. 
""" return 'http://{host}:{port}'.format(host=host, port=port) def _fabric_server_uuid(host, port): """Create a UUID using host and port""" return uuid.uuid3(uuid.NAMESPACE_URL, _fabric_xmlrpc_uri(host, port)) def _validate_ssl_args(ssl_ca, ssl_key, ssl_cert): """Validate the SSL argument. Raises AttributeError is required argument is not set. Returns dict or None. """ if not HAVE_SSL: raise InterfaceError("Python does not support SSL") if any([ssl_ca, ssl_key, ssl_cert]): if not ssl_ca: raise AttributeError("Missing ssl_ca argument.") if (ssl_key or ssl_cert) and not (ssl_key and ssl_cert): raise AttributeError( "ssl_key and ssl_cert need to be both " "specified, or neither." ) return { 'ca': ssl_ca, 'key': ssl_key, 'cert': ssl_cert, } return None if HAVE_SSL: class FabricHTTPSHandler(urllib2.HTTPSHandler): """Class handling HTTPS connections""" def __init__(self, ssl_config): #pylint: disable=E1002 """Initialize""" if PY2: urllib2.HTTPSHandler.__init__(self) else: super().__init__() # pylint: disable=W0104 self._ssl_config = ssl_config def https_open(self, req): """Open HTTPS connection""" return self.do_open(self.get_https_connection, req) def get_https_connection(self, host, timeout=300): """Returns a HTTPSConnection""" return HTTPSConnection( host, key_file=self._ssl_config['key'], cert_file=self._ssl_config['cert'] ) class FabricTransport(Transport): """Custom XMLRPC Transport for Fabric""" user_agent = 'MySQL Connector Python/{0}'.format(version.VERSION_TEXT) def __init__(self, username, password, #pylint: disable=E1002 verbose=0, use_datetime=False, https_handler=None): """Initialize""" if PY2: Transport.__init__(self, use_datetime=False) else: super().__init__(use_datetime=False) self._username = username self._password = password self._use_datetime = use_datetime self.verbose = verbose self._username = username self._password = password self._handlers = [] if self._username and self._password: self._passmgr = urllib2.HTTPPasswordMgrWithDefaultRealm() self._auth_handler = urllib2.HTTPDigestAuthHandler(self._passmgr) else: self._auth_handler = None self._passmgr = None if https_handler: self._handlers.append(https_handler) self._scheme = 'https' else: self._scheme = 'http' if self._auth_handler: self._handlers.append(self._auth_handler) def request(self, host, handler, request_body, verbose=0): """Send XMLRPC request""" uri = '{scheme}://{host}{handler}'.format(scheme=self._scheme, host=host, handler=handler) if self._passmgr: self._passmgr.add_password(None, uri, self._username, self._password) if self.verbose: _LOGGER.debug("FabricTransport: {0}".format(uri)) opener = urllib2.build_opener(*self._handlers) headers = { 'Content-Type': 'text/xml', 'User-Agent': self.user_agent, } req = urllib2.Request(uri, request_body, headers=headers) try: return self.parse_response(opener.open(req)) except (urllib2.URLError, urllib2.HTTPError) as exc: try: code = -1 if exc.code == 400: reason = 'Permission denied' code = exc.code else: reason = exc.reason msg = "{reason} ({code})".format(reason=reason, code=code) except AttributeError: if 'SSL' in str(exc): msg = "SSL error" else: msg = str(exc) raise InterfaceError("Connection with Fabric failed: " + msg) except BadStatusLine: raise InterfaceError("Connection with Fabric failed: check SSL") class Fabric(object): """Class managing MySQL Fabric instances""" def __init__(self, host, username=None, password=None, port=None, connect_attempts=_CNX_ATTEMPT_MAX, connect_delay=_CNX_ATTEMPT_DELAY, report_errors=False, ssl_ca=None, ssl_key=None, 
ssl_cert=None, user=None, protocol=DEFAULT_FABRIC_PROTOCOL): """Initialize""" if protocol == 'xmlrpc': self._protocol_class = XMLRPCProtocol elif protocol == 'mysql': self._protocol_class = MySQLRPCProtocol else: raise InterfaceError( "Protocol not supported by MySQL Fabric," " was '{}'".format(protocol)) if not port: port = MYSQL_FABRIC_PORT[protocol] self._fabric_instances = {} self._fabric_uuid = None self._ttl = 1 * 60 # one minute by default self._version_token = None self._connect_attempts = connect_attempts self._connect_delay = connect_delay self._cache = FabricCache() self._group_balancers = {} self._init_host = host self._init_port = port self._ssl = _validate_ssl_args(ssl_ca, ssl_key, ssl_cert) self._report_errors = report_errors self._protocol = protocol if user and username: raise ValueError("can not specify both user and username") self._username = user or username self._password = password @property def username(self): """Return username used to authenticate with Fabric""" return self._username @property def password(self): """Return password used to authenticate with Fabric""" return self._password @property def ssl_config(self): """Return the SSL configuration""" return self._ssl def seed(self, host=None, port=None): """Get MySQL Fabric Instances This method uses host and port to connect to a MySQL Fabric server and get all the instances managing the same metadata. Raises InterfaceError on errors. """ host = host or self._init_host port = port or self._init_port fabinst = self._protocol_class(self, host, port, connect_attempts=self._connect_attempts, connect_delay=self._connect_delay) fabric_uuid, fabric_version, ttl, fabrics = self.get_fabric_servers( fabinst) if not fabrics: # Raise, something went wrong. raise InterfaceError("Failed getting list of Fabric servers") if self._version_token == fabric_version: return _LOGGER.info( "Loading Fabric configuration version {version}".format( version=fabric_version)) self._fabric_uuid = fabric_uuid self._version_token = fabric_version if ttl > 0: self._ttl = ttl # Update the Fabric servers for fabric in fabrics: inst = self._protocol_class(self, fabric['host'], fabric['port'], connect_attempts=self._connect_attempts, connect_delay=self._connect_delay) inst_uuid = inst.handler.uuid if inst_uuid not in self._fabric_instances: self._fabric_instances[inst_uuid] = inst _LOGGER.debug( "Added new Fabric server {host}:{port}".format( host=inst.handler.host, port=inst.handler.port)) def reset_cache(self, group=None): """Reset cached information This method destroys all cached information. """ if group: _LOGGER.debug("Resetting cache for group '{group}'".format( group=group)) self.get_group_servers(group, use_cache=False) else: _LOGGER.debug("Resetting cache") self._cache = FabricCache() def get_instance(self): """Get a MySQL Fabric Instance This method will get the next available MySQL Fabric Instance. Raises InterfaceError when no instance is available or connected. """ nxt = 0 errmsg = "No MySQL Fabric instance available" if not self._fabric_instances: raise InterfaceError(errmsg + " (not seeded?)") if PY2: instance_list = self._fabric_instances.keys() inst = self._fabric_instances[instance_list[nxt]] else: inst = self._fabric_instances[list(self._fabric_instances)[nxt]] if not inst.handler.is_connected: inst.handler.connect() return inst def report_failure(self, server_uuid, errno): """Report failure to Fabric This method sets the status of a MySQL server identified by server_uuid. 
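        Only error codes listed in REPORT_ERRORS, or added through
        extra_failure_report(), are reported, and only when the Fabric
        instance was created with report_errors=True. For example
        (server_uuid is a placeholder):

            fabric.report_failure(server_uuid, errorcode.CR_SERVER_LOST)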
""" if not self._report_errors: return errno = int(errno) current_host = socket.getfqdn() if errno in REPORT_ERRORS or errno in REPORT_ERRORS_EXTRA: _LOGGER.debug("Reporting error %d of server %s", errno, server_uuid) inst = self.get_instance() try: data = inst.execute('threat', 'report_failure', server_uuid, current_host, errno) FabricResponse(data) except (Fault, socket.error) as exc: _LOGGER.debug("Failed reporting server to Fabric (%s)", str(exc)) # Not requiring further action def get_fabric_servers(self, fabric_cnx=None): """Get all MySQL Fabric instances This method looks up the other MySQL Fabric instances which uses the same metadata. The returned list contains dictionaries with connection information such ass host and port. For example: [ {'host': 'fabric_prod_1.example.com', 'port': 32274 }, {'host': 'fabric_prod_2.example.com', 'port': 32274 }, ] Returns a list of dictionaries """ inst = fabric_cnx or self.get_instance() result = [] err_msg = "Looking up Fabric servers failed using {host}:{port}: {err}" try: fset = inst.execute('dump', 'fabric_nodes', "protocol." + self._protocol) for row in fset.rows(): result.append({'host': row.host, 'port': row.port}) except (Fault, socket.error) as exc: msg = err_msg.format(err=str(exc), host=inst.handler.host, port=inst.handler.port) raise InterfaceError(msg) except (TypeError, AttributeError) as exc: msg = err_msg.format( err="No Fabric server available ({0})".format(exc), host=inst.handler.host, port=inst.handler.port) raise InterfaceError(msg) try: fabric_uuid = uuid.UUID(fset.fabric_uuid_str) except TypeError: fabric_uuid = uuid.uuid4() fabric_version = 0 return fabric_uuid, fabric_version, fset.ttl, result def get_group_servers(self, group, use_cache=True): """Get all MySQL servers in a group This method returns information about all MySQL part of the given high-availability group. When use_cache is set to True, the cached information will be used. Raises InterfaceError on errors. Returns list of FabricMySQLServer objects. """ # Get group information from cache if use_cache: entry = self._cache.group_search(group) if entry: # Cache group information return entry.servers inst = self.get_instance() result = [] try: fset = inst.execute('dump', 'servers', self._version_token, group) except (Fault, socket.error) as exc: msg = ("Looking up MySQL servers failed for group " "{group}: {error}").format(error=str(exc), group=group) raise InterfaceError(msg) weights = [] for row in fset.rows(): # We make sure, when using local groups, we skip the global group if row.group_id == group: mysqlserver = FabricMySQLServer( row.server_uuid, row.group_id, row.host, row.port, row.mode, row.status, row.weight ) result.append(mysqlserver) if mysqlserver.status == STATUS_SECONDARY: weights.append((mysqlserver.uuid, mysqlserver.weight)) self._cache.cache_group(group, result) if weights: self._group_balancers[group] = WeightedRoundRobin(*weights) return result def get_group_server(self, group, mode=None, status=None): """Get a MySQL server from a group The method uses MySQL Fabric to get the correct MySQL server for the specified group. You can specify mode or status, but not both. The mode argument will decide whether the primary or a secondary server is returned. When no secondary server is available, the primary is returned. Status is used to force getting either a primary or a secondary. The returned tuple contains host, port and uuid. Raises InterfaceError on errors; ValueError when both mode and status are given. Returns a FabricMySQLServer object. 
""" if mode and status: raise ValueError( "Either mode or status must be given, not both") errmsg = "No MySQL server available for group '{group}'" servers = self.get_group_servers(group, use_cache=True) if not servers: raise InterfaceError(errmsg.format(group=group)) # Get the Master and return list (host, port, UUID) primary = None secondary = [] for server in servers: if server.status == STATUS_SECONDARY: secondary.append(server) elif server.status == STATUS_PRIMARY: primary = server if mode in (MODE_WRITEONLY, MODE_READWRITE) or status == STATUS_PRIMARY: if not primary: self.reset_cache(group=group) raise InterfaceError((errmsg + ' {query}={value}').format( query='status' if status else 'mode', group=group, value=status or mode)) return primary # Return primary if no secondary is available if not secondary and primary: return primary elif group in self._group_balancers: next_secondary = self._group_balancers[group].get_next()[0] for mysqlserver in secondary: if next_secondary == mysqlserver.uuid: return mysqlserver self.reset_cache(group=group) raise InterfaceError(errmsg.format(group=group, mode=mode)) def get_sharding_information(self, tables=None, database=None): """Get and cache the sharding information for given tables This method is fetching sharding information from MySQL Fabric and caches the result. The tables argument must be sequence of sequences contain the name of the database and table. If no database is given, the value for the database argument will be used. Examples: tables = [('salary',), ('employees',)] get_sharding_information(tables, database='employees') tables = [('salary', 'employees'), ('employees', employees)] get_sharding_information(tables) Raises InterfaceError on errors; ValueError when something is wrong with the tables argument. """ if not isinstance(tables, (list, tuple)): raise ValueError("tables should be a sequence") patterns = [] for table in tables: if not isinstance(table, (list, tuple)) and not database: raise ValueError("No database specified for table {0}".format( table)) if isinstance(table, (list, tuple)): dbase = table[1] tbl = table[0] else: dbase = database tbl = table patterns.append("{0}.{1}".format(dbase, tbl)) inst = self.get_instance() try: fset = inst.execute( 'dump', 'sharding_information', self._version_token, ','.join(patterns) ) except (Fault, socket.error) as exc: msg = "Looking up sharding information failed : {error}".format( error=str(exc)) raise InterfaceError(msg) for row in fset.rows(): self._cache.sharding_cache_table( FabricShard(row.schema_name, row.table_name, row.column_name, row.lower_bound, row.shard_id, row.type_name, row.group_id, row.global_group) ) def get_shard_server(self, tables, key, scope=SCOPE_LOCAL, mode=None): """Get MySQL server information for a particular shard Raises DatabaseError when the table is unknown or when tables are not on the same shard. ValueError is raised when there is a problem with the methods arguments. InterfaceError is raised for other errors. """ if not isinstance(tables, (list, tuple)): raise ValueError("tables should be a sequence") groups = [] for dbobj in tables: try: database, table = dbobj.split('.') except ValueError: raise ValueError( "tables should be given as .
, " "was {0}".format(dbobj)) entry = self._cache.sharding_search(database, table) if not entry: self.get_sharding_information((table,), database) entry = self._cache.sharding_search(database, table) if not entry: raise DatabaseError( errno=errorcode.ER_BAD_TABLE_ERROR, msg="Unknown table '{database}.{table}'".format( database=database, table=table)) if scope == 'GLOBAL': return self.get_group_server(entry.global_group, mode=mode) if entry.shard_type == 'RANGE': try: range_key = int(key) except ValueError: raise ValueError("Key must be an integer for RANGE") partitions = entry.keys index = partitions[bisect(partitions, range_key) - 1] partition = entry.partitioning[index] elif entry.shard_type == 'RANGE_DATETIME': if not isinstance(key, (datetime.date, datetime.datetime)): raise ValueError( "Key must be datetime.date or datetime.datetime for " "RANGE_DATETIME") index = None for partkey in entry.keys_reversed: if key >= partkey: index = partkey break try: partition = entry.partitioning[index] except KeyError: raise ValueError("Key invalid; was '{0}'".format(key)) elif entry.shard_type == 'RANGE_STRING': if not isunicode(key): raise ValueError("Key must be a unicode value") index = None for partkey in entry.keys_reversed: if key >= partkey: index = partkey break try: partition = entry.partitioning[index] except KeyError: raise ValueError("Key invalid; was '{0}'".format(key)) elif entry.shard_type == 'HASH': md5key = md5(str(key)) index = entry.keys_reversed[-1] for partkey in entry.keys_reversed: if md5key.digest() >= b16decode(partkey): index = partkey break partition = entry.partitioning[index] else: raise InterfaceError( "Unsupported sharding type {0}".format(entry.shard_type)) groups.append(partition['group']) if not all(group == groups[0] for group in groups): raise DatabaseError( "Tables are located in different shards.") return self.get_group_server(groups[0], mode=mode) def execute(self, group, command, *args, **kwargs): """Execute a Fabric command from given group This method will execute the given Fabric command from the given group using the given arguments. It returns an instance of FabricSet. Raises ValueError when group.command is not valid and raises InterfaceError when an error occurs while executing. Returns FabricSet. """ inst = self.get_instance() return inst.execute(group, command, *args, **kwargs) class FabricConnection(object): """Base Class for a class holding a connection to a MySQL Fabric server """ def __init__(self, fabric, host, port=MYSQL_FABRIC_PORT[DEFAULT_FABRIC_PROTOCOL], connect_attempts=_CNX_ATTEMPT_MAX, connect_delay=_CNX_ATTEMPT_DELAY): """Initialize""" if not isinstance(fabric, Fabric): raise ValueError("fabric must be instance of class Fabric") self._fabric = fabric self._host = host self._port = port self._connect_attempts = connect_attempts self._connect_delay = connect_delay @property def host(self): """Returns server IP or name of current Fabric connection""" return self._host @property def port(self): """Returns TCP/IP port of current Fabric connection""" return self._port @property def uuid(self): """Returns UUID of the Fabric server we are connected with""" return _fabric_server_uuid(self._host, self._port) def connect(self): """Connect with MySQL Fabric""" pass @property def is_connected(self): """Check whether connection with Fabric is valid Return True if we can still interact with the Fabric server; False if Not. Returns True or False. 
""" pass def __repr__(self): return "{class_}(host={host}, port={port})".format( class_=self.__class__, host=self._host, port=self._port, ) class FabricXMLRPCConnection(FabricConnection): """Class holding a connection to a MySQL Fabric server through XML-RPC""" def __init__(self, fabric, host, port=MYSQL_FABRIC_PORT['xmlrpc'], connect_attempts=_CNX_ATTEMPT_MAX, connect_delay=_CNX_ATTEMPT_DELAY): """Initialize""" super(FabricXMLRPCConnection, self).__init__( fabric, host, port, connect_attempts, connect_delay ) self._proxy = None @property def proxy(self): """Returns the XMLRPC Proxy of current Fabric connection""" return self._proxy @property def uri(self): """Returns the XMLRPC URI for current Fabric connection""" return _fabric_xmlrpc_uri(self._host, self._port) def _xmlrpc_get_proxy(self): """Return the XMLRPC server proxy instance to MySQL Fabric This method tries to get a valid connection to a MySQL Fabric server. Returns a XMLRPC ServerProxy instance. """ if self.is_connected: return self._proxy attempts = self._connect_attempts delay = self._connect_delay proxy = None counter = 0 while counter != attempts: counter += 1 try: if self._fabric.ssl_config: if not HAVE_SSL: raise InterfaceError("Python does not support SSL") https_handler = FabricHTTPSHandler(self._fabric.ssl_config) else: https_handler = None transport = FabricTransport(self._fabric.username, self._fabric.password, verbose=0, https_handler=https_handler) proxy = ServerProxy(self.uri, transport=transport, verbose=0) proxy._some_nonexisting_method() # pylint: disable=W0212 except Fault: # We are actually connected return proxy except socket.error as exc: if counter == attempts: raise InterfaceError( "Connection to MySQL Fabric failed ({0})".format(exc)) _LOGGER.debug( "Retrying {host}:{port}, attempts {counter}".format( host=self.host, port=self.port, counter=counter)) if delay > 0: time.sleep(delay) def connect(self): """Connect with MySQL Fabric""" self._proxy = self._xmlrpc_get_proxy() @property def is_connected(self): """Check whether connection with Fabric is valid Return True if we can still interact with the Fabric server; False if Not. Returns True or False. """ try: self._proxy._some_nonexisting_method() # pylint: disable=W0212 except Fault: return True except (TypeError, AttributeError): return False else: return False class FabricMySQLConnection(FabricConnection): """ Class holding a connection to a MySQL Fabric server through MySQL protocol """ def __init__(self, fabric, host, port=MYSQL_FABRIC_PORT['mysql'], connect_attempts=_CNX_ATTEMPT_MAX, connect_delay=_CNX_ATTEMPT_DELAY): """Initialize""" super(FabricMySQLConnection, self).__init__( fabric, host, port=port, connect_attempts=connect_attempts, connect_delay=connect_delay ) self._connection = None @property def connection(self): """Returns the MySQL RPC Connection to Fabric""" return self._connection def _get_connection(self): """Return the connection instance to MySQL Fabric through MySQL RPC This method tries to get a valid connection to a MySQL Fabric server. Returns a MySQLConnection instance. 
""" if self.is_connected: return self._connection attempts = self._connect_attempts delay = self._connect_delay counter = 0 while counter != attempts: counter += 1 try: dbconfig = { 'host': self._host, 'port': self._port, 'user': self._fabric.username, 'password': self._fabric.password } if self._fabric.ssl_config: if not HAVE_SSL: raise InterfaceError("Python does not support SSL") dbconfig['ssl_key'] = self._fabric.ssl_config['key'] dbconfig['ssl_cert'] = self._fabric.ssl_config['cert'] return MySQLConnection(**dbconfig) except AttributeError as exc: if counter == attempts: raise InterfaceError( "Connection to MySQL Fabric failed ({0})".format(exc)) _LOGGER.debug( "Retrying {host}:{port}, attempts {counter}".format( host=self.host, port=self.port, counter=counter)) if delay > 0: time.sleep(delay) def connect(self): """Connect with MySQL Fabric""" self._connection = self._get_connection() @property def is_connected(self): """Check whether connection with Fabric is valid Return True if we can still interact with the Fabric server; False if Not. Returns True or False. """ try: return self._connection.is_connected() except AttributeError: return False class MySQLFabricConnection(object): """Connection to a MySQL server through MySQL Fabric""" def __init__(self, **kwargs): """Initialize""" self._mysql_cnx = None self._fabric = None self._fabric_mysql_server = None self._mysql_config = None self._cnx_properties = {} self.reset_properties() # Validity of fabric-argument is checked in config()-method if 'fabric' not in kwargs: raise ValueError("Configuration parameters for Fabric missing") if kwargs: self.store_config(**kwargs) def __getattr__(self, attr): """Return the return value of the MySQLConnection instance""" if attr.startswith('cmd_'): raise NotSupportedError( "Calling {attr} is not supported for connections managed by " "MySQL Fabric.".format(attr=attr)) return getattr(self._mysql_cnx, attr) @property def fabric_uuid(self): """Returns the Fabric UUID of the MySQL server""" if self._fabric_mysql_server: return self._fabric_mysql_server.uuid return None @property def properties(self): """Returns connection properties""" return self._cnx_properties def reset_cache(self, group=None): """Reset cache for this connection's group""" if not group and self._fabric_mysql_server: group = self._fabric_mysql_server.group self._fabric.reset_cache(group=group) def is_connected(self): """Check whether we are connected with the MySQL server Returns True or False """ return self._mysql_cnx is not None def reset_properties(self): """Resets the connection properties This method can be called to reset the connection properties to their default values. """ self._cnx_properties = {} for key, attr in _CNX_PROPERTIES.items(): self._cnx_properties[key] = attr[2] def set_property(self, **properties): """Set one or more connection properties Arguments to the set_property() method will be used as properties. They are validated against the _CNX_PROPERTIES constant. Raise ValueError in case an invalid property is being set. TypeError is raised when the type of the value is not correct. To unset a property, set it to None. """ try: self.close() except Error: # We tried, but it's OK when we fail. 
            pass

        props = self._cnx_properties

        for name, value in properties.items():
            if name not in _CNX_PROPERTIES:
                raise ValueError(
                    "Invalid connection property {0}".format(name))
            elif value and not isinstance(value, _CNX_PROPERTIES[name][0]):
                valid_types_str = ' or '.join(
                    [atype.__name__ for atype in _CNX_PROPERTIES[name][0]])
                raise TypeError(
                    "{name} is not valid, expected {typename}".format(
                        name=name, typename=valid_types_str))

            if (name == 'group' and value
                    and (props['key'] or props['tables'])):
                raise ValueError(
                    "'group' property can not be set when 'key' or "
                    "'tables' are set")
            elif name in ('key', 'tables') and value and props['group']:
                raise ValueError(
                    "'key' and 'tables' property can not be "
                    "set together with 'group'")
            elif name == 'scope' and value not in (SCOPE_LOCAL, SCOPE_GLOBAL):
                raise ValueError("Invalid value for 'scope'")
            elif name == 'mode' and value not in (
                    MODE_READWRITE, MODE_READONLY):
                raise ValueError("Invalid value for 'mode'")

            if value is None:
                # Set the default
                props[name] = _CNX_PROPERTIES[name][2]
            else:
                props[name] = value

    def _configure_fabric(self, config):
        """Configure the Fabric connection

        The config argument can be either a dictionary containing the
        necessary information to set up the connection, or an instance
        of Fabric.
        """
        if isinstance(config, Fabric):
            self._fabric = config
        else:
            required_keys = ['host']
            for required_key in required_keys:
                if required_key not in config:
                    raise ValueError(
                        "Missing configuration parameter '{parameter}' "
                        "for fabric".format(parameter=required_key))
            host = config['host']
            protocol = config.get('protocol', DEFAULT_FABRIC_PROTOCOL)
            try:
                port = config.get('port', MYSQL_FABRIC_PORT[protocol])
            except KeyError:
                raise InterfaceError(
                    "{0} protocol is not available".format(protocol))

            server_uuid = _fabric_server_uuid(host, port)
            try:
                self._fabric = FABRICS[server_uuid]
            except KeyError:
                _LOGGER.debug("New Fabric connection")
                self._fabric = Fabric(**config)
                self._fabric.seed()
                # Cache the new connection
                FABRICS[server_uuid] = self._fabric

    def store_config(self, **kwargs):
        """Store configuration of MySQL connections to use with Fabric

        The configuration found in the dictionary kwargs is used when
        instantiating a MySQLConnection object. The host and port
        entries are used to connect to MySQL Fabric.

        Raises ValueError when the Fabric configuration parameter is not
        correct or missing; AttributeError is raised when a parameter is
        not valid.
        """
        config = kwargs.copy()

        # Configure the Fabric connection
        if 'fabric' in config:
            self._configure_fabric(config['fabric'])
            del config['fabric']

        if 'unix_socket' in config:
            _LOGGER.warning("MySQL Fabric does not use UNIX sockets.")
            config['unix_socket'] = None

        # Try to use the configuration
        test_config = config.copy()
        if 'pool_name' in test_config:
            del test_config['pool_name']
        if 'pool_size' in test_config:
            del test_config['pool_size']
        if 'pool_reset_session' in test_config:
            del test_config['pool_reset_session']
        try:
            pool = MySQLConnectionPool(pool_name=str(uuid.uuid4()))
            pool.set_config(**test_config)
        except AttributeError as err:
            raise AttributeError(
                "Connection configuration not valid: {0}".format(err))

        self._mysql_config = config

    def _connect(self):
        """Get a MySQL server based on properties and connect

        This method gets a MySQL server from MySQL Fabric using the
        properties already set through the set_property() method. You
        can specify how many times to try and the delay between attempts
        using the attempts and attempt_delay properties.
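        For example (property values are illustrative):

            cnx.set_property(group='employee_group', mode=MODE_READWRITE,
                             attempts=3, attempt_delay=2)
            cur = cnx.cursor()  # cursor() calls _connect() when needed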
Raises ValueError when there are problems with arguments or properties; InterfaceError on connectivity errors. """ if self.is_connected(): return props = self._cnx_properties attempts = props['attempts'] attempt_delay = props['attempt_delay'] dbconfig = self._mysql_config.copy() counter = 0 while counter != attempts: counter += 1 try: group = None if props['tables']: if props['scope'] == 'LOCAL' and not props['key']: raise ValueError( "Scope 'LOCAL' needs key property to be set") mysqlserver = self._fabric.get_shard_server( props['tables'], props['key'], scope=props['scope'], mode=props['mode']) elif props['group']: group = props['group'] mysqlserver = self._fabric.get_group_server( group, mode=props['mode']) else: raise ValueError( "Missing group or key and tables properties") except InterfaceError as exc: _LOGGER.debug( "Trying to get MySQL server (attempt {0}; {1})".format( counter, exc)) if counter == attempts: raise InterfaceError("Error getting connection: {0}".format( exc)) if attempt_delay > 0: _LOGGER.debug("Waiting {0}".format(attempt_delay)) time.sleep(attempt_delay) continue # Make sure we do not change the stored configuration dbconfig['host'] = mysqlserver.host dbconfig['port'] = mysqlserver.port try: self._mysql_cnx = mysql.connector.connect(**dbconfig) except Error as exc: if counter == attempts: self.reset_cache(mysqlserver.group) self._fabric.report_failure(mysqlserver.uuid, exc.errno) raise InterfaceError( "Reported faulty server to Fabric ({0})".format(exc)) if attempt_delay > 0: time.sleep(attempt_delay) continue else: self._fabric_mysql_server = mysqlserver break def disconnect(self): """Close connection to MySQL server""" try: self.rollback() self._mysql_cnx.close() except AttributeError: pass # There was no connection except Error: raise finally: self._mysql_cnx = None self._fabric_mysql_server = None close = disconnect def cursor(self, buffered=None, raw=None, prepared=None, cursor_class=None): """Instantiates and returns a cursor This method is similar to MySQLConnection.cursor() except that it checks whether the connection is available and raises an InterfaceError when not. cursor_class argument is not supported and will raise a NotSupportedError exception. Returns a MySQLCursor or subclass. """ self._connect() if cursor_class: raise NotSupportedError( "Custom cursors not supported with MySQL Fabric") if prepared: raise NotSupportedError( "Prepared Statements are not supported with MySQL Fabric") if self._unread_result is True: raise InternalError("Unread result found.") buffered = buffered or self._buffered raw = raw or self._raw cursor_type = 0 if buffered is True: cursor_type |= 1 if raw is True: cursor_type |= 2 types = ( MySQLCursor, # 0 MySQLCursorBuffered, MySQLCursorRaw, MySQLCursorBufferedRaw, ) return (types[cursor_type])(self) def handle_mysql_error(self, exc): """Handles MySQL errors This method takes a mysql.connector.errors.Error exception and checks the error code. Based on the value, it takes certain actions such as clearing the cache. """ if exc.errno in RESET_CACHE_ON_ERROR: self.reset_cache() self.disconnect() raise MySQLFabricError( "Temporary error ({error}); " "retry transaction".format(error=str(exc))) raise exc def commit(self): """Commit current transaction Raises whatever MySQLConnection.commit() raises, but raises MySQLFabricError when MySQL returns error ER_OPTION_PREVENTS_STATEMENT. 
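For illustration, a retry pattern built on this behavior (the retry count of 3 and the replay_transaction() helper are hypothetical):

    for attempt in range(3):
        try:
            cnx.commit()
            break
        except MySQLFabricError:
            # Temporary error: the Fabric cache was reset and the
            # connection dropped; replay the transaction and retry.
            replay_transaction(cnx)  # hypothetical helper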
""" try: self._mysql_cnx.commit() except Error as exc: self.handle_mysql_error(exc) def rollback(self): """Rollback current transaction Raises whatever MySQLConnection.rollback() raises, but raises MySQLFabricError when MySQL returns error ER_OPTION_PREVENTS_STATEMENT. """ try: self._mysql_cnx.rollback() except Error as exc: self.handle_mysql_error(exc) def cmd_query(self, statement): """Send a statement to the MySQL server Raises whatever MySQLConnection.cmd_query() raises, but raises MySQLFabricError when MySQL returns error ER_OPTION_PREVENTS_STATEMENT. Returns a dictionary. """ self._connect() try: return self._mysql_cnx.cmd_query(statement) except Error as exc: self.handle_mysql_error(exc) def cmd_query_iter(self, statements): """Send one or more statements to the MySQL server Raises whatever MySQLConnection.cmd_query_iter() raises, but raises MySQLFabricError when MySQL returns error ER_OPTION_PREVENTS_STATEMENT. Returns a dictionary. """ self._connect() try: return self._mysql_cnx.cmd_query_iter(statements) except Error as exc: self.handle_mysql_error(exc) mysql-utilities-1.6.4/mysql/connector/fabric/balancing.py0000644001577100752670000001141112717544565023255 0ustar pb2usercommon# MySQL Connector/Python - MySQL driver written in Python. # Copyright (c) 2013, 2014, Oracle and/or its affiliates. All rights reserved. # MySQL Connector/Python is licensed under the terms of the GPLv2 # , like most # MySQL Connectors. There are special exceptions to the terms and # conditions of the GPLv2 as it is applied to this software, see the # FOSS License Exception # . # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA """Implementing load balancing""" import decimal def _calc_ratio(part, whole): """Calculate ratio Returns int """ return int((part/whole*100).quantize( decimal.Decimal('1'), rounding=decimal.ROUND_HALF_DOWN)) class BaseScheduling(object): """Base class for all scheduling classes dealing with load balancing""" def __init__(self): """Initialize""" self._members = [] self._ratios = [] def set_members(self, *args): """Set members and ratios This methods sets the members using the arguments passed. Each argument must be a sequence where the second item is the weight. The first element is an identifier. For example: ('server1', 0.6), ('server2', 0.8) Setting members means that the load will be reset. If the members are the same as previously set, nothing will be reset or set. If no arguments were given the members will be set to an empty list. Raises ValueError when weight can't be converted to a Decimal. 
""" raise NotImplementedError def get_next(self): """Returns the next member""" raise NotImplementedError @property def members(self): """Returns the members of this loadbalancer""" return self._members @property def ratios(self): """Returns the ratios for all members""" return self._ratios class WeightedRoundRobin(BaseScheduling): """Class for doing Weighted Round Robin balancing""" def __init__(self, *args): """Initializing""" super(WeightedRoundRobin, self).__init__() self._load = [] self._next_member = 0 self._nr_members = 0 if args: self.set_members(*args) @property def load(self): """Returns the current load""" return self._load def set_members(self, *args): if not args: # Reset members if nothing was given self._members = [] return new_members = [] for member in args: member = list(member) try: member[1] = decimal.Decimal(str(member[1])) except decimal.InvalidOperation: raise ValueError("Member '{member}' is invalid".format( member=member)) new_members.append(tuple(member)) new_members.sort(key=lambda x: x[1], reverse=True) if self._members == new_members: return self._members = new_members self._nr_members = len(new_members) min_weight = min(i[1] for i in self._members) self._ratios = [] for _, weight in self._members: self._ratios.append(int(weight/min_weight * 100)) self.reset() def reset(self): """Reset the load""" self._next_member = 0 self._load = [0] * self._nr_members def get_next(self): """Returns the next member""" if self._ratios == self._load: self.reset() # Figure out the member to return current = self._next_member while self._load[current] == self._ratios[current]: current = (current + 1) % self._nr_members # Update the load and set next member self._load[current] += 1 self._next_member = (current + 1) % self._nr_members # Return current return self._members[current] def __repr__(self): return "{class_}(load={load}, ratios={ratios})".format( class_=self.__class__, load=self.load, ratios=self.ratios ) def __eq__(self, other): return self._members == other.members mysql-utilities-1.6.4/mysql/connector/dbapi.py0000644001577100752670000000443212717544565021175 0ustar pb2usercommon# MySQL Connector/Python - MySQL driver written in Python. # Copyright (c) 2009, 2014, Oracle and/or its affiliates. All rights reserved. # MySQL Connector/Python is licensed under the terms of the GPLv2 # , like most # MySQL Connectors. There are special exceptions to the terms and # conditions of the GPLv2 as it is applied to this software, see the # FOSS License Exception # . # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA """ This module implements some constructors and singletons as required by the DB API v2.0 (PEP-249). """ # Python Db API v2 apilevel = '2.0' threadsafety = 1 paramstyle = 'pyformat' import time import datetime from . 
import constants class _DBAPITypeObject(object): def __init__(self, *values): self.values = values def __eq__(self, other): if other in self.values: return True else: return False def __ne__(self, other): if other in self.values: return False else: return True Date = datetime.date Time = datetime.time Timestamp = datetime.datetime def DateFromTicks(ticks): return Date(*time.localtime(ticks)[:3]) def TimeFromTicks(ticks): return Time(*time.localtime(ticks)[3:6]) def TimestampFromTicks(ticks): return Timestamp(*time.localtime(ticks)[:6]) Binary = bytes STRING = _DBAPITypeObject(*constants.FieldType.get_string_types()) BINARY = _DBAPITypeObject(*constants.FieldType.get_binary_types()) NUMBER = _DBAPITypeObject(*constants.FieldType.get_number_types()) DATETIME = _DBAPITypeObject(*constants.FieldType.get_timestamp_types()) ROWID = _DBAPITypeObject() mysql-utilities-1.6.4/mysql/connector/pooling.py0000644001577100752670000003013512717544565021564 0ustar pb2usercommon# MySQL Connector/Python - MySQL driver written in Python. # Copyright (c) 2013, 2014, Oracle and/or its affiliates. All rights reserved. # MySQL Connector/Python is licensed under the terms of the GPLv2 # , like most # MySQL Connectors. There are special exceptions to the terms and # conditions of the GPLv2 as it is applied to this software, see the # FOSS License Exception # . # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA """Implementing pooling of connections to MySQL servers. """ import re from uuid import uuid4 # pylint: disable=F0401 try: import queue except ImportError: # Python v2 import Queue as queue # pylint: enable=F0401 import threading from . import errors from .connection import MySQLConnection CONNECTION_POOL_LOCK = threading.RLock() CNX_POOL_MAXSIZE = 32 CNX_POOL_MAXNAMESIZE = 64 CNX_POOL_NAMEREGEX = re.compile(r'[^a-zA-Z0-9._:\-*$#]') def generate_pool_name(**kwargs): """Generate a pool name This function takes keyword arguments, usually the connection arguments for MySQLConnection, and tries to generate a name for a pool. Raises PoolError when no name can be generated. Returns a string. """ parts = [] for key in ('host', 'port', 'user', 'database'): try: parts.append(str(kwargs[key])) except KeyError: pass if not parts: raise errors.PoolError( "Failed generating pool name; specify pool_name") return '_'.join(parts) class PooledMySQLConnection(object): """Class holding a MySQL Connection in a pool PooledMySQLConnection is used by MySQLConnectionPool to return an instance holding a MySQL connection. It works like a MySQLConnection except for methods like close() and config(). The close()-method will add the connection back to the pool rather than disconnecting from the MySQL server. Configuring the connection have to be done through the MySQLConnectionPool method set_config(). Using config() on pooled connection will raise a PoolError. """ def __init__(self, pool, cnx): """Initialize The pool argument must be an instance of MySQLConnectionPoll. 
cnx is an instance of MySQLConnection. """ if not isinstance(pool, MySQLConnectionPool): raise AttributeError( "pool should be a MySQLConnectionPool") if not isinstance(cnx, MySQLConnection): raise AttributeError( "cnx should be a MySQLConnection") self._cnx_pool = pool self._cnx = cnx def __getattr__(self, attr): """Calls attributes of the MySQLConnection instance""" return getattr(self._cnx, attr) def close(self): """Do not close, but add connection back to pool The close() method does not close the connection with the MySQL server. The connection is added back to the pool so it can be reused. When the pool is configured to reset the session, the session state will be cleared by re-authenticating the user. """ cnx = self._cnx if self._cnx_pool.reset_session: cnx.reset_session() self._cnx_pool.add_connection(cnx) self._cnx = None def config(self, **kwargs): """Configuration is done through the pool""" raise errors.PoolError( "Configuration for pooled connections should " "be done through the pool itself." ) @property def pool_name(self): """Return the name of the connection pool""" return self._cnx_pool.pool_name class MySQLConnectionPool(object): """Class defining a pool of MySQL connections""" def __init__(self, pool_size=5, pool_name=None, pool_reset_session=True, **kwargs): """Initialize Initialize a MySQL connection pool with a maximum number of connections set to pool_size. The rest of the keyword arguments, kwargs, are configuration arguments for MySQLConnection instances. """ self._pool_size = None self._pool_name = None self._reset_session = pool_reset_session self._set_pool_size(pool_size) self._set_pool_name(pool_name or generate_pool_name(**kwargs)) self._cnx_config = {} self._cnx_queue = queue.Queue(self._pool_size) self._config_version = uuid4() if kwargs: self.set_config(**kwargs) cnt = 0 while cnt < self._pool_size: self.add_connection() cnt += 1 @property def pool_name(self): """Return the name of the connection pool""" return self._pool_name @property def pool_size(self): """Return number of connections managed by the pool""" return self._pool_size @property def reset_session(self): """Return whether to reset session""" return self._reset_session def set_config(self, **kwargs): """Set the connection configuration for MySQLConnection instances This method sets the configuration used for creating MySQLConnection instances. See MySQLConnection for valid connection arguments. Raises PoolError when a connection argument is not valid, missing or not supported by MySQLConnection. """ if not kwargs: return with CONNECTION_POOL_LOCK: try: test_cnx = MySQLConnection() test_cnx.config(**kwargs) self._cnx_config = kwargs self._config_version = uuid4() except AttributeError as err: raise errors.PoolError( "Connection configuration not valid: {0}".format(err)) def _set_pool_size(self, pool_size): """Set the size of the pool This method sets the size of the pool but it will not resize the pool. Raises an AttributeError when the pool_size is not valid. Invalid sizes are 0, negative values, or values higher than pooling.CNX_POOL_MAXSIZE. """ if pool_size <= 0 or pool_size > CNX_POOL_MAXSIZE: raise AttributeError( "Pool size should be higher than 0 and " "lower or equal to {0}".format(CNX_POOL_MAXSIZE)) self._pool_size = pool_size def _set_pool_name(self, pool_name): r"""Set the name of the pool This method checks the validity and sets the name of the pool. Raises an AttributeError when pool_name contains illegal characters ([^a-zA-Z0-9._\-*$#]) or is longer than pooling.CNX_POOL_MAXNAMESIZE.
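As an illustration of the naming rules (the host, user and pool names are made up):

    generate_pool_name(host='db1', port=3306, user='app')
    # -> 'db1_3306_app'
    MySQLConnectionPool(pool_size=3, pool_name='my_pool/bad',
                        host='db1', user='app')
    # -> AttributeError: '/' is not an allowed character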
""" if CNX_POOL_NAMEREGEX.search(pool_name): raise AttributeError( "Pool name '{0}' contains illegal characters".format(pool_name)) if len(pool_name) > CNX_POOL_MAXNAMESIZE: raise AttributeError( "Pool name '{0}' is too long".format(pool_name)) self._pool_name = pool_name def _queue_connection(self, cnx): """Put connection back in the queue This method is putting a connection back in the queue. It will not acquire a lock as the methods using _queue_connection() will have it set. Raises PoolError on errors. """ if not isinstance(cnx, MySQLConnection): raise errors.PoolError( "Connection instance not subclass of MySQLConnection.") try: self._cnx_queue.put(cnx, block=False) except queue.Full: errors.PoolError("Failed adding connection; queue is full") def add_connection(self, cnx=None): """Add a connection to the pool This method instantiates a MySQLConnection using the configuration passed when initializing the MySQLConnectionPool instance or using the set_config() method. If cnx is a MySQLConnection instance, it will be added to the queue. Raises PoolError when no configuration is set, when no more connection can be added (maximum reached) or when the connection can not be instantiated. """ with CONNECTION_POOL_LOCK: if not self._cnx_config: raise errors.PoolError( "Connection configuration not available") if self._cnx_queue.full(): raise errors.PoolError( "Failed adding connection; queue is full") if not cnx: cnx = MySQLConnection(**self._cnx_config) try: if (self._reset_session and self._cnx_config['compress'] and cnx.get_server_version() < (5, 7, 3)): raise errors.NotSupportedError("Pool reset session is " "not supported with " "compression for MySQL " "server version 5.7.2 " "or earlier.") except KeyError: pass # pylint: disable=W0201,W0212 cnx._pool_config_version = self._config_version # pylint: enable=W0201,W0212 else: if not isinstance(cnx, MySQLConnection): raise errors.PoolError( "Connection instance not subclass of MySQLConnection.") self._queue_connection(cnx) def get_connection(self): """Get a connection from the pool This method returns an PooledMySQLConnection instance which has a reference to the pool that created it, and the next available MySQL connection. When the MySQL connection is not connect, a reconnect is attempted. Raises PoolError on errors. Returns a PooledMySQLConnection instance. """ with CONNECTION_POOL_LOCK: try: cnx = self._cnx_queue.get(block=False) except queue.Empty: raise errors.PoolError( "Failed getting connection; pool exhausted") # pylint: disable=W0201,W0212 if not cnx.is_connected() \ or self._config_version != cnx._pool_config_version: cnx.config(**self._cnx_config) try: cnx.reconnect() except errors.InterfaceError: # Failed to reconnect, give connection back to pool self._queue_connection(cnx) raise cnx._pool_config_version = self._config_version # pylint: enable=W0201,W0212 return PooledMySQLConnection(self, cnx) def _remove_connections(self): """Close all connections This method closes all connections. It returns the number of connections it closed. Used mostly for tests. Returns int. 
""" with CONNECTION_POOL_LOCK: cnt = 0 cnxq = self._cnx_queue while cnxq.qsize(): try: cnx = cnxq.get(block=False) cnx.disconnect() cnt += 1 except queue.Empty: return cnt except errors.PoolError: raise except errors.Error: # Any other error when closing means connection is closed pass return cnt mysql-utilities-1.6.4/mysql/connector/network.py0000644001577100752670000004224112717544565021607 0ustar pb2usercommon# MySQL Connector/Python - MySQL driver written in Python. # Copyright (c) 2012, 2015, Oracle and/or its affiliates. All rights reserved. # MySQL Connector/Python is licensed under the terms of the GPLv2 # , like most # MySQL Connectors. There are special exceptions to the terms and # conditions of the GPLv2 as it is applied to this software, see the # FOSS License Exception # . # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA """Module implementing low-level socket communication with MySQL servers. """ from collections import deque import socket import struct import sys import zlib try: import ssl except: # If import fails, we don't have SSL support. pass from . import constants, errors from .catch23 import PY2, init_bytearray, struct_unpack def _strioerror(err): """Reformat the IOError error message This function reformats the IOError error message. 
""" if not err.errno: return str(err) return '{errno} {strerr}'.format(errno=err.errno, strerr=err.strerror) def _prepare_packets(buf, pktnr): """Prepare a packet for sending to the MySQL server""" pkts = [] pllen = len(buf) maxpktlen = constants.MAX_PACKET_LENGTH while pllen > maxpktlen: pkts.append(b'\xff\xff\xff' + struct.pack(' 255: self._packet_number = 0 return self._packet_number def open_connection(self): """Open the socket""" raise NotImplementedError def get_address(self): """Get the location of the socket""" raise NotImplementedError def shutdown(self): """Shut down the socket before closing it""" try: self.sock.shutdown(socket.SHUT_RDWR) self.sock.close() del self._packet_queue except (socket.error, AttributeError): pass def close_connection(self): """Close the socket""" try: self.sock.close() del self._packet_queue except (socket.error, AttributeError): pass def send_plain(self, buf, packet_number=None): """Send packets to the MySQL server""" if packet_number is None: self.next_packet_number # pylint: disable=W0104 else: self._packet_number = packet_number packets = _prepare_packets(buf, self._packet_number) for packet in packets: try: if PY2: self.sock.sendall(buffer(packet)) # pylint: disable=E0602 else: self.sock.sendall(packet) except IOError as err: raise errors.OperationalError( errno=2055, values=(self.get_address(), _strioerror(err))) except AttributeError: raise errors.OperationalError(errno=2006) send = send_plain def send_compressed(self, buf, packet_number=None): """Send compressed packets to the MySQL server""" if packet_number is None: self.next_packet_number # pylint: disable=W0104 else: self._packet_number = packet_number pktnr = self._packet_number pllen = len(buf) zpkts = [] maxpktlen = constants.MAX_PACKET_LENGTH if pllen > maxpktlen: pkts = _prepare_packets(buf, pktnr) if PY2: tmpbuf = bytearray() for pkt in pkts: tmpbuf += pkt tmpbuf = buffer(tmpbuf) # pylint: disable=E0602 else: tmpbuf = b''.join(pkts) del pkts seqid = 0 zbuf = zlib.compress(tmpbuf[:16384]) header = (struct.pack(' maxpktlen: zbuf = zlib.compress(tmpbuf[:maxpktlen]) header = (struct.pack(' 50: zbuf = zlib.compress(pkt) zpkts.append(struct.pack(' 0: raise errors.InterfaceError(errno=2013) packet_view = packet_view[read:] rest -= read return packet except IOError as err: raise errors.OperationalError( errno=2055, values=(self.get_address(), _strioerror(err))) def recv_py26_plain(self): """Receive packets from the MySQL server""" try: # Read the header of the MySQL packet, 4 bytes header = bytearray(b'') header_len = 0 while header_len < 4: chunk = self.sock.recv(4 - header_len) if not chunk: raise errors.InterfaceError(errno=2013) header += chunk header_len = len(header) # Save the packet number and payload length self._packet_number = header[3] payload_len = struct_unpack(" 0: chunk = self.sock.recv(rest) if not chunk: raise errors.InterfaceError(errno=2013) payload += chunk rest = payload_len - len(payload) return header + payload except IOError as err: raise errors.OperationalError( errno=2055, values=(self.get_address(), _strioerror(err))) if sys.version_info[0:2] == (2, 6): recv = recv_py26_plain recv_plain = recv_py26_plain else: recv = recv_plain def _split_zipped_payload(self, packet_bunch): """Split compressed payload""" while packet_bunch: payload_length = struct_unpack(", like most # MySQL Connectors. There are special exceptions to the terms and # conditions of the GPLv2 as it is applied to this software, see the # FOSS License Exception # . 
# # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA """Converting MySQL and Python types """ import datetime import time from decimal import Decimal from .constants import FieldType, FieldFlag, CharacterSet from .catch23 import PY2, NUMERIC_TYPES, struct_unpack from .custom_types import HexLiteral class MySQLConverterBase(object): """Base class for conversion classes All class dealing with converting to and from MySQL data types must be a subclass of this class. """ def __init__(self, charset='utf8', use_unicode=True): self.python_types = None self.mysql_types = None self.charset = None self.charset_id = 0 self.use_unicode = None self.set_charset(charset) self.set_unicode(use_unicode) self._cache_field_types = {} def set_charset(self, charset): """Set character set""" if charset == 'utf8mb4': charset = 'utf8' if charset is not None: self.charset = charset else: # default to utf8 self.charset = 'utf8' self.charset_id = CharacterSet.get_charset_info(self.charset)[0] def set_unicode(self, value=True): """Set whether to use Unicode""" self.use_unicode = value def to_mysql(self, value): """Convert Python data type to MySQL""" type_name = value.__class__.__name__.lower() try: return getattr(self, "_{0}_to_mysql".format(type_name))(value) except AttributeError: return value def to_python(self, vtype, value): """Convert MySQL data type to Python""" if (value == b'\x00' or value is None) and vtype[1] != FieldType.BIT: # Don't go further when we hit a NULL value return None if not self._cache_field_types: self._cache_field_types = {} for name, info in FieldType.desc.items(): try: self._cache_field_types[info[0]] = getattr( self, '_{0}_to_python'.format(name)) except AttributeError: # We ignore field types which has no method pass try: return self._cache_field_types[vtype[1]](value, vtype) except KeyError: return value def escape(self, buf): """Escape buffer for sending to MySQL""" return buf def quote(self, buf): """Quote buffer for sending to MySQL""" return str(buf) class MySQLConverter(MySQLConverterBase): """Default conversion class for MySQL Connector/Python. o escape method: for escaping values send to MySQL o quoting method: for quoting values send to MySQL in statements o conversion mapping: maps Python and MySQL data types to function for converting them. Whenever one needs to convert values differently, a converter_class argument can be given while instantiating a new connection like cnx.connect(converter_class=CustomMySQLConverterClass). """ def __init__(self, charset=None, use_unicode=True): MySQLConverterBase.__init__(self, charset, use_unicode) self._cache_field_types = {} def escape(self, value): """ Escapes special characters as they are expected to by when MySQL receives them. As found in MySQL source mysys/charset.c Returns the value if not a string, or the escaped string. 
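A short illustration of the rules above (bytes in, bytes out):

    conv = MySQLConverter()
    conv.escape(b"it's a \\ backslash\n")
    # -> b"it\\'s a \\\\ backslash\\n"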
""" if value is None: return value elif isinstance(value, NUMERIC_TYPES): return value if isinstance(value, (bytes, bytearray)): value = value.replace(b'\\', b'\\\\') value = value.replace(b'\n', b'\\n') value = value.replace(b'\r', b'\\r') value = value.replace(b'\047', b'\134\047') # single quotes value = value.replace(b'\042', b'\134\042') # double quotes value = value.replace(b'\032', b'\134\032') # for Win32 else: value = value.replace('\\', '\\\\') value = value.replace('\n', '\\n') value = value.replace('\r', '\\r') value = value.replace('\047', '\134\047') # single quotes value = value.replace('\042', '\134\042') # double quotes value = value.replace('\032', '\134\032') # for Win32 return value def quote(self, buf): """ Quote the parameters for commands. General rules: o numbers are returns as bytes using ascii codec o None is returned as bytearray(b'NULL') o Everything else is single quoted '' Returns a bytearray object. """ if isinstance(buf, NUMERIC_TYPES): if PY2: if isinstance(buf, float): return repr(buf) else: return str(buf) else: return str(buf).encode('ascii') elif isinstance(buf, type(None)): return bytearray(b"NULL") else: return bytearray(b"'" + buf + b"'") def to_mysql(self, value): """Convert Python data type to MySQL""" type_name = value.__class__.__name__.lower() try: return getattr(self, "_{0}_to_mysql".format(type_name))(value) except AttributeError: raise TypeError("Python '{0}' cannot be converted to a " "MySQL type".format(type_name)) def to_python(self, vtype, value): """Convert MySQL data type to Python""" if value == 0 and vtype[1] != FieldType.BIT: # \x00 # Don't go further when we hit a NULL value return None if value is None: return None if not self._cache_field_types: self._cache_field_types = {} for name, info in FieldType.desc.items(): try: self._cache_field_types[info[0]] = getattr( self, '_{0}_to_python'.format(name)) except AttributeError: # We ignore field types which has no method pass try: return self._cache_field_types[vtype[1]](value, vtype) except KeyError: # If one type is not defined, we just return the value as str try: return value.decode('utf-8') except UnicodeDecodeError: return value except ValueError as err: raise ValueError("%s (field %s)" % (err, vtype[0])) except TypeError as err: raise TypeError("%s (field %s)" % (err, vtype[0])) except: raise def _int_to_mysql(self, value): """Convert value to int""" return int(value) def _long_to_mysql(self, value): """Convert value to int""" return int(value) def _float_to_mysql(self, value): """Convert value to float""" return float(value) def _str_to_mysql(self, value): """Convert value to string""" if PY2: return str(value) return self._unicode_to_mysql(value) def _unicode_to_mysql(self, value): """Convert unicode""" charset = self.charset charset_id = self.charset_id if charset == 'binary': charset = 'utf8' charset_id = CharacterSet.get_charset_info(charset)[0] encoded = value.encode(charset) if charset_id in CharacterSet.slash_charsets: if b'\x5c' in encoded: return HexLiteral(value, charset) return encoded def _bytes_to_mysql(self, value): """Convert value to bytes""" return value def _bytearray_to_mysql(self, value): """Convert value to bytes""" return str(value) def _bool_to_mysql(self, value): """Convert value to boolean""" if value: return 1 else: return 0 def _nonetype_to_mysql(self, value): """ This would return what None would be in MySQL, but instead we leave it None and return it right away. The actual conversion from None to NULL happens in the quoting functionality. 
Return None. """ return None def _datetime_to_mysql(self, value): """ Converts a datetime instance to a string suitable for MySQL. The returned string has format: %Y-%m-%d %H:%M:%S[.%f] If the instance isn't a datetime.datetime type, it returns None. Returns a bytes. """ if value.microsecond: fmt = '{0:d}-{1:02d}-{2:02d} {3:02d}:{4:02d}:{5:02d}.{6:06d}' return fmt.format( value.year, value.month, value.day, value.hour, value.minute, value.second, value.microsecond).encode('ascii') fmt = '{0:d}-{1:02d}-{2:02d} {3:02d}:{4:02d}:{5:02d}' return fmt.format( value.year, value.month, value.day, value.hour, value.minute, value.second).encode('ascii') def _date_to_mysql(self, value): """ Converts a date instance to a string suitable for MySQL. The returned string has format: %Y-%m-%d If the instance isn't a datetime.date type, it returns None. Returns a bytes. """ return '{0:d}-{1:02d}-{2:02d}'.format(value.year, value.month, value.day).encode('ascii') def _time_to_mysql(self, value): """ Converts a time instance to a string suitable for MySQL. The returned string has format: %H:%M:%S[.%f] If the instance isn't a datetime.time type, it returns None. Returns a bytes. """ if value.microsecond: return value.strftime('%H:%M:%S.%f').encode('ascii') return value.strftime('%H:%M:%S').encode('ascii') def _struct_time_to_mysql(self, value): """ Converts a time.struct_time sequence to a string suitable for MySQL. The returned string has format: %Y-%m-%d %H:%M:%S Returns a bytes or None when not valid. """ return time.strftime('%Y-%m-%d %H:%M:%S', value).encode('ascii') def _timedelta_to_mysql(self, value): """ Converts a timedelta instance to a string suitable for MySQL. The returned string has format: %H:%M:%S Returns a bytes. """ seconds = abs(value.days * 86400 + value.seconds) if value.microseconds: fmt = '{0:02d}:{1:02d}:{2:02d}.{3:06d}' if value.days < 0: mcs = 1000000 - value.microseconds seconds -= 1 else: mcs = value.microseconds else: fmt = '{0:02d}:{1:02d}:{2:02d}' if value.days < 0: fmt = '-' + fmt (hours, remainder) = divmod(seconds, 3600) (mins, secs) = divmod(remainder, 60) if value.microseconds: result = fmt.format(hours, mins, secs, mcs) else: result = fmt.format(hours, mins, secs) if PY2: return result else: return result.encode('ascii') def _decimal_to_mysql(self, value): """ Converts a decimal.Decimal instance to a string suitable for MySQL. Returns a bytes or None when not valid. """ if isinstance(value, Decimal): return str(value).encode('ascii') return None def row_to_python(self, row, fields): """Convert a MySQL text result row to Python types The row argument is a sequence containing the text result returned by a MySQL server. Each value of the row is converted using the field type information in the fields argument. Returns a tuple.
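A sketch with hand-built field descriptors (the 8-item tuples mirror what the protocol parser produces; names and values are illustrative):

    conv = MySQLConverter()
    fields = [('id', FieldType.LONGLONG, None, None, None, None, 0, 0),
              ('name', FieldType.VAR_STRING, None, None, None, None, 0, 0)]
    conv.row_to_python((b'42', b'Ada'), fields)   # -> (42, 'Ada')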
""" i = 0 result = [None]*len(fields) if not self._cache_field_types: self._cache_field_types = {} for name, info in FieldType.desc.items(): try: self._cache_field_types[info[0]] = getattr( self, '_{0}_to_python'.format(name)) except AttributeError: # We ignore field types which has no method pass for field in fields: field_type = field[1] if (row[i] == 0 and field_type != FieldType.BIT) or row[i] is None: # Don't convert NULL value i += 1 continue try: result[i] = self._cache_field_types[field_type](row[i], field) except KeyError: # If one type is not defined, we just return the value as str try: result[i] = row[i].decode('utf-8') except UnicodeDecodeError: result[i] = row[i] except (ValueError, TypeError) as err: err.message = "{0} (field {1})".format(str(err), field[0]) raise i += 1 return tuple(result) def _FLOAT_to_python(self, value, desc=None): # pylint: disable=C0103 """ Returns value as float type. """ return float(value) _DOUBLE_to_python = _FLOAT_to_python def _INT_to_python(self, value, desc=None): # pylint: disable=C0103 """ Returns value as int type. """ return int(value) _TINY_to_python = _INT_to_python _SHORT_to_python = _INT_to_python _INT24_to_python = _INT_to_python _LONG_to_python = _INT_to_python _LONGLONG_to_python = _INT_to_python def _DECIMAL_to_python(self, value, desc=None): # pylint: disable=C0103 """ Returns value as a decimal.Decimal. """ val = value.decode(self.charset) return Decimal(val) _NEWDECIMAL_to_python = _DECIMAL_to_python def _str(self, value, desc=None): """ Returns value as str type. """ return str(value) def _BIT_to_python(self, value, dsc=None): # pylint: disable=C0103 """Returns BIT columntype as integer""" int_val = value if len(int_val) < 8: int_val = b'\x00' * (8 - len(int_val)) + int_val return struct_unpack('>Q', int_val)[0] def _DATE_to_python(self, value, dsc=None): # pylint: disable=C0103 """ Returns DATE column type as datetime.date type. """ try: parts = value.split(b'-') return datetime.date(int(parts[0]), int(parts[1]), int(parts[2])) except ValueError: return None _NEWDATE_to_python = _DATE_to_python def _TIME_to_python(self, value, dsc=None): # pylint: disable=C0103 """ Returns TIME column type as datetime.time type. """ time_val = None try: (hms, mcs) = value.split(b'.') mcs = int(mcs.ljust(6, b'0')) except ValueError: hms = value mcs = 0 try: (hours, mins, secs) = [int(d) for d in hms.split(b':')] if value[0] == 45 or value[0] == '-': # if PY3 or PY2 mins, secs, mcs = -mins, -secs, -mcs time_val = datetime.timedelta(hours=hours, minutes=mins, seconds=secs, microseconds=mcs) except ValueError: raise ValueError( "Could not convert {0} to python datetime.timedelta".format( value)) else: return time_val def _DATETIME_to_python(self, value, dsc=None): # pylint: disable=C0103 """ Returns DATETIME column type as datetime.datetime type. 
""" datetime_val = None try: (date_, time_) = value.split(b' ') if len(time_) > 8: (hms, mcs) = time_.split(b'.') mcs = int(mcs.ljust(6, b'0')) else: hms = time_ mcs = 0 dtval = [int(i) for i in date_.split(b'-')] + \ [int(i) for i in hms.split(b':')] + [mcs, ] datetime_val = datetime.datetime(*dtval) except ValueError: datetime_val = None return datetime_val _TIMESTAMP_to_python = _DATETIME_to_python def _YEAR_to_python(self, value, desc=None): # pylint: disable=C0103 """Returns YEAR column type as integer""" try: year = int(value) except ValueError: raise ValueError("Failed converting YEAR to int (%s)" % value) return year def _SET_to_python(self, value, dsc=None): # pylint: disable=C0103 """Returns SET column type as set Actually, MySQL protocol sees a SET as a string type field. So this code isn't called directly, but used by STRING_to_python() method. Returns SET column type as a set. """ set_type = None val = value.decode(self.charset) if not val: return set() try: set_type = set(val.split(',')) except ValueError: raise ValueError("Could not convert set %s to a sequence." % value) return set_type def _STRING_to_python(self, value, dsc=None): # pylint: disable=C0103 """ Note that a SET is a string too, but using the FieldFlag we can see whether we have to split it. Returns string typed columns as string type. """ if dsc is not None: # Check if we deal with a SET if dsc[7] & FieldFlag.SET: return self._SET_to_python(value, dsc) if dsc[7] & FieldFlag.BINARY: return value if self.charset == 'binary': return value if isinstance(value, (bytes, bytearray)) and self.use_unicode: return value.decode(self.charset) return value _VAR_STRING_to_python = _STRING_to_python def _BLOB_to_python(self, value, dsc=None): # pylint: disable=C0103 """Convert BLOB data type to Python""" if dsc is not None: if dsc[7] & FieldFlag.BINARY: if PY2: return value else: return bytes(value) return self._STRING_to_python(value, dsc) _LONG_BLOB_to_python = _BLOB_to_python _MEDIUM_BLOB_to_python = _BLOB_to_python _TINY_BLOB_to_python = _BLOB_to_python mysql-utilities-1.6.4/mysql/connector/connection.py0000644001577100752670000011360712717544565022262 0ustar pb2usercommon# MySQL Connector/Python - MySQL driver written in Python. # Copyright (c) 2009, 2015, Oracle and/or its affiliates. All rights reserved. # MySQL Connector/Python is licensed under the terms of the GPLv2 # , like most # MySQL Connectors. There are special exceptions to the terms and # conditions of the GPLv2 as it is applied to this software, see the # FOSS License Exception # . # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA """Implementing communication with MySQL servers. """ from io import IOBase import os import time from .authentication import get_auth_plugin from .catch23 import PY2, isstr from .constants import ( ClientFlag, ServerCmd, ServerFlag, flag_is_set, ShutdownType, NET_BUFFER_LENGTH ) from . 
import errors from .conversion import MySQLConverter from .cursor import ( CursorBase, MySQLCursor, MySQLCursorRaw, MySQLCursorBuffered, MySQLCursorBufferedRaw, MySQLCursorPrepared, MySQLCursorDict, MySQLCursorBufferedDict, MySQLCursorNamedTuple, MySQLCursorBufferedNamedTuple) from .network import MySQLUnixSocket, MySQLTCPSocket from .protocol import MySQLProtocol from .utils import int4store from .abstracts import MySQLConnectionAbstract class MySQLConnection(MySQLConnectionAbstract): """Connection to a MySQL Server""" def __init__(self, *args, **kwargs): self._protocol = None self._socket = None self._handshake = None super(MySQLConnection, self).__init__(*args, **kwargs) self._converter_class = MySQLConverter self._client_flags = ClientFlag.get_default() self._charset_id = 33 self._sql_mode = None self._time_zone = None self._autocommit = False self._user = '' self._password = '' self._database = '' self._host = '127.0.0.1' self._port = 3306 self._unix_socket = None self._client_host = '' self._client_port = 0 self._ssl = {} self._force_ipv6 = False self._use_unicode = True self._get_warnings = False self._raise_on_warnings = False self._connection_timeout = None self._buffered = False self._unread_result = False self._have_next_result = False self._raw = False self._in_transaction = False self._prepared_statements = None self._ssl_active = False self._auth_plugin = None self._pool_config_version = None if len(kwargs) > 0: self.connect(**kwargs) def _do_handshake(self): """Get the handshake from the MySQL server""" packet = self._socket.recv() if packet[4] == 255: raise errors.get_exception(packet) self._handshake = None try: handshake = self._protocol.parse_handshake(packet) except Exception as err: raise errors.InterfaceError( 'Failed parsing handshake; {0}'.format(err)) self._server_version = self._check_server_version( handshake['server_version_original']) if handshake['capabilities'] & ClientFlag.PLUGIN_AUTH: self.set_client_flags([ClientFlag.PLUGIN_AUTH]) self._handshake = handshake def _do_auth(self, username=None, password=None, database=None, client_flags=0, charset=33, ssl_options=None): """Authenticate with the MySQL server Authentication happens in two parts. We first send a response to the handshake. The MySQL server will then send either an AuthSwitchRequest or an error packet. Raises NotSupportedError when we get the old, insecure password reply back. Raises any error coming from MySQL. """ self._ssl_active = False if client_flags & ClientFlag.SSL and ssl_options: packet = self._protocol.make_auth_ssl(charset=charset, client_flags=client_flags) self._socket.send(packet) self._socket.switch_to_ssl(**ssl_options) self._ssl_active = True packet = self._protocol.make_auth( handshake=self._handshake, username=username, password=password, database=database, charset=charset, client_flags=client_flags, ssl_enabled=self._ssl_active, auth_plugin=self._auth_plugin) self._socket.send(packet) self._auth_switch_request(username, password) if not (client_flags & ClientFlag.CONNECT_WITH_DB) and database: self.cmd_init_db(database) return True def _auth_switch_request(self, username=None, password=None): """Handle second part of authentication Raises NotSupportedError when we get the old, insecure password reply back. Raises any error coming from MySQL. """ packet = self._socket.recv() if packet[4] == 254 and len(packet) == 5: raise errors.NotSupportedError( "Authentication with old (insecure) passwords " "is not supported. 
For more information, lookup " "Password Hashing in the latest MySQL manual") elif packet[4] == 254: # AuthSwitchRequest (new_auth_plugin, auth_data) = self._protocol.parse_auth_switch_request(packet) auth = get_auth_plugin(new_auth_plugin)( auth_data, password=password, ssl_enabled=self._ssl_active) response = auth.auth_response() self._socket.send(response) packet = self._socket.recv() if packet[4] != 1: return self._handle_ok(packet) else: auth_data = self._protocol.parse_auth_more_data(packet) elif packet[4] == 255: raise errors.get_exception(packet) def _get_connection(self, prtcls=None): """Get connection based on configuration This method will return the appropriated connection object using the connection parameters. Returns subclass of MySQLBaseSocket. """ conn = None if self.unix_socket and os.name != 'nt': conn = MySQLUnixSocket(unix_socket=self.unix_socket) else: conn = MySQLTCPSocket(host=self.server_host, port=self.server_port, force_ipv6=self._force_ipv6) conn.set_connection_timeout(self._connection_timeout) return conn def _open_connection(self): """Open the connection to the MySQL server This method sets up and opens the connection to the MySQL server. Raises on errors. """ self._protocol = MySQLProtocol() self._socket = self._get_connection() self._socket.open_connection() self._do_handshake() self._do_auth(self._user, self._password, self._database, self._client_flags, self._charset_id, self._ssl) self.set_converter_class(self._converter_class) if self._client_flags & ClientFlag.COMPRESS: self._socket.recv = self._socket.recv_compressed self._socket.send = self._socket.send_compressed def shutdown(self): """Shut down connection to MySQL Server. """ if not self._socket: return try: self._socket.shutdown() except (AttributeError, errors.Error): pass # Getting an exception would mean we are disconnected. def close(self): """Disconnect from the MySQL server""" if not self._socket: return try: self.cmd_quit() self._socket.close_connection() except (AttributeError, errors.Error): pass # Getting an exception would mean we are disconnected. disconnect = close def _send_cmd(self, command, argument=None, packet_number=0, packet=None, expect_response=True): """Send a command to the MySQL server This method sends a command with an optional argument. If packet is not None, it will be sent and the argument will be ignored. The packet_number is optional and should usually not be used. Some commands might not result in the MySQL server returning a response. If a command does not return anything, you should set expect_response to False. The _send_cmd method will then return None instead of a MySQL packet. Returns a MySQL packet or None. """ self.handle_unread_result() try: self._socket.send( self._protocol.make_command(command, packet or argument), packet_number) except AttributeError: raise errors.OperationalError("MySQL Connection not available.") if not expect_response: return None return self._socket.recv() def _send_data(self, data_file, send_empty_packet=False): """Send data to the MySQL server This method accepts a file-like object and sends its data as is to the MySQL server. If the send_empty_packet is True, it will send an extra empty package (for example when using LOAD LOCAL DATA INFILE). Returns a MySQL packet. 
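For illustration (this is an internal API and the file name is made up):

    with open('/tmp/data.csv', 'rb') as data_file:
        # Stream the file in NET_BUFFER_LENGTH-sized chunks; the extra
        # empty packet tells the server the transfer is complete.
        packet = cnx._send_data(data_file, send_empty_packet=True)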
""" self.handle_unread_result() if not hasattr(data_file, 'read'): raise ValueError("expecting a file-like object") try: buf = data_file.read(NET_BUFFER_LENGTH - 16) while buf: self._socket.send(buf) buf = data_file.read(NET_BUFFER_LENGTH - 16) except AttributeError: raise errors.OperationalError("MySQL Connection not available.") if send_empty_packet: try: self._socket.send(b'') except AttributeError: raise errors.OperationalError( "MySQL Connection not available.") return self._socket.recv() def _handle_server_status(self, flags): """Handle the server flags found in MySQL packets This method handles the server flags send by MySQL OK and EOF packets. It, for example, checks whether there exists more result sets or whether there is an ongoing transaction. """ self._have_next_result = flag_is_set(ServerFlag.MORE_RESULTS_EXISTS, flags) self._in_transaction = flag_is_set(ServerFlag.STATUS_IN_TRANS, flags) @property def in_transaction(self): """MySQL session has started a transaction""" return self._in_transaction def _handle_ok(self, packet): """Handle a MySQL OK packet This method handles a MySQL OK packet. When the packet is found to be an Error packet, an error will be raised. If the packet is neither an OK or an Error packet, errors.InterfaceError will be raised. Returns a dict() """ if packet[4] == 0: ok_pkt = self._protocol.parse_ok(packet) self._handle_server_status(ok_pkt['server_status']) return ok_pkt elif packet[4] == 255: raise errors.get_exception(packet) raise errors.InterfaceError('Expected OK packet') def _handle_eof(self, packet): """Handle a MySQL EOF packet This method handles a MySQL EOF packet. When the packet is found to be an Error packet, an error will be raised. If the packet is neither and OK or an Error packet, errors.InterfaceError will be raised. Returns a dict() """ if packet[4] == 254: eof = self._protocol.parse_eof(packet) self._handle_server_status(eof['status_flag']) return eof elif packet[4] == 255: raise errors.get_exception(packet) raise errors.InterfaceError('Expected EOF packet') def _handle_load_data_infile(self, filename): """Handle a LOAD DATA INFILE LOCAL request""" try: data_file = open(filename, 'rb') except IOError: # Send a empty packet to cancel the operation try: self._socket.send(b'') except AttributeError: raise errors.OperationalError( "MySQL Connection not available.") raise errors.InterfaceError( "File '{0}' could not be read".format(filename)) return self._handle_ok(self._send_data(data_file, send_empty_packet=True)) def _handle_result(self, packet): """Handle a MySQL Result This method handles a MySQL result, for example, after sending the query command. OK and EOF packets will be handled and returned. If the packet is an Error packet, an errors.Error-exception will be raised. 
The returned dictionary consists of: - columns: column information - eof: the EOF-packet information Returns a dict() """ if not packet or len(packet) < 4: raise errors.InterfaceError('Empty response') elif packet[4] == 0: return self._handle_ok(packet) elif packet[4] == 251: if PY2: filename = str(packet[5:]) else: filename = packet[5:].decode() return self._handle_load_data_infile(filename) elif packet[4] == 254: return self._handle_eof(packet) elif packet[4] == 255: raise errors.get_exception(packet) # We have a text result set column_count = self._protocol.parse_column_count(packet) if not column_count or not isinstance(column_count, int): raise errors.InterfaceError('Illegal result set.') columns = [None,] * column_count for i in range(0, column_count): columns[i] = self._protocol.parse_column( self._socket.recv(), self.python_charset) eof = self._handle_eof(self._socket.recv()) self.unread_result = True return {'columns': columns, 'eof': eof} def get_row(self, binary=False, columns=None): """Get the next row returned by the MySQL server This method gets one row from the result set after sending, for example, the query command. The result is a tuple consisting of the row and the EOF packet. If no row was available in the result set, the row data will be None. Returns a tuple. """ (rows, eof) = self.get_rows(count=1, binary=binary, columns=columns) if len(rows): return (rows[0], eof) return (None, eof) def get_rows(self, count=None, binary=False, columns=None): """Get all rows returned by the MySQL server This method gets all rows returned by the MySQL server after sending, for example, the query command. The result is a tuple consisting of a list of rows and the EOF packet. Returns a tuple() """ if not self.unread_result: raise errors.InternalError("No result set available.") try: if binary: rows = self._protocol.read_binary_result( self._socket, columns, count) else: rows = self._protocol.read_text_result(self._socket, count) except errors.Error as err: self.unread_result = False raise err if rows[-1] is not None: self._handle_server_status(rows[-1]['status_flag']) self.unread_result = False return rows def consume_results(self): """Consume results """ if self.unread_result: self.get_rows() def cmd_init_db(self, database): """Change the current database This method changes the current (default) database by sending the INIT_DB command. The result is a dictionary containing the OK packet information. Returns a dict() """ return self._handle_ok( self._send_cmd(ServerCmd.INIT_DB, database.encode('utf-8'))) def cmd_query(self, query, raw=False, buffered=False, raw_as_string=False): """Send a query to the MySQL server This method sends the query to the MySQL server and returns the result. If there was a text result, a tuple will be returned consisting of the number of columns and a list containing information about these columns. When the query doesn't return a text result, the OK or EOF packet information as a dictionary will be returned. In case the result was an error, exception errors.Error will be raised. Returns a tuple() """ if not isinstance(query, bytes): query = query.encode('utf-8') result = self._handle_result(self._send_cmd(ServerCmd.QUERY, query)) if self._have_next_result: raise errors.InterfaceError( 'Use cmd_query_iter for statements with multiple queries.') return result def cmd_query_iter(self, statements): """Send one or more statements to the MySQL server Similar to the cmd_query method, but instead returns a generator object to iterate through results.
It sends the statements to the MySQL server and through the iterator you can get the results. statement = 'SELECT 1; INSERT INTO t1 VALUES (); SELECT 2' for result in cnx.cmd_query_iter(statement): if 'columns' in result: columns = result['columns'] rows = cnx.get_rows() else: # do something useful with INSERT result Returns a generator. """ if not isinstance(statements, bytearray): if isstr(statements): statements = bytearray(statements.encode('utf-8')) else: statements = bytearray(statements) # Handle the first query result yield self._handle_result(self._send_cmd(ServerCmd.QUERY, statements)) # Handle next results, if any while self._have_next_result: self.handle_unread_result() yield self._handle_result(self._socket.recv()) def cmd_refresh(self, options): """Send the Refresh command to the MySQL server This method sends the Refresh command to the MySQL server. The options argument should be a bitwise value using constants.RefreshOption. Usage example: RefreshOption = mysql.connector.RefreshOption refresh = RefreshOption.LOG | RefreshOption.THREADS cnx.cmd_refresh(refresh) The result is a dictionary with the OK packet information. Returns a dict() """ return self._handle_ok( self._send_cmd(ServerCmd.REFRESH, int4store(options))) def cmd_quit(self): """Close the current connection with the server This method sends the QUIT command to the MySQL server, closing the current connection. Since no response can be returned to the client, cmd_quit() will return the packet it sent. Returns a str() """ self.handle_unread_result() packet = self._protocol.make_command(ServerCmd.QUIT) self._socket.send(packet, 0) return packet def cmd_shutdown(self, shutdown_type=None): """Shut down the MySQL Server This method sends the SHUTDOWN command to the MySQL server and is only possible if the current user has SUPER privileges. The result is a dictionary containing the OK packet information. Note: Most applications and scripts do not need the SUPER privilege. Returns a dict() """ if shutdown_type: if not ShutdownType.get_info(shutdown_type): raise errors.InterfaceError("Invalid shutdown type") atype = shutdown_type else: atype = ShutdownType.SHUTDOWN_DEFAULT return self._handle_eof(self._send_cmd(ServerCmd.SHUTDOWN, int4store(atype))) def cmd_statistics(self): """Send the statistics command to the MySQL Server This method sends the STATISTICS command to the MySQL server. The result is a dictionary with various statistical information. Returns a dict() """ self.handle_unread_result() packet = self._protocol.make_command(ServerCmd.STATISTICS) self._socket.send(packet, 0) return self._protocol.parse_statistics(self._socket.recv()) def cmd_process_kill(self, mysql_pid): """Kill a MySQL process This method sends the PROCESS_KILL command to the server along with the process ID. The result is a dictionary with the OK packet information. Returns a dict() """ return self._handle_ok( self._send_cmd(ServerCmd.PROCESS_KILL, int4store(mysql_pid))) def cmd_debug(self): """Send the DEBUG command This method sends the DEBUG command to the MySQL server, which requires the MySQL user to have SUPER privilege. The output will go to the MySQL server error log and the result of this method is a dictionary with EOF packet information. Returns a dict() """ return self._handle_eof(self._send_cmd(ServerCmd.DEBUG)) def cmd_ping(self): """Send the PING command This method sends the PING command to the MySQL server. It is used to check if the connection is still valid.
The result of this method is a dictionary with OK packet information. Returns a dict() """ return self._handle_ok(self._send_cmd(ServerCmd.PING)) def cmd_change_user(self, username='', password='', database='', charset=33): """Change the currently logged in user This method allows changing the currently logged in user information. The result is a dictionary with OK packet information. Returns a dict() """ self.handle_unread_result() if self._compress: raise errors.NotSupportedError("Change user is not supported with " "compression.") packet = self._protocol.make_change_user( handshake=self._handshake, username=username, password=password, database=database, charset=charset, client_flags=self._client_flags, ssl_enabled=self._ssl_active, auth_plugin=self._auth_plugin) self._socket.send(packet, 0) ok_packet = self._auth_switch_request(username, password) try: if not (self._client_flags & ClientFlag.CONNECT_WITH_DB) \ and database: self.cmd_init_db(database) except: raise self._charset_id = charset self._post_connection() return ok_packet @property def database(self): """Get the current database""" return self.info_query("SELECT DATABASE()")[0] @database.setter def database(self, value): # pylint: disable=W0221 """Set the current database""" self.cmd_query("USE %s" % value) def is_connected(self): """Reports whether the connection to MySQL Server is available This method checks whether the connection to MySQL is available. It is similar to ping(), but unlike the ping()-method, either True or False is returned and no exception is raised. Returns True or False. """ try: self.cmd_ping() except: return False # This method does not raise return True def reset_session(self, user_variables=None, session_variables=None): """Clears the current active session This method resets the session state. If the MySQL server is 5.7.3 or later, the active session is reset without re-authenticating. For older server versions, the session is reset by re-authenticating. It is possible to provide a sequence of variables and their values to be set after clearing the session. This is possible for both user defined variables and session variables. This method takes two arguments, user_variables and session_variables, which are dictionaries. Raises OperationalError if not connected, InternalError if there are unread results and InterfaceError on errors. """ if not self.is_connected(): raise errors.OperationalError("MySQL Connection not available.") try: self.cmd_reset_connection() except errors.NotSupportedError: self.cmd_change_user(self._user, self._password, self._database, self._charset_id) cur = self.cursor() if user_variables: for key, value in user_variables.items(): cur.execute("SET @`{0}` = %s".format(key), (value,)) if session_variables: for key, value in session_variables.items(): cur.execute("SET SESSION `{0}` = %s".format(key), (value,)) def reconnect(self, attempts=1, delay=0): """Attempt to reconnect to the MySQL server The argument attempts should be the number of times a reconnect is tried. The delay argument is the number of seconds to wait between each retry. You may want to set the number of attempts higher and use delay when you expect the MySQL server to be down for maintenance or when you expect the network to be temporarily unavailable. Raises InterfaceError on errors.
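# A sketch of reset_session() as described above (assumes `cnx` is connected;
# the variable names are hypothetical): each dict maps onto SET statements.
cnx.reset_session(
    user_variables={'app_user': 'jane'},            # SET @`app_user` = 'jane'
    session_variables={'sql_mode': 'TRADITIONAL'})  # SET SESSION `sql_mode` = ...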
""" counter = 0 while counter != attempts: counter = counter + 1 try: self.disconnect() self.connect() if self.is_connected(): break except Exception as err: # pylint: disable=W0703 if counter == attempts: msg = "Can not reconnect to MySQL after {0} "\ "attempt(s): {1}".format(attempts, str(err)) raise errors.InterfaceError(msg) if delay > 0: time.sleep(delay) def ping(self, reconnect=False, attempts=1, delay=0): """Check availability of the MySQL server When reconnect is set to True, one or more attempts are made to try to reconnect to the MySQL server using the reconnect()-method. delay is the number of seconds to wait between each retry. When the connection is not available, an InterfaceError is raised. Use the is_connected()-method if you just want to check the connection without raising an error. Raises InterfaceError on errors. """ try: self.cmd_ping() except: if reconnect: self.reconnect(attempts=attempts, delay=delay) else: raise errors.InterfaceError("Connection to MySQL is" " not available.") @property def connection_id(self): """MySQL connection ID""" try: return self._handshake['server_threadid'] except KeyError: return None def cursor(self, buffered=None, raw=None, prepared=None, cursor_class=None, dictionary=None, named_tuple=None): """Instantiates and returns a cursor By default, MySQLCursor is returned. Depending on the options while connecting, a buffered and/or raw cursor is instantiated instead. Also depending upon the cursor options, rows can be returned as dictionary or named tuple. Dictionary and namedtuple based cursors are available with buffered output but not raw. It is possible to also give a custom cursor through the cursor_class parameter, but it needs to be a subclass of mysql.connector.cursor.CursorBase. Raises ProgrammingError when cursor_class is not a subclass of CursorBase. Raises ValueError when cursor is not available. Returns a cursor-object """ self.handle_unread_result() if not self.is_connected(): raise errors.OperationalError("MySQL Connection not available.") if cursor_class is not None: if not issubclass(cursor_class, CursorBase): raise errors.ProgrammingError( "Cursor class needs be to subclass of cursor.CursorBase") return (cursor_class)(self) buffered = buffered if buffered is not None else self._buffered raw = raw if raw is not None else self._raw cursor_type = 0 if buffered is True: cursor_type |= 1 if raw is True: cursor_type |= 2 if dictionary is True: cursor_type |= 4 if named_tuple is True: cursor_type |= 8 if prepared is True: cursor_type |= 16 types = { 0: MySQLCursor, # 0 1: MySQLCursorBuffered, 2: MySQLCursorRaw, 3: MySQLCursorBufferedRaw, 4: MySQLCursorDict, 5: MySQLCursorBufferedDict, 8: MySQLCursorNamedTuple, 9: MySQLCursorBufferedNamedTuple, 16: MySQLCursorPrepared } try: return (types[cursor_type])(self) except KeyError: args = ('buffered', 'raw', 'dictionary', 'named_tuple', 'prepared') raise ValueError('Cursor not available with given criteria: ' + ', '.join([args[i] for i in range(5) if cursor_type & (1 << i) != 0])) def commit(self): """Commit current transaction""" self._execute_query("COMMIT") def rollback(self): """Rollback current transaction""" if self.unread_result: self.get_rows() self._execute_query("ROLLBACK") def _execute_query(self, query): """Execute a query This method simply calls cmd_query() after checking for unread result. If there are still unread result, an errors.InterfaceError is raised. Otherwise whatever cmd_query() returns is returned. 
Returns a dict() """ self.handle_unread_result() self.cmd_query(query) def info_query(self, query): """Send a query which only returns 1 row""" cursor = self.cursor(buffered=True) cursor.execute(query) return cursor.fetchone() def _handle_binary_ok(self, packet): """Handle a MySQL Binary Protocol OK packet This method handles a MySQL Binary Protocol OK packet. When the packet is found to be an Error packet, an error will be raised. If the packet is neither an OK nor an Error packet, errors.InterfaceError will be raised. Returns a dict() """ if packet[4] == 0: return self._protocol.parse_binary_prepare_ok(packet) elif packet[4] == 255: raise errors.get_exception(packet) raise errors.InterfaceError('Expected Binary OK packet') def _handle_binary_result(self, packet): """Handle a MySQL Result This method handles a MySQL result, for example, after sending the query command. OK and EOF packets will be handled and returned. If the packet is an Error packet, an errors.Error-exception will be raised. The tuple returned by this method consists of: - the number of columns in the result, - a list of tuples with information about the columns, - the EOF packet information as a dictionary. Returns tuple() or dict() """ if not packet or len(packet) < 4: raise errors.InterfaceError('Empty response') elif packet[4] == 0: return self._handle_ok(packet) elif packet[4] == 254: return self._handle_eof(packet) elif packet[4] == 255: raise errors.get_exception(packet) # We have a binary result set column_count = self._protocol.parse_column_count(packet) if not column_count or not isinstance(column_count, int): raise errors.InterfaceError('Illegal result set.') columns = [None] * column_count for i in range(0, column_count): columns[i] = self._protocol.parse_column( self._socket.recv(), self.python_charset) eof = self._handle_eof(self._socket.recv()) return (column_count, columns, eof) def cmd_stmt_prepare(self, statement): """Prepare a MySQL statement This method will send the PREPARE command to MySQL together with the given statement. Returns a dict() """ packet = self._send_cmd(ServerCmd.STMT_PREPARE, statement) result = self._handle_binary_ok(packet) result['columns'] = [] result['parameters'] = [] if result['num_params'] > 0: for _ in range(0, result['num_params']): result['parameters'].append( self._protocol.parse_column(self._socket.recv(), self.python_charset)) self._handle_eof(self._socket.recv()) if result['num_columns'] > 0: for _ in range(0, result['num_columns']): result['columns'].append( self._protocol.parse_column(self._socket.recv(), self.python_charset)) self._handle_eof(self._socket.recv()) return result def cmd_stmt_execute(self, statement_id, data=(), parameters=(), flags=0): """Execute a prepared MySQL statement""" parameters = list(parameters) long_data_used = {} if data: for param_id, _ in enumerate(parameters): if isinstance(data[param_id], IOBase): binary = True try: binary = 'b' not in data[param_id].mode except AttributeError: pass self.cmd_stmt_send_long_data(statement_id, param_id, data[param_id]) long_data_used[param_id] = (binary,) execute_packet = self._protocol.make_stmt_execute( statement_id, data, tuple(parameters), flags, long_data_used, self.charset) packet = self._send_cmd(ServerCmd.STMT_EXECUTE, packet=execute_packet) result = self._handle_binary_result(packet) return result def cmd_stmt_close(self, statement_id): """Deallocate a prepared MySQL statement This method deallocates the prepared statement using the statement_id. Note that the MySQL server does not return anything.
""" self._send_cmd(ServerCmd.STMT_CLOSE, int4store(statement_id), expect_response=False) def cmd_stmt_send_long_data(self, statement_id, param_id, data): """Send data for a column This methods send data for a column (for example BLOB) for statement identified by statement_id. The param_id indicate which parameter the data belongs too. The data argument should be a file-like object. Since MySQL does not send anything back, no error is raised. When the MySQL server is not reachable, an OperationalError is raised. cmd_stmt_send_long_data should be called before cmd_stmt_execute. The total bytes send is returned. Returns int. """ chunk_size = 8192 total_sent = 0 # pylint: disable=W0212 prepare_packet = self._protocol._prepare_stmt_send_long_data # pylint: enable=W0212 try: buf = data.read(chunk_size) while buf: packet = prepare_packet(statement_id, param_id, buf) self._send_cmd(ServerCmd.STMT_SEND_LONG_DATA, packet=packet, expect_response=False) total_sent += len(buf) buf = data.read(chunk_size) except AttributeError: raise errors.OperationalError("MySQL Connection not available.") return total_sent def cmd_stmt_reset(self, statement_id): """Reset data for prepared statement sent as long data The result is a dictionary with OK packet information. Returns a dict() """ self._handle_ok(self._send_cmd(ServerCmd.STMT_RESET, int4store(statement_id))) def cmd_reset_connection(self): """Resets the session state without re-authenticating Works only for MySQL server 5.7.3 or later. The result is a dictionary with OK packet information. Returns a dict() """ if self._server_version < (5, 7, 3): raise errors.NotSupportedError("MySQL version 5.7.2 and " "earlier does not support " "COM_RESET_CONNECTION.") self._handle_ok(self._send_cmd(ServerCmd.RESET_CONNECTION)) self._post_connection() def handle_unread_result(self): """Check whether there is an unread result""" if self.can_consume_results: self.consume_results() elif self.unread_result: raise errors.InternalError("Unread result found") mysql-utilities-1.6.4/mysql/connector/version.py0000644001577100752670000000303012717544565021574 0ustar pb2usercommon# MySQL Connector/Python - MySQL driver written in Python. # Copyright (c) 2012, 2015, Oracle and/or its affiliates. All rights reserved. # MySQL Connector/Python is licensed under the terms of the GPLv2 # , like most # MySQL Connectors. There are special exceptions to the terms and # conditions of the GPLv2 as it is applied to this software, see the # FOSS License Exception # . # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA """MySQL Connector/Python version information The file version.py gets installed and is available after installation as mysql.connector.version. 
""" VERSION = (2, 1, 3, '', 0) if VERSION[3] and VERSION[4]: VERSION_TEXT = '{0}.{1}.{2}{3}{4}'.format(*VERSION) else: VERSION_TEXT = '{0}.{1}.{2}'.format(*VERSION[0:3]) LICENSE = 'GPLv2 with FOSS License Exception' EDITION = '' # Added in package names, after the version mysql-utilities-1.6.4/mysql/connector/connection_cext.py0000644001577100752670000005040412717544565023300 0ustar pb2usercommon# MySQL Connector/Python - MySQL driver written in Python. # Copyright (c) 2014, 2015, Oracle and/or its affiliates. All rights reserved. # MySQL Connector/Python is licensed under the terms of the GPLv2 # , like most # MySQL Connectors. There are special exceptions to the terms and # conditions of the GPLv2 as it is applied to this software, see the # FOSS License Exception # . # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA """Connection class using the C Extension """ # Detection of abstract methods in pylint is not working correctly #pylint: disable=W0223 from . import errors from .catch23 import INT_TYPES from .constants import ( CharacterSet, FieldFlag, ServerFlag, ShutdownType, ClientFlag ) from .abstracts import MySQLConnectionAbstract, MySQLCursorAbstract from .protocol import MySQLProtocol HAVE_CMYSQL = False try: import _mysql_connector # pylint: disable=F0401 from .cursor_cext import ( CMySQLCursor, CMySQLCursorRaw, CMySQLCursorBuffered, CMySQLCursorBufferedRaw, CMySQLCursorPrepared, CMySQLCursorDict, CMySQLCursorBufferedDict, CMySQLCursorNamedTuple, CMySQLCursorBufferedNamedTuple) from _mysql_connector import MySQLInterfaceError # pylint: disable=F0401 except ImportError as exc: raise ImportError( "MySQL Connector/Python C Extension not available ({0})".format( str(exc) )) else: HAVE_CMYSQL = True class CMySQLConnection(MySQLConnectionAbstract): """Class initiating a MySQL Connection using Connector/C""" def __init__(self, **kwargs): """Initialization""" if not HAVE_CMYSQL: raise RuntimeError( "MySQL Connector/Python C Extension not available") self._cmysql = None self._connection_timeout = 2 self._columns = [] self.converter = None super(CMySQLConnection, self).__init__(**kwargs) if len(kwargs) > 0: self.connect(**kwargs) def _do_handshake(self): """Gather information of the MySQL server before authentication""" self._handshake = { 'protocol': self._cmysql.get_proto_info(), 'server_version_original': self._cmysql.get_server_info(), 'server_threadid': self._cmysql.thread_id(), 'charset': None, 'server_status': None, 'auth_plugin': None, 'auth_data': None, 'capabilities': self._cmysql.st_server_capabilities(), } self._server_version = self._check_server_version( self._handshake['server_version_original'] ) @property def _server_status(self): """Returns the server status attribute of MYSQL structure""" return self._cmysql.st_server_status() def set_unicode(self, value=True): """Toggle unicode mode Set whether we return string fields as unicode or not. Default is True. 
""" self._use_unicode = value if self._cmysql: self._cmysql.use_unicode(value) if self.converter: self.converter.set_unicode(value) @property def autocommit(self): """Get whether autocommit is on or off""" value = self.info_query("SELECT @@session.autocommit")[0] return True if value == 1 else False @autocommit.setter def autocommit(self, value): # pylint: disable=W0221 """Toggle autocommit""" try: self._cmysql.autocommit(value) self._autocommit = value except MySQLInterfaceError as exc: raise errors.get_mysql_exception(msg=exc.msg, errno=exc.errno, sqlstate=exc.sqlstate) @property def database(self): """Get the current database""" return self.info_query("SELECT DATABASE()")[0] @database.setter def database(self, value): # pylint: disable=W0221 """Set the current database""" self._cmysql.select_db(value) @property def in_transaction(self): """MySQL session has started a transaction""" return self._server_status & ServerFlag.STATUS_IN_TRANS def _open_connection(self): charset_name = CharacterSet.get_info(self._charset_id)[0] self._cmysql = _mysql_connector.MySQL( buffered=self._buffered, raw=self._raw, charset_name=charset_name, connection_timeout=int(self._connection_timeout or 10), use_unicode=self._use_unicode, auth_plugin=self._auth_plugin) cnx_kwargs = { 'host': self._host, 'user': self._user, 'password': self._password, 'database': self._database, 'port': self._port, 'client_flags': self._client_flags, 'unix_socket': self._unix_socket, 'compress': self.isset_client_flag(ClientFlag.COMPRESS) } if self.isset_client_flag(ClientFlag.SSL): cnx_kwargs.update({ 'ssl_ca': self._ssl['ca'], 'ssl_cert': self._ssl['cert'], 'ssl_key': self._ssl['key'], 'ssl_verify_cert': self._ssl['verify_cert'] }) try: self._cmysql.connect(**cnx_kwargs) except MySQLInterfaceError as exc: raise errors.get_mysql_exception(msg=exc.msg, errno=exc.errno, sqlstate=exc.sqlstate) self._do_handshake() def close(self): """Disconnect from the MySQL server""" if self._cmysql: try: self._cmysql.close() except MySQLInterfaceError as exc: raise errors.get_mysql_exception(msg=exc.msg, errno=exc.errno, sqlstate=exc.sqlstate) self._cmysql = None disconnect = close def is_connected(self): """Reports whether the connection to MySQL Server is available""" if self._cmysql: return self._cmysql.ping() return False def ping(self, reconnect=False, attempts=1, delay=0): """Check availability of the MySQL server When reconnect is set to True, one or more attempts are made to try to reconnect to the MySQL server using the reconnect()-method. delay is the number of seconds to wait between each retry. When the connection is not available, an InterfaceError is raised. Use the is_connected()-method if you just want to check the connection without raising an error. Raises InterfaceError on errors. """ errmsg = "Connection to MySQL is not available" try: connected = self._cmysql.ping() except AttributeError: pass # Raise or reconnect later else: if connected: return if reconnect: self.reconnect(attempts=attempts, delay=delay) else: raise errors.InterfaceError(errmsg) def set_character_set_name(self, charset): """Sets the default character set name for current connection. 
""" self._cmysql.set_character_set(charset) def info_query(self, query): """Send a query which only returns 1 row""" self._cmysql.query(query) first_row = () if self._cmysql.have_result_set: first_row = self._cmysql.fetch_row() if self._cmysql.fetch_row(): self._cmysql.free_result() raise errors.InterfaceError( "Query should not return more than 1 row") self._cmysql.free_result() return first_row @property def connection_id(self): """MySQL connection ID""" try: return self._cmysql.thread_id() except MySQLInterfaceError: pass # Just return None return None def get_rows(self, count=None, binary=False, columns=None): """Get all or a subset of rows returned by the MySQL server""" if not (self._cmysql and self.unread_result): raise errors.InternalError("No result set available") rows = [] if count is not None and count <= 0: raise AttributeError("count should be 1 or higher, or None") counter = 0 try: row = self._cmysql.fetch_row() while row: if self.converter: row = list(row) for i, _ in enumerate(row): row[i] = self.converter.to_python(self._columns[i], row[i]) row = tuple(row) rows.append(row) counter += 1 if count and counter == count: break row = self._cmysql.fetch_row() except MySQLInterfaceError as exc: self.free_result() raise errors.get_mysql_exception(msg=exc.msg, errno=exc.errno, sqlstate=exc.sqlstate) return rows def get_row(self, binary=False, columns=None): """Get the next rows returned by the MySQL server""" try: return self.get_rows(count=1, binary=binary, columns=columns)[0] except IndexError: # No row available return None def next_result(self): """Reads the next result""" if self._cmysql: self._cmysql.consume_result() return self._cmysql.next_result() return None def free_result(self): """Frees the result""" if self._cmysql: self._cmysql.free_result() def commit(self): """Commit current transaction""" if self._cmysql: self._cmysql.commit() def rollback(self): """Rollback current transaction""" if self._cmysql: self._cmysql.consume_result() self._cmysql.rollback() def cmd_init_db(self, database): """Change the current database""" try: self._cmysql.select_db(database) except MySQLInterfaceError as exc: raise errors.get_mysql_exception(msg=exc.msg, errno=exc.errno, sqlstate=exc.sqlstate) def fetch_eof_columns(self): """Fetch EOF and column information""" if not self._cmysql.have_result_set: raise errors.InterfaceError("No result set") fields = self._cmysql.fetch_fields() self._columns = [] for col in fields: self._columns.append(( col[4], int(col[8]), None, None, None, None, ~int(col[9]) & FieldFlag.NOT_NULL, int(col[9]) )) return { 'eof': { 'status_flag': self._server_status, 'warning_count': self._cmysql.st_warning_count(), }, 'columns': self._columns, } def fetch_eof_status(self): """Fetch EOF and status information""" if self._cmysql: return { 'warning_count': self._cmysql.st_warning_count(), 'field_count': self._cmysql.st_field_count(), 'insert_id': self._cmysql.insert_id(), 'affected_rows': self._cmysql.affected_rows(), 'server_status': self._server_status, } return None def cmd_query(self, query, raw=False, buffered=False, raw_as_string=False): """Send a query to the MySQL server""" self.handle_unread_result() try: if not isinstance(query, bytes): query = query.encode('utf-8') self._cmysql.query(query, raw=raw, buffered=buffered, raw_as_string=raw_as_string) except MySQLInterfaceError as exc: raise errors.get_mysql_exception(exc.errno, msg=exc.msg, sqlstate=exc.sqlstate) except AttributeError: if self._unix_socket: addr = self._unix_socket else: addr = self._host + ':' + 
str(self._port) raise errors.OperationalError( errno=2055, values=(addr, 'Connection not available.')) self._columns = [] if not self._cmysql.have_result_set: # No result return self.fetch_eof_status() return self.fetch_eof_columns() _execute_query = cmd_query def cursor(self, buffered=None, raw=None, prepared=None, cursor_class=None, dictionary=None, named_tuple=None): """Instantiates and returns a cursor using C Extension By default, CMySQLCursor is returned. Depending on the options while connecting, a buffered and/or raw cursor is instantiated instead. Also depending upon the cursor options, rows can be returned as dictionary or named tuple. Dictionary and namedtuple based cursors are available with buffered output but not raw. It is possible to also give a custom cursor through the cursor_class parameter, but it needs to be a subclass of mysql.connector.cursor_cext.CMySQLCursor. Raises ProgrammingError when cursor_class is not a subclass of CursorBase. Raises ValueError when cursor is not available. Returns instance of CMySQLCursor or subclass. :param buffered: Return a buffering cursor :param raw: Return a raw cursor :param prepared: Return a cursor which uses prepared statements :param cursor_class: Use a custom cursor class :param dictionary: Rows are returned as dictionary :param named_tuple: Rows are returned as named tuple :return: Subclass of CMySQLCursor :rtype: CMySQLCursor or subclass """ self.handle_unread_result() if not self.is_connected(): raise errors.OperationalError("MySQL Connection not available.") if cursor_class is not None: if not issubclass(cursor_class, MySQLCursorAbstract): raise errors.ProgrammingError( "Cursor class needs to be a subclass" " of cursor_cext.CMySQLCursor") return (cursor_class)(self) buffered = buffered or self._buffered raw = raw or self._raw cursor_type = 0 if buffered is True: cursor_type |= 1 if raw is True: cursor_type |= 2 if dictionary is True: cursor_type |= 4 if named_tuple is True: cursor_type |= 8 if prepared is True: cursor_type |= 16 types = { 0: CMySQLCursor, # 0 1: CMySQLCursorBuffered, 2: CMySQLCursorRaw, 3: CMySQLCursorBufferedRaw, 4: CMySQLCursorDict, 5: CMySQLCursorBufferedDict, 8: CMySQLCursorNamedTuple, 9: CMySQLCursorBufferedNamedTuple, 16: CMySQLCursorPrepared } try: return (types[cursor_type])(self) except KeyError: args = ('buffered', 'raw', 'dictionary', 'named_tuple', 'prepared') raise ValueError('Cursor not available with given criteria: ' + ', '.join([args[i] for i in range(5) if cursor_type & (1 << i) != 0])) @property def num_rows(self): """Returns number of rows of current result set""" if not self._cmysql.have_result_set: raise errors.InterfaceError("No result set") return self._cmysql.num_rows() @property def warning_count(self): """Returns number of warnings""" if not self._cmysql: return 0 return self._cmysql.warning_count() @property def result_set_available(self): """Check if a result set is available""" if not self._cmysql: return False return self._cmysql.have_result_set @property def unread_result(self): """Check if there are unread results or rows""" return self.result_set_available @property def more_results(self): """Check if there are more results""" return self._cmysql.more_results() def prepare_for_mysql(self, params): """Prepare parameters for statements This method is used by cursors to prepare parameters found in the list (or tuple) params. Returns dict.
""" if isinstance(params, (list, tuple)): result = self._cmysql.convert_to_mysql(*params) elif isinstance(params, dict): result = {} for key, value in params.items(): result[key] = self._cmysql.convert_to_mysql(value)[0] else: raise ValueError("Could not process parameters") return result def consume_results(self): """Consume the current result This method consume the result by reading (consuming) all rows. """ self._cmysql.consume_result() def cmd_change_user(self, username='', password='', database='', charset=33): """Change the current logged in user""" try: self._cmysql.change_user(username, password, database) except MySQLInterfaceError as exc: raise errors.get_mysql_exception(msg=exc.msg, errno=exc.errno, sqlstate=exc.sqlstate) self._charset_id = charset self._post_connection() def cmd_refresh(self, options): """Send the Refresh command to the MySQL server""" try: self._cmysql.refresh(options) except MySQLInterfaceError as exc: raise errors.get_mysql_exception(msg=exc.msg, errno=exc.errno, sqlstate=exc.sqlstate) return self.fetch_eof_status() def cmd_quit(self): """Close the current connection with the server""" self.close() def cmd_shutdown(self, shutdown_type=None): """Shut down the MySQL Server""" if not self._cmysql: raise errors.OperationalError("MySQL Connection not available") if shutdown_type: if not ShutdownType.get_info(shutdown_type): raise errors.InterfaceError("Invalid shutdown type") level = shutdown_type else: level = ShutdownType.SHUTDOWN_DEFAULT try: self._cmysql.shutdown(level) except MySQLInterfaceError as exc: raise errors.get_mysql_exception(msg=exc.msg, errno=exc.errno, sqlstate=exc.sqlstate) self.close() def cmd_statistics(self): """Return statistics from the MySQL server""" self.handle_unread_result() try: stat = self._cmysql.stat() return MySQLProtocol().parse_statistics(stat, with_header=False) except (MySQLInterfaceError, errors.InterfaceError) as exc: raise errors.get_mysql_exception(msg=exc.msg, errno=exc.errno, sqlstate=exc.sqlstate) def cmd_process_kill(self, mysql_pid): """Kill a MySQL process""" if not isinstance(mysql_pid, INT_TYPES): raise ValueError("MySQL PID must be int") self.info_query("KILL {0}".format(mysql_pid)) def handle_unread_result(self): """Check whether there is an unread result""" if self.can_consume_results: self.consume_results() elif self.unread_result: raise errors.InternalError("Unread result found") mysql-utilities-1.6.4/mysql/connector/cursor.py0000644001577100752670000012537412717544565021444 0ustar pb2usercommon# MySQL Connector/Python - MySQL driver written in Python. # Copyright (c) 2009, 2015, Oracle and/or its affiliates. All rights reserved. # MySQL Connector/Python is licensed under the terms of the GPLv2 # , like most # MySQL Connectors. There are special exceptions to the terms and # conditions of the GPLv2 as it is applied to this software, see the # FOSS License Exception # . # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA """Cursor classes """ from collections import namedtuple import re import weakref from . import errors from .abstracts import MySQLCursorAbstract from .catch23 import PY2 SQL_COMMENT = r"\/\*.*?\*\/" RE_SQL_COMMENT = re.compile( r'''({0})|(["'`][^"'`]*?({0})[^"'`]*?["'`])'''.format(SQL_COMMENT), re.I | re.M | re.S) RE_SQL_ON_DUPLICATE = re.compile( r'''\s*ON\s+DUPLICATE\s+KEY(?:[^"'`]*["'`][^"'`]*["'`])*[^"'`]*$''', re.I | re.M | re.S) RE_SQL_INSERT_STMT = re.compile( r"({0}|\s)*INSERT({0}|\s)*INTO.+VALUES.*".format(SQL_COMMENT), re.I | re.M | re.S) RE_SQL_INSERT_VALUES = re.compile(r'.*VALUES\s*(\(.*\)).*', re.I | re.M | re.S) RE_PY_PARAM = re.compile(b'(%s)') RE_SQL_SPLIT_STMTS = re.compile( b''';(?=(?:[^"'`]*["'`][^"'`]*["'`])*[^"'`]*$)''') RE_SQL_FIND_PARAM = re.compile( b'''%s(?=(?:[^"'`]*["'`][^"'`]*["'`])*[^"'`]*$)''') ERR_NO_RESULT_TO_FETCH = "No result set to fetch from" class _ParamSubstitutor(object): """ Substitutes parameters into SQL statement. """ def __init__(self, params): self.params = params self.index = 0 def __call__(self, matchobj): index = self.index self.index += 1 try: return bytes(self.params[index]) except IndexError: raise errors.ProgrammingError( "Not enough parameters for the SQL statement") @property def remaining(self): """Returns number of parameters remaining to be substituted""" return len(self.params) - self.index class CursorBase(MySQLCursorAbstract): """ Base for defining MySQLCursor. This class is a skeleton and defines methods and members as required for the Python Database API Specification v2.0. It's better to inherit from MySQLCursor. """ _raw = False def __init__(self): self._description = None self._rowcount = -1 self._last_insert_id = None self.arraysize = 1 super(CursorBase, self).__init__() def callproc(self, procname, args=()): """Calls a stored procedure with the given arguments The arguments will be set during this session, meaning they will be set like @_<procname>_arg<nr> where <nr> is an enumeration (+1) of the arguments. Coding Example: 1) Defining the Stored Routine in MySQL: CREATE PROCEDURE multiply(IN pFac1 INT, IN pFac2 INT, OUT pProd INT) BEGIN SET pProd := pFac1 * pFac2; END 2) Executing in Python: args = (5,5,0) # 0 is to hold pprod cursor.callproc('multiply', args) print(cursor.fetchone()) Does not return a value, but a result set will be available when the CALL-statement executes successfully. Raises exceptions when something is wrong. """ pass def close(self): """Close the cursor.""" pass def execute(self, operation, params=(), multi=False): """Executes the given operation Executes the given operation substituting any markers with the given parameters. For example, getting all rows where id is 5: cursor.execute("SELECT * FROM t1 WHERE id = %s", (5,)) The multi argument should be set to True when executing multiple statements in one operation. If not set and multiple results are found, an InterfaceError will be raised. If warnings were generated, and connection.get_warnings is True, then self._warnings will be a list containing these warnings. Returns an iterator when multi is True, otherwise None. """ pass def executemany(self, operation, seqparams): """Execute the given operation multiple times The executemany() method will execute the operation iterating over the list of parameters in seq_params.
Example: Inserting 3 new employees and their phone number data = [ ('Jane','555-001'), ('Joe', '555-001'), ('John', '555-003') ] stmt = "INSERT INTO employees (name, phone) VALUES ('%s','%s')" cursor.executemany(stmt, data) INSERT statements are optimized by batching the data, that is using the MySQL multiple rows syntax. Results are discarded. If they are needed, consider looping over data using the execute() method. """ pass def fetchone(self): """Returns next row of a query result set Returns a tuple or None. """ pass def fetchmany(self, size=1): """Returns the next set of rows of a query result, returning a list of tuples. When no more rows are available, it returns an empty list. The number of rows returned can be specified using the size argument, which defaults to one """ pass def fetchall(self): """Returns all rows of a query result set Returns a list of tuples. """ pass def nextset(self): """Not Implemented.""" pass def setinputsizes(self, sizes): """Not Implemented.""" pass def setoutputsize(self, size, column=None): """Not Implemented.""" pass def reset(self, free=True): """Reset the cursor to default""" pass @property def description(self): """Returns description of columns in a result This property returns a list of tuples describing the columns in a result set. A tuple is described as follows:: (column_name, type, None, None, None, None, null_ok, column_flags) # Addition to PEP-249 specs Returns a list of tuples. """ return self._description @property def rowcount(self): """Returns the number of rows produced or affected This property returns the number of rows produced by queries such as a SELECT, or affected rows when executing DML statements like INSERT or UPDATE. Note that for non-buffered cursors it is impossible to know the number of rows produced before having fetched them all. For those, the number of rows will be -1 right after execution, and incremented when fetching rows. Returns an integer. """ return self._rowcount @property def lastrowid(self): """Returns the value generated for an AUTO_INCREMENT column Returns the value generated for an AUTO_INCREMENT column by the previous INSERT or UPDATE statement or None when there is no such value available. Returns a long value or None. """ return self._last_insert_id class MySQLCursor(CursorBase): """Default cursor for interacting with MySQL This cursor will execute statements and handle the result. It will not automatically fetch all rows. MySQLCursor should be inherited whenever other functionality is required. An example would be to change the fetch* member functions to return dictionaries instead of lists of values. Implements the Python Database API Specification v2.0 (PEP-249) """ def __init__(self, connection=None): CursorBase.__init__(self) self._connection = None self._stored_results = [] self._nextrow = (None, None) self._warnings = None self._warning_count = 0 self._executed = None self._executed_list = [] self._binary = False if connection is not None: self._set_connection(connection) def __iter__(self): """ Iteration over the result set which calls self.fetchone() and returns the next row.
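# A sketch of cursor iteration as documented above: __iter__ is built on
# iter(self.fetchone, None), so the loop ends at the first None row
# (assumes `cnx` is connected; the table is hypothetical).
cur = cnx.cursor()
cur.execute("SELECT id, name FROM employees")
for row in cur:
    print(row)
cur.close()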
""" return iter(self.fetchone, None) def _set_connection(self, connection): """Set the connection""" try: self._connection = weakref.proxy(connection) self._connection.is_connected() except (AttributeError, TypeError): raise errors.InterfaceError(errno=2048) def _reset_result(self): """Reset the cursor to default""" self._rowcount = -1 self._nextrow = (None, None) self._stored_results = [] self._warnings = None self._warning_count = 0 self._description = None self._executed = None self._executed_list = [] self.reset() def _have_unread_result(self): """Check whether there is an unread result""" try: return self._connection.unread_result except AttributeError: return False def next(self): """Used for iterating over the result set.""" return self.__next__() def __next__(self): """ Used for iterating over the result set. Calles self.fetchone() to get the next row. """ try: row = self.fetchone() except errors.InterfaceError: raise StopIteration if not row: raise StopIteration return row def close(self): """Close the cursor Returns True when successful, otherwise False. """ if self._connection is None: return False self._connection.handle_unread_result() self._reset_result() self._connection = None return True def _process_params_dict(self, params): """Process query parameters given as dictionary""" try: to_mysql = self._connection.converter.to_mysql escape = self._connection.converter.escape quote = self._connection.converter.quote res = {} for key, value in list(params.items()): conv = value conv = to_mysql(conv) conv = escape(conv) conv = quote(conv) if PY2: res["%({0})s".format(key)] = conv else: res["%({0})s".format(key).encode()] = conv except Exception as err: raise errors.ProgrammingError( "Failed processing pyformat-parameters; %s" % err) else: return res def _process_params(self, params): """Process query parameters.""" try: res = params to_mysql = self._connection.converter.to_mysql escape = self._connection.converter.escape quote = self._connection.converter.quote res = [to_mysql(i) for i in res] res = [escape(i) for i in res] res = [quote(i) for i in res] except Exception as err: raise errors.ProgrammingError( "Failed processing format-parameters; %s" % err) else: return tuple(res) def _handle_noresultset(self, res): """Handles result of execute() when there is no result set """ try: self._rowcount = res['affected_rows'] self._last_insert_id = res['insert_id'] self._warning_count = res['warning_count'] except (KeyError, TypeError) as err: raise errors.ProgrammingError( "Failed handling non-resultset; {0}".format(err)) self._handle_warnings() if self._connection.raise_on_warnings is True and self._warnings: raise errors.get_mysql_exception( self._warnings[0][1], self._warnings[0][2]) def _handle_resultset(self): """Handles result set This method handles the result set and is called after reading and storing column information in _handle_result(). For non-buffering cursors, this method is usually doing nothing. """ pass def _handle_result(self, result): """ Handle the result after a command was send. The result can be either an OK-packet or a dictionary containing column/eof information. Raises InterfaceError when result is not a dict() or result is invalid. 
""" if not isinstance(result, dict): raise errors.InterfaceError('Result was not a dict()') if 'columns' in result: # Weak test, must be column/eof information self._description = result['columns'] self._connection.unread_result = True self._handle_resultset() elif 'affected_rows' in result: # Weak test, must be an OK-packet self._connection.unread_result = False self._handle_noresultset(result) else: raise errors.InterfaceError('Invalid result') def _execute_iter(self, query_iter): """Generator returns MySQLCursor objects for multiple statements This method is only used when multiple statements are executed by the execute() method. It uses zip() to make an iterator from the given query_iter (result of MySQLConnection.cmd_query_iter()) and the list of statements that were executed. """ executed_list = RE_SQL_SPLIT_STMTS.split(self._executed) i = 0 while True: result = next(query_iter) self._reset_result() self._handle_result(result) try: self._executed = executed_list[i].strip() i += 1 except IndexError: self._executed = executed_list[0] yield self def execute(self, operation, params=None, multi=False): """Executes the given operation Executes the given operation substituting any markers with the given parameters. For example, getting all rows where id is 5: cursor.execute("SELECT * FROM t1 WHERE id = %s", (5,)) The multi argument should be set to True when executing multiple statements in one operation. If not set and multiple results are found, an InterfaceError will be raised. If warnings where generated, and connection.get_warnings is True, then self._warnings will be a list containing these warnings. Returns an iterator when multi is True, otherwise None. """ if not operation: return None if not self._connection: raise errors.ProgrammingError("Cursor is not connected") self._connection.handle_unread_result() self._reset_result() stmt = '' try: if not isinstance(operation, (bytes, bytearray)): stmt = operation.encode(self._connection.python_charset) else: stmt = operation except (UnicodeDecodeError, UnicodeEncodeError) as err: raise errors.ProgrammingError(str(err)) if params is not None: if isinstance(params, dict): for key, value in self._process_params_dict(params).items(): stmt = stmt.replace(key, value) elif isinstance(params, (list, tuple)): psub = _ParamSubstitutor(self._process_params(params)) stmt = RE_PY_PARAM.sub(psub, stmt) if psub.remaining != 0: raise errors.ProgrammingError( "Not all parameters were used in the SQL statement") self._executed = stmt if multi: self._executed_list = [] return self._execute_iter(self._connection.cmd_query_iter(stmt)) else: try: self._handle_result(self._connection.cmd_query(stmt)) except errors.InterfaceError: if self._connection._have_next_result: # pylint: disable=W0212 raise errors.InterfaceError( "Use multi=True when executing multiple statements") raise return None def _batch_insert(self, operation, seq_params): """Implements multi row insert""" def remove_comments(match): """Remove comments from INSERT statements. This function is used while removing comments from INSERT statements. If the matched string is a comment not enclosed by quotes, it returns an empty string, else the string itself. """ if match.group(1): return "" else: return match.group(2) tmp = re.sub(RE_SQL_ON_DUPLICATE, '', re.sub(RE_SQL_COMMENT, remove_comments, operation)) matches = re.search(RE_SQL_INSERT_VALUES, tmp) if not matches: raise errors.InterfaceError( "Failed rewriting statement for multi-row INSERT. " "Check SQL syntax." 
) fmt = matches.group(1).encode(self._connection.charset) values = [] try: stmt = operation.encode(self._connection.charset) for params in seq_params: tmp = fmt if isinstance(params, dict): for key, value in self._process_params_dict(params).items(): tmp = tmp.replace(key, value) else: psub = _ParamSubstitutor(self._process_params(params)) tmp = RE_PY_PARAM.sub(psub, tmp) if psub.remaining != 0: raise errors.ProgrammingError( "Not all parameters were used in the SQL statement") #for p in self._process_params(params): # tmp = tmp.replace(b'%s',p,1) values.append(tmp) if fmt in stmt: stmt = stmt.replace(fmt, b','.join(values), 1) self._executed = stmt return stmt else: return None except (UnicodeDecodeError, UnicodeEncodeError) as err: raise errors.ProgrammingError(str(err)) except errors.Error: raise except Exception as err: raise errors.InterfaceError( "Failed executing the operation; %s" % err) def executemany(self, operation, seq_params): """Execute the given operation multiple times The executemany() method will execute the operation iterating over the list of parameters in seq_params. Example: Inserting 3 new employees and their phone number data = [ ('Jane','555-001'), ('Joe', '555-001'), ('John', '555-003') ] stmt = "INSERT INTO employees (name, phone) VALUES ('%s','%s')" cursor.executemany(stmt, data) INSERT statements are optimized by batching the data, that is using the MySQL multiple rows syntax. Results are discarded. If they are needed, consider looping over data using the execute() method. """ if not operation or not seq_params: return None self._connection.handle_unread_result() try: _ = iter(seq_params) except TypeError: raise errors.ProgrammingError( "Parameters for query must be an Iterable.") # Optimize INSERTs by batching them if re.match(RE_SQL_INSERT_STMT, operation): if not seq_params: self._rowcount = 0 return stmt = self._batch_insert(operation, seq_params) if stmt is not None: return self.execute(stmt) rowcnt = 0 try: for params in seq_params: self.execute(operation, params) if self.with_rows and self._have_unread_result(): self.fetchall() rowcnt += self._rowcount except (ValueError, TypeError) as err: raise errors.InterfaceError( "Failed executing the operation; {0}".format(err)) except: # Raise whatever execute() raises raise self._rowcount = rowcnt def stored_results(self): """Returns an iterator for stored results This method returns an iterator over results which are stored when callproc() is called. The iterator will provide MySQLCursorBuffered instances. Returns an iterator. """ return iter(self._stored_results) def callproc(self, procname, args=()): """Calls a stored procedure with the given arguments The arguments will be set during this session, meaning they will be set like @_<procname>_arg<nr> where <nr> is an enumeration (+1) of the arguments. Coding Example: 1) Defining the Stored Routine in MySQL: CREATE PROCEDURE multiply(IN pFac1 INT, IN pFac2 INT, OUT pProd INT) BEGIN SET pProd := pFac1 * pFac2; END 2) Executing in Python: args = (5, 5, 0) # 0 is to hold pprod cursor.callproc('multiply', args) print(cursor.fetchone()) For OUT and INOUT parameters the user should provide the type of the parameter as well. The argument should be a tuple with the first item being the value of the parameter to pass and the second item the type of the argument.
In the above example, one can call the callproc method like: args = (5, 5, (0, 'INT')) cursor.callproc('multiply', args) The type of the argument given in the tuple will be used by the MySQL CAST function to convert the values to the corresponding MySQL type (See CAST in the MySQL Reference for more information) Does not return a value, but a result set will be available when the CALL-statement executes successfully. Raises exceptions when something is wrong. """ if not procname or not isinstance(procname, str): raise ValueError("procname must be a string") if not isinstance(args, (tuple, list)): raise ValueError("args must be a sequence") argfmt = "@_{name}_arg{index}" self._stored_results = [] results = [] try: argnames = [] argtypes = [] if args: for idx, arg in enumerate(args): argname = argfmt.format(name=procname, index=idx + 1) argnames.append(argname) if isinstance(arg, tuple): argtypes.append(" CAST({0} AS {1})".format(argname, arg[1])) self.execute("SET {0}=%s".format(argname), (arg[0],)) else: argtypes.append(argname) self.execute("SET {0}=%s".format(argname), (arg,)) call = "CALL {0}({1})".format(procname, ','.join(argnames)) # pylint: disable=W0212 # We disable consuming results temporarily to make sure we # get all results can_consume_results = self._connection._consume_results for result in self._connection.cmd_query_iter(call): self._connection._consume_results = False if self._raw: tmp = MySQLCursorBufferedRaw(self._connection._get_self()) else: tmp = MySQLCursorBuffered(self._connection._get_self()) tmp._executed = "(a result of {0})".format(call) tmp._handle_result(result) if tmp._warnings is not None: self._warnings = tmp._warnings if 'columns' in result: results.append(tmp) self._connection._consume_results = can_consume_results #pylint: enable=W0212 if argnames: select = "SELECT {0}".format(','.join(argtypes)) self.execute(select) self._stored_results = results return self.fetchone() else: self._stored_results = results return () except errors.Error: raise except Exception as err: raise errors.InterfaceError( "Failed calling stored routine; {0}".format(err)) def getlastrowid(self): """Returns the value generated for an AUTO_INCREMENT column Returns the value generated for an AUTO_INCREMENT column by the previous INSERT or UPDATE statement. Returns a long value or None. """ return self._last_insert_id def _fetch_warnings(self): """ Fetch warnings doing a SHOW WARNINGS. Can be called after getting the result. Returns a result set or None when there were no warnings. """ res = [] try: cur = self._connection.cursor(raw=False) cur.execute("SHOW WARNINGS") res = cur.fetchall() cur.close() except Exception as err: raise errors.InterfaceError( "Failed getting warnings; %s" % err) if len(res): return res return None def _handle_warnings(self): """Handle possible warnings after all results are consumed""" if self._connection.get_warnings is True and self._warning_count: self._warnings = self._fetch_warnings() def _handle_eof(self, eof): """Handle EOF packet""" self._connection.unread_result = False self._nextrow = (None, None) self._warning_count = eof['warning_count'] self._handle_warnings() if self._connection.raise_on_warnings is True and self._warnings: raise errors.get_mysql_exception( self._warnings[0][1], self._warnings[0][2]) def _fetch_row(self): """Returns the next row in the result set Returns a tuple or None.
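# A sketch of the typed-OUT-parameter call described above (assumes the
# multiply() routine from the docstring exists; exact value types depend on
# the active converter):
cur = cnx.cursor()
out_args = cur.callproc('multiply', (5, 5, (0, 'INT')))
print(out_args[2])                    # the OUT value, read back via SELECT CAST(...)
for stored in cur.stored_results():   # one buffered cursor per result set
    print(stored.fetchall())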
""" if not self._have_unread_result(): return None row = None if self._nextrow == (None, None): (row, eof) = self._connection.get_row( binary=self._binary, columns=self.description) else: (row, eof) = self._nextrow if row: self._nextrow = self._connection.get_row( binary=self._binary, columns=self.description) eof = self._nextrow[1] if eof is not None: self._handle_eof(eof) if self._rowcount == -1: self._rowcount = 1 else: self._rowcount += 1 if eof: self._handle_eof(eof) return row def fetchone(self): """Returns next row of a query result set Returns a tuple or None. """ row = self._fetch_row() if row: if hasattr(self._connection, 'converter'): return self._connection.converter.row_to_python( row, self.description) return row return None def fetchmany(self, size=None): res = [] cnt = (size or self.arraysize) while cnt > 0 and self._have_unread_result(): cnt -= 1 row = self.fetchone() if row: res.append(row) return res def fetchall(self): if not self._have_unread_result(): raise errors.InterfaceError("No result set to fetch from.") (rows, eof) = self._connection.get_rows() if self._nextrow[0]: rows.insert(0, self._nextrow[0]) if hasattr(self._connection, 'converter'): row_to_python = self._connection.converter.row_to_python rows = [row_to_python(row, self.description) for row in rows] self._handle_eof(eof) rowcount = len(rows) if rowcount >= 0 and self._rowcount == -1: self._rowcount = 0 self._rowcount += rowcount return rows @property def column_names(self): """Returns column names This property returns the columns names as a tuple. Returns a tuple. """ if not self.description: return () return tuple([d[0] for d in self.description]) @property def statement(self): """Returns the executed statement This property returns the executed statement. When multiple statements were executed, the current statement in the iterator will be returned. """ if self._executed is None: return None try: return self._executed.strip().decode('utf-8') except (AttributeError, UnicodeDecodeError): return self._executed.strip() @property def with_rows(self): """Returns whether the cursor could have rows returned This property returns True when column descriptions are available and possibly also rows, which will need to be fetched. Returns True or False. """ if not self.description: return False return True def __str__(self): fmt = "{class_name}: {stmt}" if self._executed: try: executed = self._executed.decode('utf-8') except AttributeError: executed = self._executed if len(executed) > 40: executed = executed[:40] + '..' 
else: executed = '(Nothing executed yet)' return fmt.format(class_name=self.__class__.__name__, stmt=executed) class MySQLCursorBuffered(MySQLCursor): """Cursor which fetches rows within execute()""" def __init__(self, connection=None): MySQLCursor.__init__(self, connection) self._rows = None self._next_row = 0 def _handle_resultset(self): (self._rows, eof) = self._connection.get_rows() self._rowcount = len(self._rows) self._handle_eof(eof) self._next_row = 0 try: self._connection.unread_result = False except: pass def reset(self, free=True): self._rows = None def _fetch_row(self): row = None try: row = self._rows[self._next_row] except: return None else: self._next_row += 1 return row return None def fetchall(self): if self._rows is None: raise errors.InterfaceError("No result set to fetch from.") res = [] if hasattr(self._connection, 'converter'): for row in self._rows[self._next_row:]: res.append(self._connection.converter.row_to_python( row, self.description)) else: res = self._rows[self._next_row:] self._next_row = len(self._rows) return res def fetchmany(self, size=None): res = [] cnt = (size or self.arraysize) while cnt > 0: cnt -= 1 row = self.fetchone() if row: res.append(row) return res @property def with_rows(self): return self._rows is not None class MySQLCursorRaw(MySQLCursor): """ Skips conversion from MySQL datatypes to Python types when fetching rows. """ _raw = True def fetchone(self): row = self._fetch_row() if row: return row return None def fetchall(self): if not self._have_unread_result(): raise errors.InterfaceError("No result set to fetch from.") (rows, eof) = self._connection.get_rows() if self._nextrow[0]: rows.insert(0, self._nextrow[0]) self._handle_eof(eof) rowcount = len(rows) if rowcount >= 0 and self._rowcount == -1: self._rowcount = 0 self._rowcount += rowcount return rows class MySQLCursorBufferedRaw(MySQLCursorBuffered): """ Cursor which skips conversion from MySQL datatypes to Python types when fetching rows and fetches rows within execute(). """ _raw = True def fetchone(self): row = self._fetch_row() if row: return row return None def fetchall(self): if self._rows is None: raise errors.InterfaceError("No result set to fetch from.") return [r for r in self._rows[self._next_row:]] @property def with_rows(self): return self._rows is not None class MySQLCursorPrepared(MySQLCursor): """Cursor using MySQL Prepared Statements """ def __init__(self, connection=None): super(MySQLCursorPrepared, self).__init__(connection) self._rows = None self._next_row = 0 self._prepared = None self._binary = True self._have_result = None def callproc(self, *args, **kwargs): """Calls a stored procedue Not supported with MySQLCursorPrepared. """ raise errors.NotSupportedError() def close(self): """Close the cursor This method will try to deallocate the prepared statement and close the cursor. """ if self._prepared: try: self._connection.cmd_stmt_close(self._prepared['statement_id']) except errors.Error: # We tried to deallocate, but it's OK when we fail. pass self._prepared = None super(MySQLCursorPrepared, self).close() def _row_to_python(self, rowdata, desc=None): """Convert row data from MySQL to Python types The conversion is done while reading binary data in the protocol module. 
""" pass def _handle_result(self, res): """Handle result after execution""" if isinstance(res, dict): self._connection.unread_result = False self._have_result = False self._handle_noresultset(res) else: self._description = res[1] self._connection.unread_result = True self._have_result = True def execute(self, operation, params=(), multi=False): # multi is unused """Prepare and execute a MySQL Prepared Statement This method will preare the given operation and execute it using the optionally given parameters. If the cursor instance already had a prepared statement, it is first closed. """ if operation is not self._executed: if self._prepared: self._connection.cmd_stmt_close(self._prepared['statement_id']) self._executed = operation try: if not isinstance(operation, bytes): operation = operation.encode(self._connection.charset) except (UnicodeDecodeError, UnicodeEncodeError) as err: raise errors.ProgrammingError(str(err)) # need to convert %s to ? before sending it to MySQL if b'%s' in operation: operation = re.sub(RE_SQL_FIND_PARAM, b'?', operation) try: self._prepared = self._connection.cmd_stmt_prepare(operation) except errors.Error: self._executed = None raise self._connection.cmd_stmt_reset(self._prepared['statement_id']) if self._prepared['parameters'] and not params: return elif len(self._prepared['parameters']) != len(params): raise errors.ProgrammingError( errno=1210, msg="Incorrect number of arguments " \ "executing prepared statement") res = self._connection.cmd_stmt_execute( self._prepared['statement_id'], data=params, parameters=self._prepared['parameters']) self._handle_result(res) def executemany(self, operation, seq_params): """Prepare and execute a MySQL Prepared Statement many times This method will prepare the given operation and execute with each tuple found the list seq_params. If the cursor instance already had a prepared statement, it is first closed. executemany() simply calls execute(). """ rowcnt = 0 try: for params in seq_params: self.execute(operation, params) if self.with_rows and self._have_unread_result(): self.fetchall() rowcnt += self._rowcount except (ValueError, TypeError) as err: raise errors.InterfaceError( "Failed executing the operation; {error}".format(error=err)) except: # Raise whatever execute() raises raise self._rowcount = rowcnt def fetchone(self): """Returns next row of a query result set Returns a tuple or None. """ return self._fetch_row() or None def fetchmany(self, size=None): res = [] cnt = (size or self.arraysize) while cnt > 0 and self._have_unread_result(): cnt -= 1 row = self._fetch_row() if row: res.append(row) return res def fetchall(self): if not self._have_unread_result(): raise errors.InterfaceError("No result set to fetch from.") (rows, eof) = self._connection.get_rows( binary=self._binary, columns=self.description) self._rowcount = len(rows) self._handle_eof(eof) return rows class MySQLCursorDict(MySQLCursor): """ Cursor fetching rows as dictionaries. The fetch methods of this class will return dictionaries instead of tuples. Each row is a dictionary that looks like: row = { "col1": value1, "col2": value2 } """ def _row_to_python(self, rowdata, desc=None): """Convert a MySQL text result row to Python types Returns a dictionary. 
""" if hasattr(self._connection, 'converter'): row = self._connection.converter.row_to_python(rowdata, desc) else: row = rowdata if row: return dict(zip(self.column_names, row)) return None def fetchone(self): """Returns next row of a query result set """ row = self._fetch_row() if row: return self._row_to_python(row, self.description) return None def fetchall(self): """Returns all rows of a query result set """ if not self._have_unread_result(): raise errors.InterfaceError(ERR_NO_RESULT_TO_FETCH) (rows, eof) = self._connection.get_rows() if self._nextrow[0]: rows.insert(0, self._nextrow[0]) res = [self._row_to_python(row, self.description) for row in rows] self._handle_eof(eof) rowcount = len(rows) if rowcount >= 0 and self._rowcount == -1: self._rowcount = 0 self._rowcount += rowcount return res class MySQLCursorNamedTuple(MySQLCursor): """ Cursor fetching rows as named tuple. The fetch methods of this class will return namedtuples instead of tuples. Each row is returned as a namedtuple and the values can be accessed as: row.col1, row.col2 """ def _row_to_python(self, rowdata, desc=None): """Convert a MySQL text result row to Python types Returns a named tuple. """ if hasattr(self._connection, 'converter'): row = self._connection.converter.row_to_python(rowdata, desc) else: row = rowdata if row: # pylint: disable=W0201 self.named_tuple = namedtuple('Row', self.column_names) # pylint: enable=W0201 return self.named_tuple(*row) def fetchone(self): """Returns next row of a query result set """ row = self._fetch_row() if row: if hasattr(self._connection, 'converter'): return self._row_to_python(row, self.description) else: return row return None def fetchall(self): """Returns all rows of a query result set """ if not self._have_unread_result(): raise errors.InterfaceError(ERR_NO_RESULT_TO_FETCH) (rows, eof) = self._connection.get_rows() if self._nextrow[0]: rows.insert(0, self._nextrow[0]) res = [self._row_to_python(row, self.description) for row in rows] self._handle_eof(eof) rowcount = len(rows) if rowcount >= 0 and self._rowcount == -1: self._rowcount = 0 self._rowcount += rowcount return res class MySQLCursorBufferedDict(MySQLCursorDict, MySQLCursorBuffered): """ Buffered Cursor fetching rows as dictionaries. """ def fetchone(self): """Returns next row of a query result set """ row = self._fetch_row() if row: return self._row_to_python(row, self.description) return None def fetchall(self): """Returns all rows of a query result set """ if self._rows is None: raise errors.InterfaceError(ERR_NO_RESULT_TO_FETCH) res = [] for row in self._rows[self._next_row:]: res.append(self._row_to_python( row, self.description)) self._next_row = len(self._rows) return res class MySQLCursorBufferedNamedTuple(MySQLCursorNamedTuple, MySQLCursorBuffered): """ Buffered Cursor fetching rows as named tuple. """ def fetchone(self): """Returns next row of a query result set """ row = self._fetch_row() if row: return self._row_to_python(row, self.description) return None def fetchall(self): """Returns all rows of a query result set """ if self._rows is None: raise errors.InterfaceError(ERR_NO_RESULT_TO_FETCH) res = [] for row in self._rows[self._next_row:]: res.append(self._row_to_python( row, self.description)) self._next_row = len(self._rows) return res mysql-utilities-1.6.4/mysql/connector/optionfiles.py0000644001577100752670000003303312717544565022450 0ustar pb2usercommon# MySQL Connector/Python - MySQL driver written in Python. # Copyright (c) 2014, Oracle and/or its affiliates. All rights reserved. 
mysql-utilities-1.6.4/mysql/connector/optionfiles.py0000644001577100752670000003303312717544565022450 0ustar pb2usercommon# MySQL Connector/Python - MySQL driver written in Python.
# Copyright (c) 2014, Oracle and/or its affiliates. All rights reserved.

# MySQL Connector/Python is licensed under the terms of the GPLv2
# <http://www.gnu.org/licenses/old-licenses/gpl-2.0.html>, like most
# MySQL Connectors. There are special exceptions to the terms and
# conditions of the GPLv2 as it is applied to this software, see the
# FOSS License Exception
# <http://www.mysql.com/about/legal/licensing/foss-exception.html>.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA

"""Implements parser to parse MySQL option files.
"""

import codecs
import io
import os
import re

from .catch23 import PY2
from .constants import DEFAULT_CONFIGURATION, CNX_POOL_ARGS, CNX_FABRIC_ARGS

# pylint: disable=F0401
if PY2:
    from ConfigParser import SafeConfigParser, MissingSectionHeaderError
else:
    from configparser import (ConfigParser as SafeConfigParser,
                              MissingSectionHeaderError)
# pylint: enable=F0401

DEFAULT_EXTENSIONS = {
    'nt': ('ini', 'cnf'),
    'posix': ('cnf',)
}


def read_option_files(**config):
    """
    Read option files for connection parameters.

    Checks if connection arguments contain option file arguments, and then
    reads option files accordingly.
    """
    if 'option_files' in config:
        try:
            if isinstance(config['option_groups'], str):
                config['option_groups'] = [config['option_groups']]
            groups = config['option_groups']
            del config['option_groups']
        except KeyError:
            groups = ['client', 'connector_python']

        if isinstance(config['option_files'], str):
            config['option_files'] = [config['option_files']]
        option_parser = MySQLOptionsParser(list(config['option_files']),
                                           keep_dashes=False)
        del config['option_files']

        config_from_file = option_parser.get_groups_as_dict_with_priority(
            *groups)
        config_options = {}
        fabric_options = {}
        for group in groups:
            try:
                for option, value in config_from_file[group].items():
                    try:
                        if option == 'socket':
                            option = 'unix_socket'

                        if option in CNX_FABRIC_ARGS:
                            if (option not in fabric_options or
                                    fabric_options[option][1] <= value[1]):
                                fabric_options[option] = value
                            continue

                        if (option not in CNX_POOL_ARGS and
                                option not in ['fabric', 'failover']):
                            # Intentional lookup: raises KeyError for
                            # unknown options.
                            # pylint: disable=W0104
                            DEFAULT_CONFIGURATION[option]
                            # pylint: enable=W0104

                        if (option not in config_options or
                                config_options[option][1] <= value[1]):
                            config_options[option] = value
                    except KeyError:
                        # was "group is 'connector_python'"; equality is the
                        # correct comparison for string values
                        if group == 'connector_python':
                            raise AttributeError("Unsupported argument "
                                                 "'{0}'".format(option))
            except KeyError:
                continue

        not_evaluate = ('password', 'passwd')
        for option, value in config_options.items():
            if option not in config:
                try:
                    if option in not_evaluate:
                        config[option] = value[0]
                    else:
                        config[option] = eval(value[0])  # pylint: disable=W0123
                except (NameError, SyntaxError):
                    config[option] = value[0]

        if fabric_options:
            config['fabric'] = {}
            for option, value in fabric_options.items():
                try:
                    # pylint: disable=W0123
                    config['fabric'][option.split('_', 1)[1]] = eval(value[0])
                    # pylint: enable=W0123
                except (NameError, SyntaxError):
                    config['fabric'][option.split('_', 1)[1]] = value[0]

    return config
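

# ---------------------------------------------------------------------------
# Editor's note: illustrative sketch, not part of the original source.
# read_option_files() merges options from the given files (by default the
# [client] and [connector_python] groups) into the connection keyword
# arguments; keywords passed explicitly are kept as-is. The file path and
# contents below are hypothetical.
# ---------------------------------------------------------------------------
def _example_read_option_files():  # hypothetical demo helper; never called
    import tempfile

    cnf = tempfile.NamedTemporaryFile(suffix='.cnf', delete=False)
    cnf.write(b"[client]\nuser = app\ndatabase = test\n")
    cnf.close()

    # 'user' and 'database' come from the option file; the explicit 'host'
    # keyword is left untouched because it was not read from a file.
    config = read_option_files(option_files=cnf.name, host='127.0.0.1')
    print(config['user'], config['host'])

    os.remove(cnf.name)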


class MySQLOptionsParser(SafeConfigParser):  # pylint: disable=R0901
    """This class implements methods to parse MySQL option files"""

    def __init__(self, files=None,
                 keep_dashes=True):  # pylint: disable=W0231
        """Initialize

        If defaults is True, default option files are read first.

        Raises ValueError if defaults is set to True but defaults files
        cannot be found.
        """

        # Regular expression to allow options with no value (for Python v2.6)
        self.OPTCRE = re.compile(  # pylint: disable=C0103
            r'(?P