oslo.concurrency-4.0.2/0000775000175000017500000000000013643050745015053 5ustar zuulzuul00000000000000oslo.concurrency-4.0.2/.zuul.yaml0000664000175000017500000000045013643050652017010 0ustar zuulzuul00000000000000- project: templates: - check-requirements - lib-forward-testing-python3 - openstack-cover-jobs - openstack-lower-constraints-jobs - openstack-python3-ussuri-jobs - periodic-stable-jobs - publish-openstack-docs-pti - release-notes-jobs-python3 oslo.concurrency-4.0.2/oslo.concurrency.egg-info/0000775000175000017500000000000013643050745022052 5ustar zuulzuul00000000000000oslo.concurrency-4.0.2/oslo.concurrency.egg-info/top_level.txt0000664000175000017500000000002113643050745024575 0ustar zuulzuul00000000000000oslo_concurrency oslo.concurrency-4.0.2/oslo.concurrency.egg-info/SOURCES.txt0000664000175000017500000000475313643050745023747 0ustar zuulzuul00000000000000.coveragerc .mailmap .stestr.conf .zuul.yaml AUTHORS CONTRIBUTING.rst ChangeLog HACKING.rst LICENSE README.rst babel.cfg lower-constraints.txt requirements.txt setup.cfg setup.py test-requirements.txt tox.ini doc/requirements.txt doc/source/conf.py doc/source/index.rst doc/source/admin/index.rst doc/source/configuration/index.rst doc/source/contributor/contributing.rst doc/source/contributor/history.rst doc/source/contributor/index.rst doc/source/install/index.rst doc/source/reference/fixture.lockutils.rst doc/source/reference/index.rst doc/source/reference/lockutils.rst doc/source/reference/opts.rst doc/source/reference/processutils.rst doc/source/reference/watchdog.rst doc/source/user/index.rst oslo.concurrency.egg-info/PKG-INFO oslo.concurrency.egg-info/SOURCES.txt oslo.concurrency.egg-info/dependency_links.txt oslo.concurrency.egg-info/entry_points.txt oslo.concurrency.egg-info/not-zip-safe oslo.concurrency.egg-info/pbr.json oslo.concurrency.egg-info/requires.txt oslo.concurrency.egg-info/top_level.txt oslo_concurrency/__init__.py oslo_concurrency/_i18n.py oslo_concurrency/lockutils.py 
oslo_concurrency/opts.py oslo_concurrency/prlimit.py oslo_concurrency/processutils.py oslo_concurrency/version.py oslo_concurrency/watchdog.py oslo_concurrency/fixture/__init__.py oslo_concurrency/fixture/lockutils.py oslo_concurrency/locale/de/LC_MESSAGES/oslo_concurrency.po oslo_concurrency/locale/en_GB/LC_MESSAGES/oslo_concurrency.po oslo_concurrency/locale/es/LC_MESSAGES/oslo_concurrency.po oslo_concurrency/locale/fr/LC_MESSAGES/oslo_concurrency.po oslo_concurrency/tests/__init__.py oslo_concurrency/tests/unit/__init__.py oslo_concurrency/tests/unit/test_lockutils.py oslo_concurrency/tests/unit/test_lockutils_eventlet.py oslo_concurrency/tests/unit/test_processutils.py releasenotes/notes/add-option-for-fair-locks-b6d660e97683cec6.yaml releasenotes/notes/add-python-exec-kwarg-3a7a0c0849f9bb21.yaml releasenotes/notes/add_reno-3b4ae0789e9c45b4.yaml releasenotes/notes/drop-python27-support-7d837a45dae941bb.yaml releasenotes/source/conf.py releasenotes/source/index.rst releasenotes/source/newton.rst releasenotes/source/ocata.rst releasenotes/source/pike.rst releasenotes/source/queens.rst releasenotes/source/rocky.rst releasenotes/source/stein.rst releasenotes/source/train.rst releasenotes/source/unreleased.rst releasenotes/source/_static/.placeholder releasenotes/source/_templates/.placeholder releasenotes/source/locale/en_GB/LC_MESSAGES/releasenotes.po releasenotes/source/locale/fr/LC_MESSAGES/releasenotes.pooslo.concurrency-4.0.2/oslo.concurrency.egg-info/not-zip-safe0000664000175000017500000000000113643050745024300 0ustar zuulzuul00000000000000 oslo.concurrency-4.0.2/oslo.concurrency.egg-info/PKG-INFO0000664000175000017500000000372313643050745023154 0ustar zuulzuul00000000000000Metadata-Version: 1.2 Name: oslo.concurrency Version: 4.0.2 Summary: Oslo Concurrency library Home-page: https://docs.openstack.org/oslo.concurrency/latest/ Author: OpenStack Author-email: openstack-discuss@lists.openstack.org License: UNKNOWN Description: ======================== Team and 
repository tags ======================== .. image:: https://governance.openstack.org/tc/badges/oslo.concurrency.svg :target: https://governance.openstack.org/tc/reference/tags/index.html .. Change things from this point on ================ oslo.concurrency ================ .. image:: https://img.shields.io/pypi/v/oslo.concurrency.svg :target: https://pypi.org/project/oslo.concurrency/ :alt: Latest Version The oslo.concurrency library has utilities for safely running multi-thread, multi-process applications using locking mechanisms and for running external processes. * Free software: Apache license * Documentation: https://docs.openstack.org/oslo.concurrency/latest/ * Source: https://opendev.org/openstack/oslo.concurrency * Bugs: https://bugs.launchpad.net/oslo.concurrency * Release Notes: https://docs.openstack.org/releasenotes/oslo.concurrency/ Platform: UNKNOWN Classifier: Environment :: OpenStack Classifier: Intended Audience :: Information Technology Classifier: Intended Audience :: System Administrators Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.6 Classifier: Programming Language :: Python :: 3.7 Classifier: Programming Language :: Python :: 3 :: Only Classifier: Programming Language :: Python :: Implementation :: CPython Requires-Python: >=3.6 oslo.concurrency-4.0.2/oslo.concurrency.egg-info/entry_points.txt0000664000175000017500000000021613643050745025347 0ustar zuulzuul00000000000000[console_scripts] lockutils-wrapper = oslo_concurrency.lockutils:main [oslo.config.opts] oslo.concurrency = oslo_concurrency.opts:list_opts oslo.concurrency-4.0.2/oslo.concurrency.egg-info/dependency_links.txt0000664000175000017500000000000113643050745026120 0ustar zuulzuul00000000000000 
oslo.concurrency-4.0.2/oslo.concurrency.egg-info/requires.txt0000664000175000017500000000015013643050745024446 0ustar zuulzuul00000000000000pbr!=2.1.0,>=2.0.0 oslo.config>=5.2.0 oslo.i18n>=3.15.3 oslo.utils>=3.33.0 six>=1.10.0 fasteners>=0.7.0 oslo.concurrency-4.0.2/oslo.concurrency.egg-info/pbr.json0000664000175000017500000000005613643050745023531 0ustar zuulzuul00000000000000{"git_version": "60a157a", "is_release": true}oslo.concurrency-4.0.2/requirements.txt0000664000175000017500000000060213643050652020332 0ustar zuulzuul00000000000000# The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. pbr!=2.1.0,>=2.0.0 # Apache-2.0 oslo.config>=5.2.0 # Apache-2.0 oslo.i18n>=3.15.3 # Apache-2.0 oslo.utils>=3.33.0 # Apache-2.0 six>=1.10.0 # MIT fasteners>=0.7.0 # Apache-2.0 oslo.concurrency-4.0.2/setup.py0000664000175000017500000000170113643050652016561 0ustar zuulzuul00000000000000# Copyright (c) 2013 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import setuptools # In python < 2.7.4, a lazy loading of package `pbr` will break # setuptools if some other modules registered functions in `atexit`. 
# solution from: http://bugs.python.org/issue15881#msg170215 try: import multiprocessing # noqa except ImportError: pass setuptools.setup( setup_requires=['pbr>=2.0.0'], pbr=True) oslo.concurrency-4.0.2/babel.cfg0000664000175000017500000000002013643050652016566 0ustar zuulzuul00000000000000[python: **.py] oslo.concurrency-4.0.2/LICENSE0000664000175000017500000002363613643050652016067 0ustar zuulzuul00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. 
"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. 
Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. 
oslo.concurrency-4.0.2/.coveragerc0000664000175000017500000000017313643050652017172 0ustar zuulzuul00000000000000[run] branch = True source = oslo_concurrency omit = oslo_concurrency/tests/* [report] ignore_errors = True precision = 2 oslo.concurrency-4.0.2/HACKING.rst0000664000175000017500000000017113643050652016645 0ustar zuulzuul00000000000000Style Commandments ================== Read the OpenStack Style Commandments https://docs.openstack.org/hacking/latest/ oslo.concurrency-4.0.2/PKG-INFO0000664000175000017500000000372313643050745016155 0ustar zuulzuul00000000000000Metadata-Version: 1.2 Name: oslo.concurrency Version: 4.0.2 Summary: Oslo Concurrency library Home-page: https://docs.openstack.org/oslo.concurrency/latest/ Author: OpenStack Author-email: openstack-discuss@lists.openstack.org License: UNKNOWN Description: ======================== Team and repository tags ======================== .. image:: https://governance.openstack.org/tc/badges/oslo.concurrency.svg :target: https://governance.openstack.org/tc/reference/tags/index.html .. Change things from this point on ================ oslo.concurrency ================ .. image:: https://img.shields.io/pypi/v/oslo.concurrency.svg :target: https://pypi.org/project/oslo.concurrency/ :alt: Latest Version The oslo.concurrency library has utilities for safely running multi-thread, multi-process applications using locking mechanisms and for running external processes.
* Free software: Apache license * Documentation: https://docs.openstack.org/oslo.concurrency/latest/ * Source: https://opendev.org/openstack/oslo.concurrency * Bugs: https://bugs.launchpad.net/oslo.concurrency * Release Notes: https://docs.openstack.org/releasenotes/oslo.concurrency/ Platform: UNKNOWN Classifier: Environment :: OpenStack Classifier: Intended Audience :: Information Technology Classifier: Intended Audience :: System Administrators Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.6 Classifier: Programming Language :: Python :: 3.7 Classifier: Programming Language :: Python :: 3 :: Only Classifier: Programming Language :: Python :: Implementation :: CPython Requires-Python: >=3.6 oslo.concurrency-4.0.2/releasenotes/0000775000175000017500000000000013643050745017544 5ustar zuulzuul00000000000000oslo.concurrency-4.0.2/releasenotes/source/0000775000175000017500000000000013643050745021044 5ustar zuulzuul00000000000000oslo.concurrency-4.0.2/releasenotes/source/conf.py0000664000175000017500000002110513643050652022337 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. # # This file is execfile()d with the current directory set to its # containing dir. 
# # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. # sys.path.insert(0, os.path.abspath('.')) # -- General configuration ------------------------------------------------ # If your documentation needs a minimal Sphinx version, state it here. # needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. extensions = [ 'openstackdocstheme', 'reno.sphinxext', ] # openstackdocstheme options repository_name = 'openstack/oslo.concurrency' bug_project = 'oslo.concurrency' bug_tag = '' # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. # source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. copyright = u'2016, oslo.concurrency Developers' # Release notes do not need a version in the title, they span # multiple versions. # The full version, including alpha/beta/rc tags. release = '' # The short X.Y version. version = '' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. # language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: # today = '' # Else, today_fmt is used as the format for a strftime call. # today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files.
exclude_patterns = [] # The reST default role (used for this markup: `text`) to use for all # documents. # default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. # add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). # add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. # show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. # modindex_common_prefix = [] # If true, keep warnings as "system message" paragraphs in the built documents. # keep_warnings = False # -- Options for HTML output ---------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'openstackdocs' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. # html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. # html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". # html_title = None # A shorter title for the navigation bar. Default is the same as html_title. # html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. # html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. # html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. 
They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # Add any extra paths that contain custom files (such as robots.txt or # .htaccess) here, relative to this directory. These files are copied # directly to the root of the documentation. # html_extra_path = [] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. # html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. # html_use_smartypants = True # Custom sidebar templates, maps document names to template names. # html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. # html_additional_pages = {} # If false, no module index is generated. # html_domain_indices = True # If false, no index is generated. # html_use_index = True # If true, the index is split into individual pages for each letter. # html_split_index = False # If true, links to the reST sources are added to the pages. # html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. # html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. # html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. # html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). # html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'oslo.concurrencyReleaseNotesDoc' # -- Options for LaTeX output --------------------------------------------- # Grouping the document tree into LaTeX files. 
List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [ ('index', 'oslo.concurrencyReleaseNotes.tex', u'oslo.concurrency Release Notes Documentation', u'oslo.concurrency Developers', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. # latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. # latex_use_parts = False # If true, show page references after internal links. # latex_show_pagerefs = False # If true, show URL addresses after external links. # latex_show_urls = False # Documents to append as an appendix to all manuals. # latex_appendices = [] # If false, no module index is generated. # latex_domain_indices = True # -- Options for manual page output --------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'oslo.concurrencyReleaseNotes', u'oslo.concurrency Release Notes Documentation', [u'oslo.concurrency Developers'], 1) ] # If true, show URL addresses after external links. # man_show_urls = False # -- Options for Texinfo output ------------------------------------------- # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'oslo.concurrencyReleaseNotes', u'oslo.concurrency Release Notes Documentation', u'oslo.concurrency Developers', 'oslo.concurrencyReleaseNotes', 'One line description of project.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. # texinfo_appendices = [] # If false, no module index is generated. # texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. 
# texinfo_show_urls = 'footnote' # If true, do not generate a @detailmenu in the "Top" node's menu. # texinfo_no_detailmenu = False # -- Options for Internationalization output ------------------------------ locale_dirs = ['locale/'] oslo.concurrency-4.0.2/releasenotes/source/train.rst0000664000175000017500000000017613643050652022714 0ustar zuulzuul00000000000000========================== Train Series Release Notes ========================== .. release-notes:: :branch: stable/train oslo.concurrency-4.0.2/releasenotes/source/rocky.rst0000664000175000017500000000022113643050652022715 0ustar zuulzuul00000000000000=================================== Rocky Series Release Notes =================================== .. release-notes:: :branch: stable/rocky oslo.concurrency-4.0.2/releasenotes/source/index.rst0000664000175000017500000000033113643050652022677 0ustar zuulzuul00000000000000================================ oslo.concurrency Release Notes ================================ .. toctree:: :maxdepth: 1 unreleased train stein rocky queens pike ocata newton oslo.concurrency-4.0.2/releasenotes/source/locale/0000775000175000017500000000000013643050745022303 5ustar zuulzuul00000000000000oslo.concurrency-4.0.2/releasenotes/source/locale/en_GB/0000775000175000017500000000000013643050745023255 5ustar zuulzuul00000000000000oslo.concurrency-4.0.2/releasenotes/source/locale/en_GB/LC_MESSAGES/0000775000175000017500000000000013643050745025042 5ustar zuulzuul00000000000000oslo.concurrency-4.0.2/releasenotes/source/locale/en_GB/LC_MESSAGES/releasenotes.po0000664000175000017500000000341213643050652030070 0ustar zuulzuul00000000000000# Andi Chandler , 2016. #zanata # Andi Chandler , 2017. #zanata # Andi Chandler , 2018. 
#zanata msgid "" msgstr "" "Project-Id-Version: oslo.concurrency\n" "Report-Msgid-Bugs-To: \n" "POT-Creation-Date: 2018-08-27 12:03+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2018-09-16 09:03+0000\n" "Last-Translator: Andi Chandler \n" "Language-Team: English (United Kingdom)\n" "Language: en_GB\n" "X-Generator: Zanata 4.3.3\n" "Plural-Forms: nplurals=2; plural=(n != 1)\n" msgid "3.10.0" msgstr "3.10.0" msgid "3.25.0" msgstr "3.25.0" msgid "" "A new kwarg, ``python_exec`` is added to the execute() function in the " "processutils module. This option is used to specify the path to the python " "executable to use for prlimits enforcement." msgstr "" "A new kwarg, ``python_exec`` is added to the execute() function in the " "processutils module. This option is used to specify the path to the Python " "executable to use for prlimits enforcement." msgid "New Features" msgstr "New Features" msgid "Newton Series Release Notes" msgstr "Newton Series Release Notes" msgid "Ocata Series Release Notes" msgstr "Ocata Series Release Notes" msgid "Other Notes" msgstr "Other Notes" msgid "Pike Series Release Notes" msgstr "Pike Series Release Notes" msgid "Queens Series Release Notes" msgstr "Queens Series Release Notes" msgid "Rocky Series Release Notes" msgstr "Rocky Series Release Notes" msgid "Switch to reno for managing release notes." msgstr "Switch to reno for managing release notes." 
msgid "Unreleased Release Notes" msgstr "Unreleased Release Notes" msgid "oslo.concurrency Release Notes" msgstr "oslo.concurrency Release Notes" oslo.concurrency-4.0.2/releasenotes/source/locale/fr/0000775000175000017500000000000013643050745022712 5ustar zuulzuul00000000000000oslo.concurrency-4.0.2/releasenotes/source/locale/fr/LC_MESSAGES/0000775000175000017500000000000013643050745024477 5ustar zuulzuul00000000000000oslo.concurrency-4.0.2/releasenotes/source/locale/fr/LC_MESSAGES/releasenotes.po0000664000175000017500000000172713643050652027534 0ustar zuulzuul00000000000000# Gérald LONLAS , 2016. #zanata msgid "" msgstr "" "Project-Id-Version: oslo.concurrency Release Notes 3.15.1\n" "Report-Msgid-Bugs-To: \n" "POT-Creation-Date: 2016-10-25 16:33+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2016-10-22 05:58+0000\n" "Last-Translator: Gérald LONLAS \n" "Language-Team: French\n" "Language: fr\n" "X-Generator: Zanata 3.7.3\n" "Plural-Forms: nplurals=2; plural=(n > 1)\n" msgid "3.10.0" msgstr "3.10.0" msgid "Newton Series Release Notes" msgstr "Note de release pour Newton" msgid "Other Notes" msgstr "Autres notes" msgid "Switch to reno for managing release notes." msgstr "Commence à utiliser reno pour la gestion des notes de release" msgid "Unreleased Release Notes" msgstr "Note de release pour les changements non déployées" msgid "oslo.concurrency Release Notes" msgstr "Note de release pour oslo.concurrency" oslo.concurrency-4.0.2/releasenotes/source/ocata.rst0000664000175000017500000000023013643050652022655 0ustar zuulzuul00000000000000=================================== Ocata Series Release Notes =================================== .. 
release-notes:: :branch: origin/stable/ocata oslo.concurrency-4.0.2/releasenotes/source/newton.rst0000664000175000017500000000021613643050652023104 0ustar zuulzuul00000000000000============================= Newton Series Release Notes ============================= .. release-notes:: :branch: origin/stable/newton oslo.concurrency-4.0.2/releasenotes/source/_static/0000775000175000017500000000000013643050745022472 5ustar zuulzuul00000000000000oslo.concurrency-4.0.2/releasenotes/source/_static/.placeholder0000664000175000017500000000000013643050652024740 0ustar zuulzuul00000000000000oslo.concurrency-4.0.2/releasenotes/source/pike.rst0000664000175000017500000000021713643050652022523 0ustar zuulzuul00000000000000=================================== Pike Series Release Notes =================================== .. release-notes:: :branch: stable/pike oslo.concurrency-4.0.2/releasenotes/source/queens.rst0000664000175000017500000000022313643050652023070 0ustar zuulzuul00000000000000=================================== Queens Series Release Notes =================================== .. release-notes:: :branch: stable/queens oslo.concurrency-4.0.2/releasenotes/source/stein.rst0000664000175000017500000000022113643050652022710 0ustar zuulzuul00000000000000=================================== Stein Series Release Notes =================================== .. release-notes:: :branch: stable/stein oslo.concurrency-4.0.2/releasenotes/source/_templates/0000775000175000017500000000000013643050745023201 5ustar zuulzuul00000000000000oslo.concurrency-4.0.2/releasenotes/source/_templates/.placeholder0000664000175000017500000000000013643050652025447 0ustar zuulzuul00000000000000oslo.concurrency-4.0.2/releasenotes/source/unreleased.rst0000664000175000017500000000014413643050652023721 0ustar zuulzuul00000000000000========================== Unreleased Release Notes ========================== .. 
release-notes:: oslo.concurrency-4.0.2/releasenotes/notes/0000775000175000017500000000000013643050745020674 5ustar zuulzuul00000000000000oslo.concurrency-4.0.2/releasenotes/notes/add-option-for-fair-locks-b6d660e97683cec6.yaml0000664000175000017500000000134513643050652030666 0ustar zuulzuul00000000000000--- prelude: > This release includes optional support for fair locks. When fair locks are specified, blocking waiters will acquire the lock in the order that they blocked. features: - | We now have optional support for ``fair`` locks. When fair locks are specified, blocking waiters will acquire the lock in the order that they blocked. This can be useful to ensure that existing blocked waiters do not wait indefinitely in the face of large numbers of new attempts to acquire the lock. When specifying locks as both ``external`` and ``fair``, the ordering *within* a given process will be fair, but the ordering *between* processes will be determined by the behaviour of the underlying OS. oslo.concurrency-4.0.2/releasenotes/notes/add-python-exec-kwarg-3a7a0c0849f9bb21.yaml0000664000175000017500000000033013643050652030062 0ustar zuulzuul00000000000000--- features: - A new kwarg, ``python_exec`` is added to the execute() function in the processutils module. This option is used to specify the path to the python executable to use for prlimits enforcement. oslo.concurrency-4.0.2/releasenotes/notes/drop-python27-support-7d837a45dae941bb.yaml0000664000175000017500000000017113643050652030222 0ustar zuulzuul00000000000000--- upgrade: - | Python 2.7 is no longer supported. The minimum supported version of Python is now Python 3.6. 
oslo.concurrency-4.0.2/releasenotes/notes/add_reno-3b4ae0789e9c45b4.yaml0000664000175000017500000000007113643050652025552 0ustar zuulzuul00000000000000--- other: - Switch to reno for managing release notes.oslo.concurrency-4.0.2/lower-constraints.txt0000664000175000017500000000215513643050652021311 0ustar zuulzuul00000000000000alabaster==0.7.10 appdirs==1.3.0 Babel==2.3.4 bandit==1.1.0 coverage==4.0 debtcollector==1.2.0 docutils==0.11 dulwich==0.15.0 enum34==1.0.4;python_version=='2.7' or python_version=='2.6' or python_version=='3.3' # BSD eventlet==0.18.2 extras==1.0.0 fasteners==0.7.0 fixtures==3.0.0 flake8==2.5.5 futures==3.0.0;python_version=='2.7' or python_version=='2.6' # BSD gitdb==0.6.4 GitPython==1.0.1 greenlet==0.4.10 hacking==0.12.0 imagesize==0.7.1 iso8601==0.1.11 Jinja2==2.10 keystoneauth1==3.4.0 linecache2==1.0.0 MarkupSafe==1.0 mccabe==0.2.1 monotonic==0.6 mox3==0.20.0 netaddr==0.7.18 netifaces==0.10.4 openstackdocstheme==1.20.0 os-client-config==1.28.0 oslo.config==5.2.0 oslo.i18n==3.15.3 oslo.utils==3.33.0 oslotest==3.2.0 pbr==2.0.0 pep8==1.5.7 pyflakes==0.8.1 Pygments==2.2.0 pyparsing==2.1.0 python-mimeparse==1.6.0 python-subunit==1.0.0 pytz==2013.6 PyYAML==3.12 reno==2.5.0 requests==2.14.2 requestsexceptions==1.2.0 rfc3986==0.3.1 six==1.10.0 smmap==0.9.0 snowballstemmer==1.2.1 Sphinx==1.8.0 sphinxcontrib-websupport==1.0.1 stevedore==1.20.0 stestr==2.0.0 testtools==2.2.0 traceback2==1.4.0 unittest2==1.1.0 wrapt==1.7.0 oslo.concurrency-4.0.2/setup.cfg0000664000175000017500000000247413643050745016703 0ustar zuulzuul00000000000000[metadata] name = oslo.concurrency summary = Oslo Concurrency library description-file = README.rst author = OpenStack author-email = openstack-discuss@lists.openstack.org home-page = https://docs.openstack.org/oslo.concurrency/latest/ python-requires = >=3.6 classifier = Environment :: OpenStack Intended Audience :: Information Technology Intended Audience :: System Administrators License :: OSI Approved :: Apache 
Software License Operating System :: POSIX :: Linux Programming Language :: Python Programming Language :: Python :: 3 Programming Language :: Python :: 3.6 Programming Language :: Python :: 3.7 Programming Language :: Python :: 3 :: Only Programming Language :: Python :: Implementation :: CPython [files] packages = oslo_concurrency [entry_points] oslo.config.opts = oslo.concurrency = oslo_concurrency.opts:list_opts console_scripts = lockutils-wrapper = oslo_concurrency.lockutils:main [compile_catalog] directory = oslo_concurrency/locale domain = oslo_concurrency [update_catalog] domain = oslo_concurrency output_dir = oslo_concurrency/locale input_file = oslo_concurrency/locale/oslo_concurrency.pot [extract_messages] keywords = _ gettext ngettext l_ lazy_gettext mapping_file = babel.cfg output_file = oslo_concurrency/locale/oslo_concurrency.pot [egg_info] tag_build = tag_date = 0 oslo.concurrency-4.0.2/AUTHORS0000664000175000017500000001020313643050745016117 0ustar zuulzuul00000000000000Alex Gaynor Alexander Gorodnev Amrith Kumar Andreas Jaeger Andreas Jaeger Angus Lees Ann Kamyshnikova Arata Notsu Ben Nemec Ben Nemec Brad Pokorny Brant Knudson Brian D. Elliott Brian Rosmaita Chang Bo Guo ChangBo Guo(gcb) Chris Friesen Christian Berendt Chuck Short Claudiu Belu Corey Bryant Csaba Henk Dan Prince Daniel P. Berrange Davanum Srinivas Davanum Srinivas David Ripton Denis Buliga Dina Belova Dirk Mueller Doug Hellmann Doug Hellmann Eric Fried Eric Windisch Flaper Fesp Flavio Percoco Gary Kotton Gary Kotton Gevorg Davoian Ghanshyam Mann Gorka Eguileor Hervé Beraud IWAMOTO Toshihiro Ian Cordasco Ihar Hrachyshka James Carey Jason Kölker Jay S. 
Bryant Jeremy Stanley Joe Gordon Joe Gordon Joe Heck Johannes Erdfelt Joshua Harlow Joshua Harlow Joshua Harlow Julien Danjou Kenneth Giusti Kirill Bespalov Lucian Petrut Mark McLoughlin Matt Riedemann Matthew Treinish Matthew Treinish Michael Still Monty Taylor Nikhil Manchanda Noorul Islam K M OpenStack Release Bot Pedro Navarro Perez Roman Prykhodchenko Ronald Bradford Russell Bryant Salvatore Orlando Sean Dague Sean M. Collins Sean McGinnis Sergey Kraynev Sergey Lukjanov Shawn Boyette Stephen Finucane Steve Kowalik Steve Martinelli Thomas Bechtold Thomas Herve Tony Breeds Victor Sergeyev Victor Stinner Victor Stinner Vieri <15050873171@163.com> Vu Cong Tuan Wu Wenxiang Yuriy Taraday ZhiQiang Fan ZhijunWei ZhongShengping Zhongyue Luo caoyuan gecong1973 gengchc2 howardlee jacky06 jichenjc loooosy melissaml pengyuesheng prashkre ricolin shangxiaobj shupeng <15050873171@163.com> vponomaryov wangqi yanheven zhangsong oslo.concurrency-4.0.2/test-requirements.txt0000664000175000017500000000073113643050652021312 0ustar zuulzuul00000000000000# The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. hacking>=3.0,<3.1.0 # Apache-2.0 oslotest>=3.2.0 # Apache-2.0 coverage!=4.4,>=4.0 # Apache-2.0 fixtures>=3.0.0 # Apache-2.0/BSD stestr>=2.0.0 # Apache-2.0 eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT # Bandit security code scanner bandit>=1.1.0,<1.6.0 # Apache-2.0 oslo.concurrency-4.0.2/.stestr.conf0000664000175000017500000000007613643050652017324 0ustar zuulzuul00000000000000[DEFAULT] test_path=./oslo_concurrency/tests/unit top_path=./ oslo.concurrency-4.0.2/README.rst0000664000175000017500000000161613643050652016543 0ustar zuulzuul00000000000000======================== Team and repository tags ======================== .. 
image:: https://governance.openstack.org/tc/badges/oslo.concurrency.svg :target: https://governance.openstack.org/tc/reference/tags/index.html .. Change things from this point on ================ oslo.concurrency ================ .. image:: https://img.shields.io/pypi/v/oslo.concurrency.svg :target: https://pypi.org/project/oslo.concurrency/ :alt: Latest Version The oslo.concurrency library has utilities for safely running multi-thread, multi-process applications using locking mechanisms and for running external processes. * Free software: Apache license * Documentation: https://docs.openstack.org/oslo.concurrency/latest/ * Source: https://opendev.org/openstack/oslo.concurrency * Bugs: https://bugs.launchpad.net/oslo.concurrency * Release Notes: https://docs.openstack.org/releasenotes/oslo.concurrency/ oslo.concurrency-4.0.2/ChangeLog0000664000175000017500000004365513643050745016642 0ustar zuulzuul00000000000000CHANGES ======= 4.0.2 ----- * Use unittest.mock instead of third party mock * Update hacking for Python3 4.0.1 ----- * trivial: Cleanup tox.ini * ignore reno builds artifacts * remove outdated header * Stop to build universal wheel 4.0.0 ----- * Drop python 2.7 support and testing * Stop configuring install\_command in tox * tox: Trivial cleanup * Fix remove\_lock test 3.31.0 ------ * Spiff up docs for \*\_with\_prefix * Switch to Ussuri jobs * tox: Keeping going with docs * Document management and history of lock files * Bump the openstackdocstheme extension to 1.20 * Blacklist sphinx 2.1.0 (autodoc bug) * Update the constraints url * Update master for stable/train * Add lock\_with\_prefix convenience utility * Some test cleanup 3.30.0 ------ * Add Python 3 Train unit tests * Cap Bandit below 1.6.0 and update Sphinx requirement * Replace git.openstack.org URLs with opendev.org URLs * OpenDev Migration Patch * Dropping the py35 testing * Follow the new PTI for document build * Update master for stable/stein 3.29.1 ------ * add python 3.7 unit test job * 
Change python3.5 job to python3.7 job on Stein+ * Update hacking version * Stop using setup.py build\_sphinx * Update mailinglist from dev to discuss 3.29.0 ------ * Add support for fair locks * Clean up .gitignore references to personal tools * Don't quote {posargs} in tox.ini * Always build universal wheels 3.28.1 ------ * Imported Translations from Zanata * Use templates for cover and lower-constraints * ignore warning from bandit for using shell= * add lib-forward-testing-python3 test job * add python 3.6 unit test job * Remove PyPI downloads * import zuul job settings from project-config * Reorganize that 'Release Notes' in README * Update reno for stable/rocky * Switch to stestr * Add release notes link to README * fix tox python3 overrides * Remove stale pip-missing-reqs tox test * Trivial: Update pypi url to new url 3.27.0 ------ * set default python to python3 * Switch pep8 job to python 3 * fix lower constraints and uncap eventlet * add lower-constraints job * Updated from global requirements * Updated from global requirements 3.26.0 ------ * Updated from global requirements * Updated from global requirements * Imported Translations from Zanata * Mask passwords only when command execution fails * Imported Translations from Zanata * Update reno for stable/queens * Updated from global requirements * Imported Translations from Zanata * Updated from global requirements * Update doc links in CONTRIBUTING.rst and README.rst * Updated from global requirements 3.25.0 ------ * Add python\_exec kwarg to processutils.execute() * Updated from global requirements * add bandit to pep8 job 3.24.0 ------ * Remove -U from pip install * Avoid tox\_install.sh for constraints support * Updated from global requirements * Remove setting of version/release from releasenotes * Updated from global requirements * Updated from global requirements * Updated from global requirements * Imported Translations from Zanata 3.23.0 ------ * Updated from global requirements * Updated from 
global requirements * Updated from global requirements 3.22.0 ------ * Minor correction to docstrings * Updated from global requirements * Updated from global requirements * Windows: ensure exec calls don't block other greenthreads * Update reno for stable/pike * Updated from global requirements * Add debug log to indicate when external lock is taken 3.21.0 ------ * Update URLs in documents according to document migration * Imported Translations from Zanata * Updated from global requirements * switch from oslosphinx to openstackdocstheme * turn on warning-is-error for sphinx * rearrange existing documentation to follow the new layout standard * Remove log translations * Check reStructuredText documents for common style issues * Updated from global requirements * Check for SubprocessError by name on Python 3.x 3.20.0 ------ * Updated from global requirements * Using fixtures.MockPatch instead of mockpatch.Patch 3.19.0 ------ * Updated from global requirements * [Fix gate]Update test requirement * Updated from global requirements * Remove support for py34 * pbr.version.VersionInfo needs package name (oslo.xyz and not oslo\_xyz) * Update reno for stable/ocata 3.18.0 ------ * Automatically convert process\_input to bytes * Add Constraints support * Show team and repo badges on README 3.16.0 ------ * Updated from global requirements * Updated from global requirements * Updated from global requirements * Imported Translations from Zanata * Remove unnecessary requirements * [TrivialFix] Replace 'assertTrue(a in b)' with 'assertIn(a, b)' 3.15.0 ------ * Changed the home-page link * Change assertTrue(isinstance()) by optimal assert * Enable release notes translation * Ignore prlimit argument on Windows * Updated from global requirements * Updated from global requirements * Update reno for stable/newton 3.14.0 ------ * Updated from global requirements * Fix external lock tests on Windows 3.13.0 ------ * Updated from global requirements * Fix parameters of assertEqual are 
misplaced * Add Python 3.5 classifier and venv 3.12.0 ------ * Updated from global requirements * Imported Translations from Zanata * Updated from global requirements 3.11.0 ------ * Imported Translations from Zanata 3.10.0 ------ * Imported Translations from Zanata * Updated from global requirements * Add reno for releasenotes management 3.9.0 ----- * Add doc/ to pep8 check * Remove unused import statement * Add timeout option to ssh\_execute * Fix wrong import example in docstring * Trivial: ignore openstack/common in flake8 exclude list 3.8.0 ----- * Updated from global requirements * Imported Translations from Zanata * processutils: add support for missing process limits * Remove direct dependency on babel * Updated from global requirements * Updated from global requirements * Updated from global requirements * Add a few usage examples for lockutils * Revert "Use tempfile.tempdir for lock\_path if OSLO\_LOCK\_PATH is not set" * Updated from global requirements * Use tempfile.tempdir for lock\_path if OSLO\_LOCK\_PATH is not set 3.6.0 ----- * Updated from global requirements 3.5.0 ----- * Updated from global requirements * Make ProcessExecutionError picklable * Updated from global requirements 3.4.0 ----- * Update translation setup * Add prlimit parameter to execute() * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Imported Translations from Zanata * Updated from global requirements 3.3.0 ----- * Remove unnecessary package in setup.cfg * Updated from global requirements * Updated from global requirements 3.2.0 ----- * Updated from global requirements * Updated from global requirements * Trival: Remove 'MANIFEST.in' * Add complementary remove lock with prefix function 3.1.0 ----- * Drop python 2.6 support 3.0.0 ----- * Updated from global requirements * Updated from global requirements * Remove python 2.6 classifier * Remove python 2.6 and cleanup tox.ini * Use 
versionadded and versionchanged in doc * Updated from global requirements * Imported Translations from Zanata * Updated from global requirements 2.8.0 ----- * Updated from global requirements 2.7.0 ----- * Fix Tests to run under OSX * Fix coverage configuration and execution * Imported Translations from Zanata * Move 'history' -> release notes section * add auto-generated docs for config options * Change ignore-errors to ignore\_errors * Updated from global requirements * Imported Translations from Zanata * Use int enumerations for log error constants 2.6.0 ----- * Removes unused posix-ipc requirement * Updated from global requirements * Updated from global requirements 2.5.0 ----- * Updated from global requirements * Updated from global requirements * Use oslo\_utils reflection to get 'f' callable name * flake8 - remove unused rules * Imported Translations from Transifex * Updated from global requirements 2.4.0 ----- * Imported Translations from Transifex * Updated from global requirements * Imported Translations from Transifex * Updated from global requirements 2.3.0 ----- * Imported Translations from Transifex * Allow preexec\_fn method for processutils.execute * Updated from global requirements * Use pypi name for requirements.txt * processutils: ensure on\_completion callback is always called * Updated from global requirements * Remove redundant fileutils * Add tox target to find missing requirements 2.2.0 ----- 2.1.0 ----- * Imported Translations from Transifex * Updated from global requirements * Ensure we 'join' on the timer watchdog thread * Use better timing mechanisms instead of time.time() * Updated from global requirements * Add 2 callbacks to processutils.execute() * Updated from global requirements * Fix LockFixture docstring * Updated from global requirements * Switch badges from 'pypip.in' to 'shields.io' * Updated from global requirements * Replace locks and replace with fasteners library provides ones 2.0.0 ----- * Remove oslo namespace package 
1.10.0 ------ * Imported Translations from Transifex * Sync from oslo-incubator * Updated from global requirements * Advertise support for Python3.4 / Remove support for 3.3 * Updated from global requirements * Imported Translations from Transifex * Remove run\_cross\_tests.sh * Updated from global requirements * Updated from global requirements 1.9.0 ----- * Add binary parameter to execute and ssh\_execute * Port processutils to Python 3 * Uncap library requirements for liberty * Move fixtures to test-requirements.txt * Fix test\_as\_root\* tests to work when run as root * Add pypi download + version badges * Standardize setup.cfg summary for oslo libs * Imported Translations from Transifex * Updated from global requirements * Remove tools/run\_cross\_tests.sh from openstack-common.conf 1.8.0 ----- * Switch to non-namespaced module imports * Remove py33 env from default tox list * Add lockutils.get\_lock\_path() function 1.7.0 ----- * Imported Translations from Transifex * Updated from global requirements 1.6.0 ----- * Updated from global requirements * processutils: execute(): fix option incompatibility 1.5.0 ----- * Ability to set working directory * Add eventlet test check to new tests \_\_init\_\_.py * Drop use of namespaced oslo.i18n * Updated from global requirements * Updated from global requirements * Update Oslo imports to remove namespace package 1.4.1 ----- * Revert "Port processutils to Python 3" 0.4.0 ----- * Bump to hacking 0.10 * Updated from global requirements * add watchdog module * Updated from global requirements * make time format for processutils match lockutils * Correct the translation domain for loading messages * Add a reader/writer lock * Don't use ConfigFilter for lockutils * Report import warnings where the import occurs * Port processutils to Python 3 * Activate pep8 check that \_ is imported * Drop requirements-py3.txt * Updated from global requirements * Clean up API documentation * Workflow documentation is now in infra-manual * 
Remove noqa from test files * test compatibility for old imports * Fix bug link in README.rst 0.3.0 ----- * Add external lock fixture * Add a TODO for retrying pull request #20 * Allow the lock delay to be provided * Allow for providing a customized semaphore container * Move locale files to proper place * Flesh out the README * Move out of the oslo namespace package * Improve testing in py3 environment * Only modify autoindex.rst if it exists * Imported Translations from Transifex * lockutils-wrapper cleanup * Don't use variables that aren't initialized 0.2.0 ----- * Imported Translations from Transifex * Use six.wraps * Clean up lockutils logging * Remove unused incubator modules * Improve lock\_path help and documentation * Add pbr to installation requirements 0.1.0 ----- * Updated from global requirements * Imported Translations from Transifex * Updated from global requirements * Updated from global requirements * Remove extraneous vim editor configuration comments * Add deprecated name test case * Make lock\_wrapper private * Support building wheels (PEP-427) * Handle Python 3's O\_CLOEXEC default * Remove hard dep on eventlet * Test with both vanilla and eventlet stdlib * Imported Translations from Transifex * Fix coverage testing * Clean up doc header * Use ConfigFilter for opts * Make lockutils main() a console entry point * Expose lockutils opts to config generator * Add hacking import exception for i18n * Imported Translations from Transifex * provide sane cmd exit reporting * Imported Translations from Transifex * Add lock\_path as param to remove\_external function * Updated from global requirements * Cleanup and adding timing to lockutils logging * Imported Translations from Transifex * Remove oslo-incubator fixture * Break up the logging around the lockfile release/unlock * Always log the releasing, even under failure * Clarify logging in lockutils * Imported Translations from Transifex * Address race in file locking tests * Updated from global 
requirements * Imported Translations from Transifex * Updated from global requirements * Handle a failure on communicate() * Imported Translations from Transifex * Add code/api documentation * Add history file to documentation * Update contributing instructions * Work toward Python 3.4 support and testing * warn against sorting requirements * Log stdout, stderr and command on execute() error * Mask passwords in exceptions and error messages * Imported Translations from Transifex * Address some potential security issues in lockutils * Use file locks by default again * Switch to oslo.i18n in our code * Imported Translations from Transifex * Switch to oslo.utils in our code * Mask passwords in exceptions and error messages * Initial translation setup * Fix docs generation * Make all tests pass * exported from oslo-incubator by graduate.sh * Remove oslo.log from lockutils * lockutils: split tests and run in Python 3 * Fix exception message in openstack.common.processutils.execute * Allow test\_lockutils to run in isolation * Remove \`processutils\` dependency on \`log\` * Don't import fcntl on Windows * Fix broken formatting of processutils.execute log statement * Move nova.utils.cpu\_count() to processutils module * pep8: fixed multiple violations * fixed typos found by RETF rules * Mask passwords that are included in commands * Improve help strings * Remove str() from LOG.\* and exceptions * Fixed several typos * Emit a log statement when releasing internal lock * Allow passing environment variables to execute() * Use oslotest instead of common test module * Remove rendundant parentheses of cfg help strings * Allow external locks to work with threads * Re-enable file-based locking behavior * Use Posix IPC in lockutils * Update log translation domains * Update oslo log messages with translation domains * Move the released file lock to the successful path * Add remove external lock files API in lockutils * Catch OSError in processutils * Use threading.ThreadError 
instead of reraising IOError * Have the interprocess lock follow lock conventions * lockutils: move directory creation in lock class * lockutils: remove lock\_path parameter * lockutils: expand add\_prefix * lockutils: remove local usage * lockutils: do not grab the lock in creators * Remove unused variables * Utilizes assertIsNone and assertIsNotNone * Fix i18n problem in processutils module * lockutils: split code handling internal/external lock * lockutils: fix testcase wrt Semaphore * Use hacking import\_exceptions for gettextutils.\_ * Correct invalid docstrings * Fix violations of H302:import only modules * Fixed misspellings of common words * Trivial: Make vertical white space after license header consistent * Unify different names between Python2/3 with six.moves * Remove vim header * Use six.text\_type instead of unicode function in tests * Adjust import order according to PEP8 imports rule * fix lockutils.lock() to make it thread-safe * Add main() to lockutils that creates temp dir for locks * Allow lockutils to get lock\_path conf from envvar * Correct execute() to check 0 in check\_exit\_code * Replace assertEquals with assertEqual * Move LockFixture into a fixtures module * Fix to properly log when we release a semaphore * Add LockFixture to lockutils * Modify lockutils.py due to dispose of eventlet * Replace using tests.utils part2 * Fix processutils.execute errors on windows * Bump hacking to 0.7.0 * Replace using tests.utils with openstack.common.test * Allow passing a logging level to processutils.execute * BaseException.message is deprecated since Python 2.6 * Fix locking bug * Move synchronized body to a first-class function * Make lock\_file\_prefix optional * Enable H302 hacking check * Enable hacking H404 test * Use param keyword for docstrings * Use Python 3.x compatible octal literal notation * Use Python 3.x compatible except construct * Enable hacking H402 test * python3: python3 binary/text data compatbility * Removes len() on empty 
sequence evaluation * Added convenience APIs for lockutils * Import trycmd and ssh\_execute from nova * Update processutils * Use print\_function \_\_future\_\_ import * Improve Python 3.x compatibility * Replaces standard logging with common logging * Locking edge case when lock\_path does not exist * lockutils: add a failing unit test * lockutils: improve the external locks test * Removes unused imports in the tests module * Fix locking issues in Windows * Fix Copyright Headers - Rename LLC to Foundation * Use oslo-config-2013.1b3 * Emit a warning if RPC calls made with lock * Default lockutils to using a tempdir * Replace direct use of testtools BaseTestCase * Use testtools as test base class * Start adding reusable test fixtures * Fixes import order errors * Log when release file lock * Eliminate sleep in the lockutils test case (across processes) * Disable lockutils test\_synchronized\_externally * Fix import order in openstack/common/lockutils.py * Make project pyflakes clean * updating sphinx documentation * Remove unused greenthread import in lockutils * Move utils.execute to its own module * Fix missing import in lockutils * Move nova's util.synchronized decorator to openstack common oslo.concurrency-4.0.2/tox.ini0000664000175000017500000000321613643050652016365 0ustar zuulzuul00000000000000[tox] minversion = 3.2.0 envlist = py37,pep8 ignore_basepython_conflict = True [testenv] basepython = python3 deps = -c{env:UPPER_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master} -r{toxinidir}/test-requirements.txt # We want to support both vanilla stdlib and eventlet monkey patched whitelist_externals = env commands = env TEST_EVENTLET=0 lockutils-wrapper stestr run --slowest {posargs} env TEST_EVENTLET=1 lockutils-wrapper stestr run --slowest {posargs} [testenv:pep8] deps = {[testenv]deps} commands = flake8 # Run security linter bandit -r oslo_concurrency -x tests -n5 --skip B311,B404,B603,B606 [testenv:venv] commands = {posargs} 
[testenv:docs] whitelist_externals = rm deps = -c{env:UPPER_CONSTRAINTS_FILE:https://opendev.org/openstack/requirements/raw/branch/master/upper-constraints.txt} -r{toxinidir}/doc/requirements.txt commands = rm -fr doc/build sphinx-build -W --keep-going -b html doc/source doc/build/html {posargs} [testenv:cover] setenv = PYTHON=coverage run --source oslo_concurrency --parallel-mode commands = stestr run {posargs} coverage combine coverage html -d cover coverage xml -o cover/coverage.xml [flake8] show-source = True ignore = H405,W504 exclude=.venv,.git,.tox,dist,*lib/python*,*egg,build [hacking] import_exceptions = oslo_concurrency._i18n [testenv:releasenotes] deps = {[testenv:docs]deps} commands = sphinx-build -a -E -W -d releasenotes/build/doctrees --keep-going -b html releasenotes/source releasenotes/build/html [testenv:lower-constraints] deps = -c{toxinidir}/lower-constraints.txt -r{toxinidir}/test-requirements.txt -r{toxinidir}/requirements.txt oslo.concurrency-4.0.2/.mailmap0000664000175000017500000000013013643050652016463 0ustar zuulzuul00000000000000# Format is: # # oslo.concurrency-4.0.2/oslo_concurrency/0000775000175000017500000000000013643050745020441 5ustar zuulzuul00000000000000oslo.concurrency-4.0.2/oslo_concurrency/_i18n.py0000664000175000017500000000145213643050652021730 0ustar zuulzuul00000000000000# Copyright 2014 Mirantis Inc. # # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import oslo_i18n _translators = oslo_i18n.TranslatorFactory(domain='oslo_concurrency') # The primary translation function using the well-known name "_" _ = _translators.primary oslo.concurrency-4.0.2/oslo_concurrency/processutils.py0000664000175000017500000005673713643050652023571 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ System-level utilities and helper functions. """ import functools import logging import multiprocessing import os import random import shlex import signal import sys import time import enum from oslo_utils import encodeutils from oslo_utils import importutils from oslo_utils import strutils from oslo_utils import timeutils import six from oslo_concurrency._i18n import _ # NOTE(bnemec): eventlet doesn't monkey patch subprocess, so we need to # determine the proper subprocess module to use ourselves. I'm using the # time module as the check because that's a monkey patched module we use # in combination with subprocess below, so they need to match. eventlet = importutils.try_import('eventlet') eventlet_patched = eventlet and eventlet.patcher.is_monkey_patched(time) if eventlet_patched: if os.name == 'nt': # subprocess.Popen.communicate will spawn two threads consuming # stdout/stderr when passing data through stdin. We need to make # sure that *native* threads will be used as pipes are blocking # on Windows. # Recent eventlet versions actually do patch subprocess. 
        subprocess = eventlet.patcher.original('subprocess')
        subprocess.threading = eventlet.patcher.original('threading')
    else:
        from eventlet.green import subprocess
        from eventlet import tpool
else:
    import subprocess


LOG = logging.getLogger(__name__)


class InvalidArgumentError(Exception):
    def __init__(self, message=None):
        super(InvalidArgumentError, self).__init__(message)


class UnknownArgumentError(Exception):
    def __init__(self, message=None):
        super(UnknownArgumentError, self).__init__(message)


class ProcessExecutionError(Exception):
    def __init__(self, stdout=None, stderr=None, exit_code=None, cmd=None,
                 description=None):
        super(ProcessExecutionError, self).__init__(
            stdout, stderr, exit_code, cmd, description)
        self.exit_code = exit_code
        self.stderr = stderr
        self.stdout = stdout
        self.cmd = cmd
        self.description = description

    def __str__(self):
        description = self.description
        if description is None:
            description = _("Unexpected error while running command.")

        exit_code = self.exit_code
        if exit_code is None:
            exit_code = '-'

        message = _('%(description)s\n'
                    'Command: %(cmd)s\n'
                    'Exit code: %(exit_code)s\n'
                    'Stdout: %(stdout)r\n'
                    'Stderr: %(stderr)r') % {'description': description,
                                             'cmd': self.cmd,
                                             'exit_code': exit_code,
                                             'stdout': self.stdout,
                                             'stderr': self.stderr}
        return message


class NoRootWrapSpecified(Exception):
    def __init__(self, message=None):
        super(NoRootWrapSpecified, self).__init__(message)


def _subprocess_setup(on_preexec_fn):
    # Python installs a SIGPIPE handler by default. This is usually not what
    # non-Python subprocesses expect.
    signal.signal(signal.SIGPIPE, signal.SIG_DFL)
    if on_preexec_fn:
        on_preexec_fn()


@enum.unique
class LogErrors(enum.IntEnum):
    """Enumerations that affect if stdout and stderr are logged on error.

    .. versionadded:: 2.7
    """

    #: No logging on errors.
    DEFAULT = 0

    #: Log an error on **each** occurrence of an error.
    ALL = 1

    #: Log an error on the last attempt that errored **only**.
    FINAL = 2


# Retain these aliases for a number of releases...
LOG_ALL_ERRORS = LogErrors.ALL LOG_FINAL_ERROR = LogErrors.FINAL LOG_DEFAULT_ERROR = LogErrors.DEFAULT class ProcessLimits(object): """Resource limits on a process. Attributes: * address_space: Address space limit in bytes * core_file_size: Core file size limit in bytes * cpu_time: CPU time limit in seconds * data_size: Data size limit in bytes * file_size: File size limit in bytes * memory_locked: Locked memory limit in bytes * number_files: Maximum number of open files * number_processes: Maximum number of processes * resident_set_size: Maximum Resident Set Size (RSS) in bytes * stack_size: Stack size limit in bytes This object can be used for the *prlimit* parameter of :func:`execute`. """ _LIMITS = { "address_space": "--as", "core_file_size": "--core", "cpu_time": "--cpu", "data_size": "--data", "file_size": "--fsize", "memory_locked": "--memlock", "number_files": "--nofile", "number_processes": "--nproc", "resident_set_size": "--rss", "stack_size": "--stack", } def __init__(self, **kw): for limit in self._LIMITS: setattr(self, limit, kw.pop(limit, None)) if kw: raise ValueError("invalid limits: %s" % ', '.join(sorted(kw.keys()))) def prlimit_args(self): """Create a list of arguments for the prlimit command line.""" args = [] for limit in self._LIMITS: val = getattr(self, limit) if val is not None: args.append("%s=%s" % (self._LIMITS[limit], val)) return args def execute(*cmd, **kwargs): """Helper method to shell out and execute a command through subprocess. Allows optional retry. :param cmd: Passed to subprocess.Popen. :type cmd: string :param cwd: Set the current working directory :type cwd: string :param process_input: Send to opened process. :type process_input: string or bytes :param env_variables: Environment variables and their values that will be set for the process. :type env_variables: dict :param check_exit_code: Single bool, int, or list of allowed exit codes. Defaults to [0]. 
                            Raise :class:`ProcessExecutionError` unless
                            program exits with one of these codes.
    :type check_exit_code: boolean, int, or [int]
    :param delay_on_retry: True | False. Defaults to True. If set to True,
                           wait a short amount of time before retrying.
    :type delay_on_retry: boolean
    :param attempts: How many times to retry cmd.
    :type attempts: int
    :param run_as_root: True | False. Defaults to False. If set to True,
                        the command is prefixed by the command specified
                        in the root_helper kwarg.
    :type run_as_root: boolean
    :param root_helper: command to prefix to commands called with
                        run_as_root=True
    :type root_helper: string
    :param shell: whether or not there should be a shell used to
                  execute this command. Defaults to false.
    :type shell: boolean
    :param loglevel: log level for execute commands.
    :type loglevel: int. (Should be logging.DEBUG or logging.INFO)
    :param log_errors: Should stdout and stderr be logged on error?
                       Possible values are
                       :py:attr:`~.LogErrors.DEFAULT`,
                       :py:attr:`~.LogErrors.FINAL`, or
                       :py:attr:`~.LogErrors.ALL`. Note that the
                       values :py:attr:`~.LogErrors.FINAL` and
                       :py:attr:`~.LogErrors.ALL` are **only** relevant when
                       multiple attempts of command execution are requested
                       using the ``attempts`` parameter.
    :type log_errors: :py:class:`~.LogErrors`
    :param binary: On Python 3, return stdout and stderr as bytes if
                   binary is True, as Unicode otherwise.
    :type binary: boolean
    :param on_execute: This function will be called upon process creation
                       with the object as an argument. The purpose of this
                       is to allow the caller of `processutils.execute` to
                       track process creation asynchronously.
    :type on_execute: function(:class:`subprocess.Popen`)
    :param on_completion: This function will be called upon process
                          completion with the object as an argument. The
                          purpose of this is to allow the caller of
                          `processutils.execute` to track process completion
                          asynchronously.
:type on_completion: function(:class:`subprocess.Popen`) :param preexec_fn: This function will be called in the child process just before the child is executed. WARNING: On windows, we silently drop this preexec_fn as it is not supported by subprocess.Popen on windows (throws a ValueError) :type preexec_fn: function() :param prlimit: Set resource limits on the child process. See below for a detailed description. :type prlimit: :class:`ProcessLimits` :param python_exec: The python executable to use for enforcing prlimits. If this is not set it will default to use sys.executable. :type python_exec: string :returns: (stdout, stderr) from process execution :raises: :class:`UnknownArgumentError` on receiving unknown arguments :raises: :class:`ProcessExecutionError` :raises: :class:`OSError` The *prlimit* parameter can be used to set resource limits on the child process. If this parameter is used, the child process will be spawned by a wrapper process which will set limits before spawning the command. .. versionchanged:: 3.17 *process_input* can now be either bytes or string on python3. .. versionchanged:: 3.4 Added *prlimit* optional parameter. .. versionchanged:: 1.5 Added *cwd* optional parameter. .. versionchanged:: 1.9 Added *binary* optional parameter. On Python 3, *stdout* and *stderr* are now returned as Unicode strings by default, or bytes if *binary* is true. .. versionchanged:: 2.1 Added *on_execute* and *on_completion* optional parameters. .. versionchanged:: 2.3 Added *preexec_fn* optional parameter. 
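The retry and exit-code handling documented above can be condensed into a short, stdlib-only sketch. ``mini_execute`` is a hypothetical stand-in that mirrors the logic of ``execute()``; it is not the real implementation, which additionally handles root helpers, prlimit wrapping, password masking and eventlet-patched subprocess modules:

```python
import random
import subprocess
import sys
import time


def mini_execute(*cmd, check_exit_code=(0,), attempts=1, delay_on_retry=True):
    # Mirrors execute(): retry the command, and fail unless the exit
    # code is one of the allowed values.
    while attempts > 0:
        attempts -= 1
        proc = subprocess.run(cmd, capture_output=True)
        if proc.returncode in check_exit_code:
            return proc.stdout.decode(), proc.stderr.decode()
        if not attempts:
            raise RuntimeError('exit code %d not allowed' % proc.returncode)
        if delay_on_retry:
            # Same 0.2-2.0 second random backoff the real execute() uses.
            time.sleep(random.randint(20, 200) / 100.0)


out, err = mini_execute(sys.executable, '-c', 'print("hello")')
```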
""" cwd = kwargs.pop('cwd', None) process_input = kwargs.pop('process_input', None) if process_input is not None: process_input = encodeutils.to_utf8(process_input) env_variables = kwargs.pop('env_variables', None) check_exit_code = kwargs.pop('check_exit_code', [0]) ignore_exit_code = False delay_on_retry = kwargs.pop('delay_on_retry', True) attempts = kwargs.pop('attempts', 1) run_as_root = kwargs.pop('run_as_root', False) root_helper = kwargs.pop('root_helper', '') shell = kwargs.pop('shell', False) loglevel = kwargs.pop('loglevel', logging.DEBUG) log_errors = kwargs.pop('log_errors', None) if log_errors is None: log_errors = LogErrors.DEFAULT binary = kwargs.pop('binary', False) on_execute = kwargs.pop('on_execute', None) on_completion = kwargs.pop('on_completion', None) preexec_fn = kwargs.pop('preexec_fn', None) prlimit = kwargs.pop('prlimit', None) python_exec = kwargs.pop('python_exec', sys.executable) if isinstance(check_exit_code, bool): ignore_exit_code = not check_exit_code check_exit_code = [0] elif isinstance(check_exit_code, int): check_exit_code = [check_exit_code] if kwargs: raise UnknownArgumentError(_('Got unknown keyword args: %r') % kwargs) if isinstance(log_errors, six.integer_types): log_errors = LogErrors(log_errors) if not isinstance(log_errors, LogErrors): raise InvalidArgumentError(_('Got invalid arg log_errors: %r') % log_errors) if run_as_root and hasattr(os, 'geteuid') and os.geteuid() != 0: if not root_helper: raise NoRootWrapSpecified( message=_('Command requested root, but did not ' 'specify a root helper.')) if shell: # root helper has to be injected into the command string cmd = [' '.join((root_helper, cmd[0]))] + list(cmd[1:]) else: # root helper has to be tokenized into argument list cmd = shlex.split(root_helper) + list(cmd) cmd = [str(c) for c in cmd] if prlimit: if os.name == 'nt': LOG.log(loglevel, _('Process resource limits are ignored as ' 'this feature is not supported on Windows.')) else: args = [python_exec, '-m', 
'oslo_concurrency.prlimit'] args.extend(prlimit.prlimit_args()) args.append('--') args.extend(cmd) cmd = args sanitized_cmd = strutils.mask_password(' '.join(cmd)) watch = timeutils.StopWatch() while attempts > 0: attempts -= 1 watch.restart() try: LOG.log(loglevel, _('Running cmd (subprocess): %s'), sanitized_cmd) _PIPE = subprocess.PIPE # pylint: disable=E1101 if os.name == 'nt': on_preexec_fn = None close_fds = False else: on_preexec_fn = functools.partial(_subprocess_setup, preexec_fn) close_fds = True obj = subprocess.Popen(cmd, stdin=_PIPE, stdout=_PIPE, stderr=_PIPE, close_fds=close_fds, preexec_fn=on_preexec_fn, shell=shell, # nosec:B604 cwd=cwd, env=env_variables) if on_execute: on_execute(obj) try: # eventlet.green.subprocess is not really greenthread friendly # on Windows. In order to avoid blocking other greenthreads, # we have to wrap this call using tpool. if eventlet_patched and os.name == 'nt': result = tpool.execute(obj.communicate, process_input) else: result = obj.communicate(process_input) obj.stdin.close() # pylint: disable=E1101 _returncode = obj.returncode # pylint: disable=E1101 LOG.log(loglevel, 'CMD "%s" returned: %s in %0.3fs', sanitized_cmd, _returncode, watch.elapsed()) finally: if on_completion: on_completion(obj) if not ignore_exit_code and _returncode not in check_exit_code: (stdout, stderr) = result if six.PY3: stdout = os.fsdecode(stdout) stderr = os.fsdecode(stderr) sanitized_stdout = strutils.mask_password(stdout) sanitized_stderr = strutils.mask_password(stderr) raise ProcessExecutionError(exit_code=_returncode, stdout=sanitized_stdout, stderr=sanitized_stderr, cmd=sanitized_cmd) if six.PY3 and not binary and result is not None: (stdout, stderr) = result # Decode from the locale using using the surrogateescape error # handler (decoding cannot fail) stdout = os.fsdecode(stdout) stderr = os.fsdecode(stderr) return (stdout, stderr) else: return result except (ProcessExecutionError, OSError) as err: # if we want to always log the 
errors or if this is # the final attempt that failed and we want to log that. if log_errors == LOG_ALL_ERRORS or ( log_errors == LOG_FINAL_ERROR and not attempts): if isinstance(err, ProcessExecutionError): format = _('%(desc)r\ncommand: %(cmd)r\n' 'exit code: %(code)r\nstdout: %(stdout)r\n' 'stderr: %(stderr)r') LOG.log(loglevel, format, {"desc": err.description, "cmd": err.cmd, "code": err.exit_code, "stdout": err.stdout, "stderr": err.stderr}) else: format = _('Got an OSError\ncommand: %(cmd)r\n' 'errno: %(errno)r') LOG.log(loglevel, format, {"cmd": sanitized_cmd, "errno": err.errno}) if not attempts: LOG.log(loglevel, _('%r failed. Not Retrying.'), sanitized_cmd) raise else: LOG.log(loglevel, _('%r failed. Retrying.'), sanitized_cmd) if delay_on_retry: time.sleep(random.randint(20, 200) / 100.0) finally: # NOTE(termie): this appears to be necessary to let the subprocess # call clean something up in between calls, without # it two execute calls in a row hangs the second one # NOTE(bnemec): termie's comment above is probably specific to the # eventlet subprocess module, but since we still # have to support that we're leaving the sleep. It # won't hurt anything in the stdlib case anyway. time.sleep(0) def trycmd(*args, **kwargs): """A wrapper around execute() to more easily handle warnings and errors. Returns an (out, err) tuple of strings containing the output of the command's stdout and stderr. If 'err' is not empty then the command can be considered to have failed. :param discard_warnings: True | False. Defaults to False. 
                             If set to True, then for succeeding commands,
                             stderr is cleared
    :type discard_warnings: boolean
    :returns: (out, err) from process execution
    """
    discard_warnings = kwargs.pop('discard_warnings', False)

    try:
        out, err = execute(*args, **kwargs)
        failed = False
    except ProcessExecutionError as exn:
        out, err = '', six.text_type(exn)
        failed = True

    if not failed and discard_warnings and err:
        # Handle commands that output to stderr but otherwise succeed
        err = ''

    return out, err


def ssh_execute(ssh, cmd, process_input=None,
                addl_env=None, check_exit_code=True,
                binary=False, timeout=None,
                sanitize_stdout=True):
    """Run a command through SSH.

    :param ssh: An SSH Connection object.
    :param cmd: The command string to run.
    :param check_exit_code: If an exception should be raised for non-zero
                            exit.
    :param timeout: Max time in secs to wait for command execution.
    :param sanitize_stdout: Defaults to True. If set to True, stdout is
                            sanitized i.e. any sensitive information like
                            password in command output will be masked.
    :returns: (stdout, stderr) from command execution through SSH.

    .. versionchanged:: 1.9
       Added *binary* optional parameter.
    """
    sanitized_cmd = strutils.mask_password(cmd)
    LOG.debug('Running cmd (SSH): %s', sanitized_cmd)
    if addl_env:
        raise InvalidArgumentError(_('Environment not supported over SSH'))

    if process_input:
        # This is (probably) fixable if we need it...
        raise InvalidArgumentError(_('process_input not supported over SSH'))

    stdin_stream, stdout_stream, stderr_stream = ssh.exec_command(
        cmd, timeout=timeout)
    channel = stdout_stream.channel

    # NOTE(justinsb): This seems suspicious...
    # ...other SSH clients have buffering issues with this approach
    stdout = stdout_stream.read()
    stderr = stderr_stream.read()

    stdin_stream.close()

    exit_status = channel.recv_exit_status()

    if six.PY3:
        # Decode from the locale using the surrogateescape error handler
        # (decoding cannot fail).
Decode even if binary is True because # mask_password() requires Unicode on Python 3 stdout = os.fsdecode(stdout) stderr = os.fsdecode(stderr) if sanitize_stdout: stdout = strutils.mask_password(stdout) stderr = strutils.mask_password(stderr) # exit_status == -1 if no exit code was returned if exit_status != -1: LOG.debug('Result was %s' % exit_status) if check_exit_code and exit_status != 0: # In case of errors in command run, due to poor implementation of # command executable program, there might be chance that it leaks # sensitive information like password to stdout. In such cases # stdout needs to be sanitized even though sanitize_stdout=False. stdout = strutils.mask_password(stdout) raise ProcessExecutionError(exit_code=exit_status, stdout=stdout, stderr=stderr, cmd=sanitized_cmd) if binary: if six.PY2: # On Python 2, stdout is a bytes string if mask_password() failed # to decode it, or an Unicode string otherwise. Encode to the # default encoding (ASCII) because mask_password() decodes from # the same encoding. if isinstance(stdout, six.text_type): stdout = stdout.encode() if isinstance(stderr, six.text_type): stderr = stderr.encode() else: # fsencode() is the reverse operation of fsdecode() stdout = os.fsencode(stdout) stderr = os.fsencode(stderr) return (stdout, stderr) def get_worker_count(): """Utility to get the default worker count. :returns: The number of CPUs if that can be determined, else a default worker count of 1 is returned. """ try: return multiprocessing.cpu_count() except NotImplementedError: return 1 oslo.concurrency-4.0.2/oslo_concurrency/prlimit.py0000664000175000017500000000736413643050652022502 0ustar zuulzuul00000000000000# Copyright 2016 Red Hat. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from __future__ import print_function import argparse import os import resource import sys USAGE_PROGRAM = ('%s -m oslo_concurrency.prlimit' % os.path.basename(sys.executable)) RESOURCES = ( # argparse argument => resource ('as', resource.RLIMIT_AS), ('core', resource.RLIMIT_CORE), ('cpu', resource.RLIMIT_CPU), ('data', resource.RLIMIT_DATA), ('fsize', resource.RLIMIT_FSIZE), ('memlock', resource.RLIMIT_MEMLOCK), ('nofile', resource.RLIMIT_NOFILE), ('nproc', resource.RLIMIT_NPROC), ('rss', resource.RLIMIT_RSS), ('stack', resource.RLIMIT_STACK), ) def parse_args(): parser = argparse.ArgumentParser(description='prlimit', prog=USAGE_PROGRAM) parser.add_argument('--as', type=int, help='Address space limit in bytes') parser.add_argument('--core', type=int, help='Core file size limit in bytes') parser.add_argument('--cpu', type=int, help='CPU time limit in seconds') parser.add_argument('--data', type=int, help='Data size limit in bytes') parser.add_argument('--fsize', type=int, help='File size limit in bytes') parser.add_argument('--memlock', type=int, help='Locked memory limit in bytes') parser.add_argument('--nofile', type=int, help='Maximum number of open files') parser.add_argument('--nproc', type=int, help='Maximum number of processes') parser.add_argument('--rss', type=int, help='Maximum Resident Set Size (RSS) in bytes') parser.add_argument('--stack', type=int, help='Stack size limit in bytes') parser.add_argument('program', help='Program (absolute path)') parser.add_argument('program_args', metavar="arg", nargs='...', help='Program parameters') args = 
parser.parse_args() return args def main(): args = parse_args() program = args.program if not os.path.isabs(program): # program uses a relative path: try to find the absolute path # to the executable if sys.version_info >= (3, 3): import shutil program_abs = shutil.which(program) else: import distutils.spawn program_abs = distutils.spawn.find_executable(program) if program_abs: program = program_abs for arg_name, rlimit in RESOURCES: value = getattr(args, arg_name) if value is None: continue try: resource.setrlimit(rlimit, (value, value)) except ValueError as exc: print("%s: failed to set the %s resource limit: %s" % (USAGE_PROGRAM, arg_name.upper(), exc), file=sys.stderr) sys.exit(1) try: os.execv(program, [program] + args.program_args) except Exception as exc: print("%s: failed to execute %s: %s" % (USAGE_PROGRAM, program, exc), file=sys.stderr) sys.exit(1) if __name__ == "__main__": main() oslo.concurrency-4.0.2/oslo_concurrency/version.py0000664000175000017500000000127013643050652022475 0ustar zuulzuul00000000000000# Copyright 2016 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
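When ``execute()`` in processutils above is given a ``prlimit`` argument, it re-runs the child through this module so that ``setrlimit()`` is applied in the wrapper before the real program is exec()'d. A sketch of the command line it builds (the flags and command here are illustrative; no process is spawned):

```python
import sys

# Flags as produced by ProcessLimits.prlimit_args() for cpu_time=2,
# number_files=128 (values illustrative).
prlimit_flags = ['--cpu=2', '--nofile=128']
cmd = ['ls', '-l']

# Same shape execute() builds: python -m oslo_concurrency.prlimit
# <flags> -- <original command>.
wrapped = ([sys.executable, '-m', 'oslo_concurrency.prlimit']
           + prlimit_flags + ['--'] + cmd)
```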
import pbr.version version_info = pbr.version.VersionInfo('oslo.concurrency') oslo.concurrency-4.0.2/oslo_concurrency/locale/0000775000175000017500000000000013643050745021700 5ustar zuulzuul00000000000000oslo.concurrency-4.0.2/oslo_concurrency/locale/en_GB/0000775000175000017500000000000013643050745022652 5ustar zuulzuul00000000000000oslo.concurrency-4.0.2/oslo_concurrency/locale/en_GB/LC_MESSAGES/0000775000175000017500000000000013643050745024437 5ustar zuulzuul00000000000000oslo.concurrency-4.0.2/oslo_concurrency/locale/en_GB/LC_MESSAGES/oslo_concurrency.po0000664000175000017500000000543413643050652030370 0ustar zuulzuul00000000000000# Translations template for oslo.concurrency. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the oslo.concurrency # project. # # Translators: # Andi Chandler , 2014-2015 # Andreas Jaeger , 2016. #zanata # Andi Chandler , 2017. #zanata msgid "" msgstr "" "Project-Id-Version: oslo.concurrency VERSION\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2018-02-23 11:40+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2017-07-11 05:04+0000\n" "Last-Translator: Andi Chandler \n" "Language: en_GB\n" "Plural-Forms: nplurals=2; plural=(n != 1);\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 4.3.3\n" "Language-Team: English (United Kingdom)\n" #, python-format msgid "" "%(desc)r\n" "command: %(cmd)r\n" "exit code: %(code)r\n" "stdout: %(stdout)r\n" "stderr: %(stderr)r" msgstr "" "%(desc)r\n" "command: %(cmd)r\n" "exit code: %(code)r\n" "stdout: %(stdout)r\n" "stderr: %(stderr)r" #, python-format msgid "" "%(description)s\n" "Command: %(cmd)s\n" "Exit code: %(exit_code)s\n" "Stdout: %(stdout)r\n" "Stderr: %(stderr)r" msgstr "" "%(description)s\n" "Command: %(cmd)s\n" "Exit code: %(exit_code)s\n" "Stdout: %(stdout)r\n" "Stderr: %(stderr)r" #, python-format msgid "%r failed. 
Not Retrying." msgstr "%r failed. Not Retrying." #, python-format msgid "%r failed. Retrying." msgstr "%r failed. Retrying." msgid "" "Calling lockutils directly is no longer supported. Please use the lockutils-" "wrapper console script instead." msgstr "" "Calling lockutils directly is no longer supported. Please use the lockutils-" "wrapper console script instead." msgid "Command requested root, but did not specify a root helper." msgstr "Command requested root, but did not specify a root helper." msgid "Environment not supported over SSH" msgstr "Environment not supported over SSH" #, python-format msgid "" "Got an OSError\n" "command: %(cmd)r\n" "errno: %(errno)r" msgstr "" "Got an OSError\n" "command: %(cmd)r\n" "errno: %(errno)r" #, python-format msgid "Got invalid arg log_errors: %r" msgstr "Got invalid arg log_errors: %r" #, python-format msgid "Got unknown keyword args: %r" msgstr "Got unknown keyword args: %r" msgid "" "Process resource limits are ignored as this feature is not supported on " "Windows." msgstr "" "Process resource limits are ignored as this feature is not supported on " "Windows." #, python-format msgid "Running cmd (subprocess): %s" msgstr "Running cmd (subprocess): %s" msgid "Unexpected error while running command." msgstr "Unexpected error while running command." msgid "process_input not supported over SSH" msgstr "process_input not supported over SSH" oslo.concurrency-4.0.2/oslo_concurrency/locale/fr/0000775000175000017500000000000013643050745022307 5ustar zuulzuul00000000000000oslo.concurrency-4.0.2/oslo_concurrency/locale/fr/LC_MESSAGES/0000775000175000017500000000000013643050745024074 5ustar zuulzuul00000000000000oslo.concurrency-4.0.2/oslo_concurrency/locale/fr/LC_MESSAGES/oslo_concurrency.po0000664000175000017500000000523613643050652030025 0ustar zuulzuul00000000000000# Translations template for oslo.concurrency. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the oslo.concurrency # project. 
#
# Translators:
# Maxime COQUEREL , 2015
# Andreas Jaeger , 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: oslo.concurrency 3.6.1.dev10\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-04-19 12:20+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2015-06-10 11:06+0000\n"
"Last-Translator: openstackjenkins \n"
"Language: fr\n"
"Plural-Forms: nplurals=2; plural=(n > 1);\n"
"Generated-By: Babel 2.0\n"
"X-Generator: Zanata 3.7.3\n"
"Language-Team: French\n"

#, python-format
msgid ""
"%(desc)r\n"
"command: %(cmd)r\n"
"exit code: %(code)r\n"
"stdout: %(stdout)r\n"
"stderr: %(stderr)r"
msgstr ""
"%(desc)r\n"
"commande: %(cmd)r\n"
"Code de sortie: %(code)r\n"
"stdout: %(stdout)r\n"
"stderr: %(stderr)r"

#, python-format
msgid ""
"%(description)s\n"
"Command: %(cmd)s\n"
"Exit code: %(exit_code)s\n"
"Stdout: %(stdout)r\n"
"Stderr: %(stderr)r"
msgstr ""
"%(description)s\n"
"Commande: %(cmd)s\n"
"Code de sortie: %(exit_code)s\n"
"Stdout: %(stdout)r\n"
"Stderr: %(stderr)r"

#, python-format
msgid "%r failed. Not Retrying."
msgstr "Echec de %r. Aucune nouvelle tentative."

#, python-format
msgid "%r failed. Retrying."
msgstr "Echec de %r. Nouvelle tentative."

msgid ""
"Calling lockutils directly is no longer supported. Please use the lockutils-"
"wrapper console script instead."
msgstr ""
"Appeler lockutils directement n'est plus pris en charge. Merci d'utiliser "
"le script de console lockutils-wrapper à la place."

msgid "Command requested root, but did not specify a root helper."
msgstr "La commande exigeait root, mais n'indiquait pas comment obtenir root."
msgid "Environment not supported over SSH"
msgstr "Environnement non pris en charge sur SSH"

#, python-format
msgid ""
"Got an OSError\n"
"command: %(cmd)r\n"
"errno: %(errno)r"
msgstr ""
"Erreur du Système\n"
"commande: %(cmd)r\n"
"errno: %(errno)r"

#, python-format
msgid "Got invalid arg log_errors: %r"
msgstr "Argument reçu non valide log_errors: %r"

#, python-format
msgid "Got unknown keyword args: %r"
msgstr "Arguments de mot clé inconnus : %r"

#, python-format
msgid "Running cmd (subprocess): %s"
msgstr "Exécution de la commande (sous-processus): %s"

msgid "Unexpected error while running command."
msgstr "Erreur inattendue lors de l'exécution de la commande."

msgid "process_input not supported over SSH"
msgstr "process_input non pris en charge sur SSH"
oslo.concurrency-4.0.2/oslo_concurrency/locale/de/LC_MESSAGES/oslo_concurrency.po
# Translations template for oslo.concurrency.
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the oslo.concurrency
# project.
#
# Translators:
# Christian Berendt , 2014
# Ettore Atalan , 2014
# Andreas Jaeger , 2016.
#zanata msgid "" msgstr "" "Project-Id-Version: oslo.concurrency 3.9.1.dev3\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2016-06-07 17:48+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2016-06-08 06:36+0000\n" "Last-Translator: Andreas Jaeger \n" "Language: de\n" "Plural-Forms: nplurals=2; plural=(n != 1);\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 3.7.3\n" "Language-Team: German\n" #, python-format msgid "" "%(desc)r\n" "command: %(cmd)r\n" "exit code: %(code)r\n" "stdout: %(stdout)r\n" "stderr: %(stderr)r" msgstr "" "%(desc)r\n" "Kommando: %(cmd)r\n" "Abschlusscode: %(code)r\n" "Stdout: %(stdout)r\n" "Stderr: %(stderr)r" #, python-format msgid "" "%(description)s\n" "Command: %(cmd)s\n" "Exit code: %(exit_code)s\n" "Stdout: %(stdout)r\n" "Stderr: %(stderr)r" msgstr "" "%(description)s\n" "Befehl: %(cmd)s.\n" "Beendigungscode: %(exit_code)s.\n" "Standardausgabe: %(stdout)r\n" "Standardfehler: %(stderr)r" #, python-format msgid "%r failed. Not Retrying." msgstr "%r fehlgeschlagen. Wird nicht wiederholt." #, python-format msgid "%r failed. Retrying." msgstr "%r fehlgeschlagen. Neuversuch." msgid "" "Calling lockutils directly is no longer supported. Please use the lockutils-" "wrapper console script instead." msgstr "" "Ein direkter Aufruf von lockutils wird nicht mehr unterstützt. Verwenden Sie " "stattdessen das lockutils-wrapper Konsolescript." msgid "Command requested root, but did not specify a root helper." msgstr "Kommando braucht root, es wurde aber kein root helper spezifiziert." 
msgid "Environment not supported over SSH"
msgstr "Umgebung wird nicht über SSH unterstützt"

#, python-format
msgid ""
"Got an OSError\n"
"command: %(cmd)r\n"
"errno: %(errno)r"
msgstr ""
"OS Fehler aufgetreten:\n"
"Kommando: %(cmd)r\n"
"Fehlernummer: %(errno)r"

#, python-format
msgid "Got invalid arg log_errors: %r"
msgstr "Ungültiges Argument für log_errors: %r"

#, python-format
msgid "Got unknown keyword args: %r"
msgstr "Unbekannte Schlüsselwortargumente: %r"

#, python-format
msgid "Running cmd (subprocess): %s"
msgstr "Führe Kommando (subprocess) aus: %s"

msgid "Unexpected error while running command."
msgstr "Unerwarteter Fehler bei der Ausführung des Kommandos."

msgid "process_input not supported over SSH"
msgstr "process_input wird nicht über SSH unterstützt"
oslo.concurrency-4.0.2/oslo_concurrency/locale/es/LC_MESSAGES/oslo_concurrency.po
# Translations template for oslo.concurrency.
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the oslo.concurrency
# project.
#
# Translators:
# Adriana Chisco Landazábal , 2015
# Andreas Jaeger , 2016.
#zanata msgid "" msgstr "" "Project-Id-Version: oslo.concurrency 3.6.1.dev10\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2016-04-19 12:20+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2015-06-22 09:35+0000\n" "Last-Translator: Adriana Chisco Landazábal \n" "Language: es\n" "Plural-Forms: nplurals=2; plural=(n != 1);\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 3.7.3\n" "Language-Team: Spanish\n" #, python-format msgid "" "%(desc)r\n" "command: %(cmd)r\n" "exit code: %(code)r\n" "stdout: %(stdout)r\n" "stderr: %(stderr)r" msgstr "" "%(desc)r\n" "comando: %(cmd)r\n" "código de salida: %(code)r\n" "stdout: %(stdout)r\n" "stderr: %(stderr)r" #, python-format msgid "" "%(description)s\n" "Command: %(cmd)s\n" "Exit code: %(exit_code)s\n" "Stdout: %(stdout)r\n" "Stderr: %(stderr)r" msgstr "" "%(description)s\n" "Comando: %(cmd)s\n" "Código de salida: %(exit_code)s\n" "Stdout: %(stdout)r\n" "Stderr: %(stderr)r" #, python-format msgid "%r failed. Not Retrying." msgstr "%r ha fallado. No se está intentando de nuevo." #, python-format msgid "%r failed. Retrying." msgstr "%r ha fallado. Intentando de nuevo." msgid "" "Calling lockutils directly is no longer supported. Please use the lockutils-" "wrapper console script instead." msgstr "" "Ya no se soporta llamar LockUtil. Por favor utilice a cambio la consola " "script lockutils-wrapper." msgid "Command requested root, but did not specify a root helper." msgstr "Comando ha solicitado root, pero no especificó un auxiliar root." 
msgid "Environment not supported over SSH" msgstr "Ambiente no soportado a través de SSH" #, python-format msgid "" "Got an OSError\n" "command: %(cmd)r\n" "errno: %(errno)r" msgstr "" "Se obtuvo error de Sistema Operativo\n" "comando: %(cmd)r\n" "errno: %(errno)r" #, python-format msgid "Got invalid arg log_errors: %r" msgstr "Se obtuvo un argumento no válido para log_errors: %r" #, python-format msgid "Got unknown keyword args: %r" msgstr "Se obtuvieron argumentos de palabra clave desconocidos: %r" #, python-format msgid "Running cmd (subprocess): %s" msgstr "Ejecutando cmd (subproceso): %s" msgid "Unexpected error while running command." msgstr "Error inesperado mientras se ejecutaba el comando." msgid "process_input not supported over SSH" msgstr "entrada de proceso no soportada a través de SSH" oslo.concurrency-4.0.2/oslo_concurrency/fixture/0000775000175000017500000000000013643050745022127 5ustar zuulzuul00000000000000oslo.concurrency-4.0.2/oslo_concurrency/fixture/lockutils.py0000664000175000017500000000554713643050652024512 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures from oslo_config import fixture as config from oslo_concurrency import lockutils class LockFixture(fixtures.Fixture): """External locking fixture.
This fixture is basically an alternative to the synchronized decorator with the external flag so that tearDowns and addCleanups will be included in the lock context for locking between tests. The fixture is recommended to be the first line in a test method, like so:: def test_method(self): self.useFixture(LockFixture('lock_name')) ... or the first line in setUp if all the test methods in the class are required to be serialized. Something like:: class TestCase(testtools.TestCase): def setUp(self): self.useFixture(LockFixture('lock_name')) super(TestCase, self).setUp() ... This is because addCleanups are put on a LIFO queue that gets run after the test method exits (either by completing or by raising an exception). """ def __init__(self, name, lock_file_prefix=None): self.mgr = lockutils.lock(name, lock_file_prefix, True) def setUp(self): super(LockFixture, self).setUp() self.addCleanup(self.mgr.__exit__, None, None, None) self.lock = self.mgr.__enter__() class ExternalLockFixture(fixtures.Fixture): """Configure lock_path so external locks can be used in unit tests. Creates a temporary directory to hold file locks and sets the oslo.config lock_path opt to use it. This can be used to enable external locking on a per-test basis, rather than globally with the OSLO_LOCK_PATH environment variable. Example:: def test_method(self): self.useFixture(ExternalLockFixture()) something_that_needs_external_locks() Alternatively, the useFixture call could be placed in a test class's setUp method to provide this functionality to all tests in the class. ..
versionadded:: 0.3 """ def setUp(self): super(ExternalLockFixture, self).setUp() temp_dir = self.useFixture(fixtures.TempDir()) conf = self.useFixture(config.Config(lockutils.CONF)).config conf(lock_path=temp_dir.path, group='oslo_concurrency') oslo.concurrency-4.0.2/oslo_concurrency/fixture/__init__.py0000664000175000017500000000000013643050652024223 0ustar zuulzuul00000000000000oslo.concurrency-4.0.2/oslo_concurrency/lockutils.py0000664000175000017500000003570713643050652023035 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib import functools import logging import os import shutil import subprocess import sys import tempfile import threading import weakref import fasteners from oslo_config import cfg from oslo_utils import reflection from oslo_utils import timeutils import six from oslo_concurrency._i18n import _ LOG = logging.getLogger(__name__) _opts = [ cfg.BoolOpt('disable_process_locking', default=False, help='Enables or disables inter-process locks.', deprecated_group='DEFAULT'), cfg.StrOpt('lock_path', default=os.environ.get("OSLO_LOCK_PATH"), help='Directory to use for lock files. For security, the ' 'specified directory should only be writable by the user ' 'running the processes that need locking. ' 'Defaults to environment variable OSLO_LOCK_PATH. 
' 'If external locks are used, a lock path must be set.', deprecated_group='DEFAULT') ] def _register_opts(conf): conf.register_opts(_opts, group='oslo_concurrency') CONF = cfg.CONF _register_opts(CONF) def set_defaults(lock_path): """Set value for lock_path. This can be used by tests to set lock_path to a temporary directory. """ cfg.set_defaults(_opts, lock_path=lock_path) def get_lock_path(conf): """Return the path used for external file-based locks. :param conf: Configuration object :type conf: oslo_config.cfg.ConfigOpts .. versionadded:: 1.8 """ _register_opts(conf) return conf.oslo_concurrency.lock_path InterProcessLock = fasteners.InterProcessLock ReaderWriterLock = fasteners.ReaderWriterLock """A reader/writer lock. .. versionadded:: 0.4 """ class FairLocks(object): """A garbage collected container of fair locks. With a fair lock, contending lockers will get the lock in the order in which they tried to acquire it. This collection internally uses a weak value dictionary so that when a lock is no longer in use (by any threads) it will automatically be removed from this container by the garbage collector. """ def __init__(self): self._locks = weakref.WeakValueDictionary() self._lock = threading.Lock() def get(self, name): """Gets (or creates) a lock with a given name. :param name: The lock name to get/create (used to associate previously created names with the same lock). Returns a newly constructed lock (or an existing one if it was already created for the given name). """ with self._lock: try: return self._locks[name] except KeyError: # The fasteners module specifies that # ReaderWriterLock.write_lock() will give FIFO behaviour, # so we don't need to do anything special ourselves. rwlock = ReaderWriterLock() self._locks[name] = rwlock return rwlock _fair_locks = FairLocks() def internal_fair_lock(name): return _fair_locks.get(name) class Semaphores(object): """A garbage collected container of semaphores.
This collection internally uses a weak value dictionary so that when a semaphore is no longer in use (by any threads) it will automatically be removed from this container by the garbage collector. .. versionadded:: 0.3 """ def __init__(self): self._semaphores = weakref.WeakValueDictionary() self._lock = threading.Lock() def get(self, name): """Gets (or creates) a semaphore with a given name. :param name: The semaphore name to get/create (used to associate previously created names with the same semaphore). Returns a newly constructed semaphore (or an existing one if it was already created for the given name). """ with self._lock: try: return self._semaphores[name] except KeyError: sem = threading.Semaphore() self._semaphores[name] = sem return sem def __len__(self): """Returns how many semaphores exist at the current time.""" return len(self._semaphores) _semaphores = Semaphores() def _get_lock_path(name, lock_file_prefix, lock_path=None): # NOTE(mikal): the lock name cannot contain directory # separators name = name.replace(os.sep, '_') if lock_file_prefix: sep = '' if lock_file_prefix.endswith('-') else '-' name = '%s%s%s' % (lock_file_prefix, sep, name) local_lock_path = lock_path or CONF.oslo_concurrency.lock_path if not local_lock_path: raise cfg.RequiredOptError('lock_path') return os.path.join(local_lock_path, name) def external_lock(name, lock_file_prefix=None, lock_path=None): lock_file_path = _get_lock_path(name, lock_file_prefix, lock_path) return InterProcessLock(lock_file_path) def remove_external_lock_file(name, lock_file_prefix=None, lock_path=None, semaphores=None): """Remove an external lock file when it is no longer in use. This is helpful when many lock files accumulate on disk. """ with internal_lock(name, semaphores=semaphores): lock_file_path = _get_lock_path(name, lock_file_prefix, lock_path) try: os.remove(lock_file_path) except OSError: LOG.info('Failed to remove file %(file)s', {'file': lock_file_path}) def internal_lock(name, semaphores=None):
if semaphores is None: semaphores = _semaphores return semaphores.get(name) @contextlib.contextmanager def lock(name, lock_file_prefix=None, external=False, lock_path=None, do_log=True, semaphores=None, delay=0.01, fair=False): """Context based lock This function yields a `threading.Semaphore` instance (if we don't use eventlet.monkey_patch(), else `semaphore.Semaphore`) unless external is True, in which case, it'll yield an InterProcessLock instance. :param lock_file_prefix: The lock_file_prefix argument is used to provide lock files on disk with a meaningful prefix. :param external: The external keyword argument denotes whether this lock should work across multiple processes. This means that if two different workers both run a method decorated with @synchronized('mylock', external=True), only one of them will execute at a time. :param lock_path: The path in which to store external lock files. For external locking to work properly, this must be the same for all references to the lock. :param do_log: Whether to log acquire/release messages. This is primarily intended to reduce log message duplication when `lock` is used from the `synchronized` decorator. :param semaphores: Container that provides semaphores to use when locking. This ensures that threads inside the same application cannot collide, due to the fact that external process locks are unaware of a process's active threads. :param delay: Delay between acquisition attempts (in seconds). :param fair: Whether or not we want a "fair" lock where contending lockers will get the lock in the order in which they tried to acquire it. .. versionchanged:: 0.2 Added *do_log* optional parameter. .. versionchanged:: 0.3 Added *delay* and *semaphores* optional parameters. """ if fair: if semaphores is not None: raise NotImplementedError(_('Specifying semaphores is not ' 'supported when using fair locks.')) # The fasteners module specifies that write_lock() provides fairness.
int_lock = internal_fair_lock(name).write_lock() else: int_lock = internal_lock(name, semaphores=semaphores) with int_lock: if do_log: LOG.debug('Acquired lock "%(lock)s"', {'lock': name}) try: if external and not CONF.oslo_concurrency.disable_process_locking: ext_lock = external_lock(name, lock_file_prefix, lock_path) ext_lock.acquire(delay=delay) if do_log: LOG.debug('Acquired external semaphore "%(lock)s"', {'lock': name}) try: yield ext_lock finally: ext_lock.release() else: yield int_lock finally: if do_log: LOG.debug('Releasing lock "%(lock)s"', {'lock': name}) def lock_with_prefix(lock_file_prefix): """Partial object generator for the lock context manager. Redefine lock in each project like so:: (in nova/utils.py) from oslo_concurrency import lockutils _prefix = 'nova' lock = lockutils.lock_with_prefix(_prefix) lock_cleanup = lockutils.remove_external_lock_file_with_prefix(_prefix) (in nova/foo.py) from nova import utils with utils.lock('mylock'): ... Eventually clean up with:: lock_cleanup('mylock') :param lock_file_prefix: A string used to provide lock files on disk with a meaningful prefix. Will be separated from the lock name with a hyphen, which may optionally be included in the lock_file_prefix (e.g. ``'nova'`` and ``'nova-'`` are equivalent). """ return functools.partial(lock, lock_file_prefix=lock_file_prefix) def synchronized(name, lock_file_prefix=None, external=False, lock_path=None, semaphores=None, delay=0.01, fair=False): """Synchronization decorator. Decorating a method like so:: @synchronized('mylock') def foo(self, *args): ... ensures that only one thread will execute the foo method at a time. Different methods can share the same lock:: @synchronized('mylock') def foo(self, *args): ... @synchronized('mylock') def bar(self, *args): ... This way only one of either foo or bar can be executing at a time. .. versionchanged:: 0.3 Added *delay* and *semaphores* optional parameters.
""" def wrap(f): @six.wraps(f) def inner(*args, **kwargs): t1 = timeutils.now() t2 = None try: with lock(name, lock_file_prefix, external, lock_path, do_log=False, semaphores=semaphores, delay=delay, fair=fair): t2 = timeutils.now() LOG.debug('Lock "%(name)s" acquired by "%(function)s" :: ' 'waited %(wait_secs)0.3fs', {'name': name, 'function': reflection.get_callable_name(f), 'wait_secs': (t2 - t1)}) return f(*args, **kwargs) finally: t3 = timeutils.now() if t2 is None: held_secs = "N/A" else: held_secs = "%0.3fs" % (t3 - t2) LOG.debug('Lock "%(name)s" released by "%(function)s" :: held ' '%(held_secs)s', {'name': name, 'function': reflection.get_callable_name(f), 'held_secs': held_secs}) return inner return wrap def synchronized_with_prefix(lock_file_prefix): """Partial object generator for the synchronization decorator. Redefine @synchronized in each project like so:: (in nova/utils.py) from oslo_concurrency import lockutils _prefix = 'nova' synchronized = lockutils.synchronized_with_prefix(_prefix) lock_cleanup = lockutils.remove_external_lock_file_with_prefix(_prefix) (in nova/foo.py) from nova import utils @utils.synchronized('mylock') def bar(self, *args): ... Eventually clean up with:: lock_cleanup('mylock') :param lock_file_prefix: A string used to provide lock files on disk with a meaningful prefix. Will be separated from the lock name with a hyphen, which may optionally be included in the lock_file_prefix (e.g. ``'nova'`` and ``'nova-'`` are equivalent). """ return functools.partial(synchronized, lock_file_prefix=lock_file_prefix) def remove_external_lock_file_with_prefix(lock_file_prefix): """Partial object generator for the remove lock file function. 
Redefine remove_external_lock_file_with_prefix in each project like so:: (in nova/utils.py) from oslo_concurrency import lockutils _prefix = 'nova' synchronized = lockutils.synchronized_with_prefix(_prefix) lock = lockutils.lock_with_prefix(_prefix) lock_cleanup = lockutils.remove_external_lock_file_with_prefix(_prefix) (in nova/foo.py) from nova import utils @utils.synchronized('mylock') def bar(self, *args): ... def baz(self, *args): ... with utils.lock('mylock'): ... ... The lock_file_prefix argument is used to provide lock files on disk with a meaningful prefix. """ return functools.partial(remove_external_lock_file, lock_file_prefix=lock_file_prefix) def _lock_wrapper(argv): """Create a dir for locks and pass it to command from arguments This is exposed as a console script entry point named lockutils-wrapper If you run this: lockutils-wrapper stestr run a temporary directory will be created for all your locks and passed to all your tests in an environment variable. The temporary dir will be deleted afterwards and the return value will be preserved. """ lock_dir = tempfile.mkdtemp() os.environ["OSLO_LOCK_PATH"] = lock_dir try: ret_val = subprocess.call(argv[1:]) finally: shutil.rmtree(lock_dir, ignore_errors=True) return ret_val def main(): sys.exit(_lock_wrapper(sys.argv)) if __name__ == '__main__': raise NotImplementedError(_('Calling lockutils directly is no longer ' 'supported. 
Please use the ' 'lockutils-wrapper console script instead.')) oslo.concurrency-4.0.2/oslo_concurrency/tests/0000775000175000017500000000000013643050745021603 5ustar zuulzuul00000000000000oslo.concurrency-4.0.2/oslo_concurrency/tests/unit/0000775000175000017500000000000013643050745022562 5ustar zuulzuul00000000000000oslo.concurrency-4.0.2/oslo_concurrency/tests/unit/test_lockutils.py0000664000175000017500000004702113643050652026205 0ustar zuulzuul00000000000000# Copyright 2011 Justin Santa Barbara # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
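The heart of the internal-locking path in lockutils.py above is the Semaphores container: a weak-value dictionary that hands out one semaphore per lock name and lets the garbage collector reclaim entries nobody holds. The following is a condensed, stdlib-only restatement of that class for illustration (a sketch, not the packaged code; reclamation timing is CPython-specific):

```python
import gc
import threading
import weakref


class Semaphores:
    """Condensed sketch of lockutils.Semaphores: a garbage-collected
    name-to-semaphore container."""

    def __init__(self):
        # Weak values: an entry disappears once no thread references
        # its semaphore any more.
        self._semaphores = weakref.WeakValueDictionary()
        self._lock = threading.Lock()

    def get(self, name):
        # Serialize creation so two threads asking for the same name
        # always receive the same semaphore object.
        with self._lock:
            try:
                return self._semaphores[name]
            except KeyError:
                sem = threading.Semaphore()
                self._semaphores[name] = sem
                return sem

    def __len__(self):
        return len(self._semaphores)


sems = Semaphores()
a = sems.get('mylock')
b = sems.get('mylock')
print(a is b)     # same name -> same semaphore: True
print(len(sems))  # 1
del a, b
gc.collect()
print(len(sems))  # entry reclaimed once unreferenced (on CPython): 0
```

This is exactly the property test_lock_internally checks below with its "Semaphore leak detected" assertion: after all threads finish, the container is back to its original size.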
import collections import multiprocessing import os import signal import subprocess import sys import tempfile import threading import time from unittest import mock from oslo_config import cfg from oslotest import base as test_base import six from oslo_concurrency.fixture import lockutils as fixtures from oslo_concurrency import lockutils from oslo_config import fixture as config if sys.platform == 'win32': import msvcrt else: import fcntl def lock_file(handle): if sys.platform == 'win32': msvcrt.locking(handle.fileno(), msvcrt.LK_NBLCK, 1) else: fcntl.flock(handle, fcntl.LOCK_EX | fcntl.LOCK_NB) def unlock_file(handle): if sys.platform == 'win32': msvcrt.locking(handle.fileno(), msvcrt.LK_UNLCK, 1) else: fcntl.flock(handle, fcntl.LOCK_UN) def lock_files(handles_dir, out_queue): with lockutils.lock('external', 'test-', external=True): # Open some files we can use for locking handles = [] for n in range(50): path = os.path.join(handles_dir, ('file-%s' % n)) handles.append(open(path, 'w')) # Loop over all the handles and try locking the file # without blocking, keep a count of how many files we # were able to lock and then unlock. 
If the lock fails # we get an IOError and bail out with bad exit code count = 0 for handle in handles: try: lock_file(handle) count += 1 unlock_file(handle) except IOError: os._exit(2) finally: handle.close() return out_queue.put(count) class LockTestCase(test_base.BaseTestCase): def setUp(self): super(LockTestCase, self).setUp() self.config = self.useFixture(config.Config(lockutils.CONF)).config def test_synchronized_wrapped_function_metadata(self): @lockutils.synchronized('whatever', 'test-') def foo(): """Bar.""" pass self.assertEqual('Bar.', foo.__doc__, "Wrapped function's docstring " "got lost") self.assertEqual('foo', foo.__name__, "Wrapped function's name " "got mangled") def test_lock_internally_different_collections(self): s1 = lockutils.Semaphores() s2 = lockutils.Semaphores() trigger = threading.Event() who_ran = collections.deque() def f(name, semaphores, pull_trigger): with lockutils.internal_lock('testing', semaphores=semaphores): if pull_trigger: trigger.set() else: trigger.wait() who_ran.append(name) threads = [ threading.Thread(target=f, args=(1, s1, True)), threading.Thread(target=f, args=(2, s2, False)), ] for thread in threads: thread.start() for thread in threads: thread.join() self.assertEqual([1, 2], sorted(who_ran)) def test_lock_internally(self): """We can lock across multiple threads.""" saved_sem_num = len(lockutils._semaphores) seen_threads = list() def f(_id): with lockutils.lock('testlock2', 'test-', external=False): for x in range(10): seen_threads.append(_id) threads = [] for i in range(10): thread = threading.Thread(target=f, args=(i,)) threads.append(thread) thread.start() for thread in threads: thread.join() self.assertEqual(100, len(seen_threads)) # Looking at the seen threads, split it into chunks of 10, and verify # that the last 9 match the first in each chunk. 
for i in range(10): for j in range(9): self.assertEqual(seen_threads[i * 10], seen_threads[i * 10 + 1 + j]) self.assertEqual(saved_sem_num, len(lockutils._semaphores), "Semaphore leak detected") def test_lock_internal_fair(self): """Check that we're actually fair.""" def f(_id): with lockutils.lock('testlock', 'test-', external=False, fair=True): lock_holder.append(_id) lock_holder = [] threads = [] # While holding the fair lock, spawn a bunch of threads that all try # to acquire the lock. They will all block. Then release the lock # and see what happens. with lockutils.lock('testlock', 'test-', external=False, fair=True): for i in range(10): thread = threading.Thread(target=f, args=(i,)) threads.append(thread) thread.start() # Allow some time for the new thread to get queued onto the # list of pending writers before continuing. This is gross # but there's no way around it without using knowledge of # fasteners internals. time.sleep(0.5) # Wait for all threads. for thread in threads: thread.join() self.assertEqual(10, len(lock_holder)) # Check that the threads each got the lock in fair order. 
for i in range(10): self.assertEqual(i, lock_holder[i]) def test_fair_lock_with_semaphore(self): def do_test(): s = lockutils.Semaphores() with lockutils.lock('testlock', 'test-', semaphores=s, fair=True): pass self.assertRaises(NotImplementedError, do_test) def test_nested_synchronized_external_works(self): """We can nest external syncs.""" self.config(lock_path=tempfile.mkdtemp(), group='oslo_concurrency') sentinel = object() @lockutils.synchronized('testlock1', 'test-', external=True) def outer_lock(): @lockutils.synchronized('testlock2', 'test-', external=True) def inner_lock(): return sentinel return inner_lock() self.assertEqual(sentinel, outer_lock()) def _do_test_lock_externally(self): """We can lock across multiple processes.""" children = [] for n in range(50): queue = multiprocessing.Queue() proc = multiprocessing.Process( target=lock_files, args=(tempfile.mkdtemp(), queue)) proc.start() children.append((proc, queue)) for child, queue in children: child.join() count = queue.get(block=False) self.assertEqual(50, count) def test_lock_externally(self): self.config(lock_path=tempfile.mkdtemp(), group='oslo_concurrency') self._do_test_lock_externally() def test_lock_externally_lock_dir_not_exist(self): lock_dir = tempfile.mkdtemp() os.rmdir(lock_dir) self.config(lock_path=lock_dir, group='oslo_concurrency') self._do_test_lock_externally() def test_lock_with_prefix(self): # TODO(efried): Embetter this test self.config(lock_path=tempfile.mkdtemp(), group='oslo_concurrency') foo = lockutils.lock_with_prefix('mypfix-') with foo('mylock', external=True): # We can't check much pass def test_synchronized_with_prefix(self): lock_name = 'mylock' lock_pfix = 'mypfix-' foo = lockutils.synchronized_with_prefix(lock_pfix) @foo(lock_name, external=True) def bar(dirpath, pfix, name): return True lock_dir = tempfile.mkdtemp() self.config(lock_path=lock_dir, group='oslo_concurrency') self.assertTrue(bar(lock_dir, lock_pfix, lock_name)) def 
test_synchronized_without_prefix(self): self.config(lock_path=tempfile.mkdtemp(), group='oslo_concurrency') @lockutils.synchronized('lock', external=True) def test_without_prefix(): # We can't check much pass test_without_prefix() def test_synchronized_prefix_without_hyphen(self): self.config(lock_path=tempfile.mkdtemp(), group='oslo_concurrency') @lockutils.synchronized('lock', 'hyphen', True) def test_without_hyphen(): # We can't check much pass test_without_hyphen() def test_contextlock(self): self.config(lock_path=tempfile.mkdtemp(), group='oslo_concurrency') # Note(flaper87): Lock is not external, which means # a semaphore will be yielded with lockutils.lock("test") as sem: if six.PY2: self.assertIsInstance(sem, threading._Semaphore) else: self.assertIsInstance(sem, threading.Semaphore) # NOTE(flaper87): Lock is external so an InterProcessLock # will be yielded. with lockutils.lock("test2", external=True) as lock: self.assertTrue(lock.exists()) with lockutils.lock("test1", external=True) as lock1: self.assertIsInstance(lock1, lockutils.InterProcessLock) def test_contextlock_unlocks(self): self.config(lock_path=tempfile.mkdtemp(), group='oslo_concurrency') with lockutils.lock("test") as sem: if six.PY2: self.assertIsInstance(sem, threading._Semaphore) else: self.assertIsInstance(sem, threading.Semaphore) with lockutils.lock("test2", external=True) as lock: self.assertTrue(lock.exists()) # NOTE(flaper87): Lock should be free with lockutils.lock("test2", external=True) as lock: self.assertTrue(lock.exists()) # NOTE(flaper87): Lock should be free # but semaphore should already exist.
with lockutils.lock("test") as sem2: self.assertEqual(sem, sem2) @mock.patch('logging.Logger.info') @mock.patch('os.remove') @mock.patch('oslo_concurrency.lockutils._get_lock_path') def test_remove_lock_external_file_exists(self, path_mock, remove_mock, log_mock): lockutils.remove_external_lock_file(mock.sentinel.name, mock.sentinel.prefix, mock.sentinel.lock_path) path_mock.assert_called_once_with(mock.sentinel.name, mock.sentinel.prefix, mock.sentinel.lock_path) remove_mock.assert_called_once_with(path_mock.return_value) log_mock.assert_not_called() @mock.patch('logging.Logger.info') @mock.patch('os.remove', side_effect=OSError) @mock.patch('oslo_concurrency.lockutils._get_lock_path') def test_remove_lock_external_file_doesnt_exists(self, path_mock, remove_mock, log_mock): lockutils.remove_external_lock_file(mock.sentinel.name, mock.sentinel.prefix, mock.sentinel.lock_path) path_mock.assert_called_once_with(mock.sentinel.name, mock.sentinel.prefix, mock.sentinel.lock_path) remove_mock.assert_called_once_with(path_mock.return_value) log_mock.assert_called() def test_no_slash_in_b64(self): # base64(sha1(foobar)) has a slash in it with lockutils.lock("foobar"): pass def test_deprecated_names(self): paths = self.create_tempfiles([['fake.conf', '\n'.join([ '[DEFAULT]', 'lock_path=foo', 'disable_process_locking=True']) ]]) conf = cfg.ConfigOpts() conf(['--config-file', paths[0]]) conf.register_opts(lockutils._opts, 'oslo_concurrency') self.assertEqual('foo', conf.oslo_concurrency.lock_path) self.assertTrue(conf.oslo_concurrency.disable_process_locking) class FileBasedLockingTestCase(test_base.BaseTestCase): def setUp(self): super(FileBasedLockingTestCase, self).setUp() self.lock_dir = tempfile.mkdtemp() def test_lock_file_exists(self): lock_file = os.path.join(self.lock_dir, 'lock-file') @lockutils.synchronized('lock-file', external=True, lock_path=self.lock_dir) def foo(): self.assertTrue(os.path.exists(lock_file)) foo() def test_interprocess_lock(self): lock_file = 
os.path.join(self.lock_dir, 'processlock') pid = os.fork() if pid: # Make sure the child grabs the lock first start = time.time() while not os.path.exists(lock_file): if time.time() - start > 5: self.fail('Timed out waiting for child to grab lock') time.sleep(0) lock1 = lockutils.InterProcessLock('foo') lock1.lockfile = open(lock_file, 'w') # NOTE(bnemec): There is a brief window between when the lock file # is created and when it actually becomes locked. If we happen to # context switch in that window we may succeed in locking the # file. Keep retrying until we either get the expected exception # or timeout waiting. while time.time() - start < 5: try: lock1.trylock() lock1.unlock() time.sleep(0) except IOError: # This is what we expect to happen break else: self.fail('Never caught expected lock exception') # We don't need to wait for the full sleep in the child here os.kill(pid, signal.SIGKILL) else: try: lock2 = lockutils.InterProcessLock('foo') lock2.lockfile = open(lock_file, 'w') have_lock = False while not have_lock: try: lock2.trylock() have_lock = True except IOError: pass finally: # NOTE(bnemec): This is racy, but I don't want to add any # synchronization primitives that might mask a problem # with the one we're trying to test here. time.sleep(.5) os._exit(0) def test_interthread_external_lock(self): call_list = [] @lockutils.synchronized('foo', external=True, lock_path=self.lock_dir) def foo(param): """Simulate a long-running threaded operation.""" call_list.append(param) # NOTE(bnemec): This is racy, but I don't want to add any # synchronization primitives that might mask a problem # with the one we're trying to test here. 
time.sleep(.5) call_list.append(param) def other(param): foo(param) thread = threading.Thread(target=other, args=('other',)) thread.start() # Make sure the other thread grabs the lock # NOTE(bnemec): File locks do not actually work between threads, so # this test is verifying that the local semaphore is still enforcing # external locks in that case. This means this test does not have # the same race problem as the process test above because when the # file is created the semaphore has already been grabbed. start = time.time() while not os.path.exists(os.path.join(self.lock_dir, 'foo')): if time.time() - start > 5: self.fail('Timed out waiting for thread to grab lock') time.sleep(0) thread1 = threading.Thread(target=other, args=('main',)) thread1.start() thread1.join() thread.join() self.assertEqual(['other', 'other', 'main', 'main'], call_list) def test_non_destructive(self): lock_file = os.path.join(self.lock_dir, 'not-destroyed') with open(lock_file, 'w') as f: f.write('test') with lockutils.lock('not-destroyed', external=True, lock_path=self.lock_dir): with open(lock_file) as f: self.assertEqual('test', f.read()) class LockutilsModuleTestCase(test_base.BaseTestCase): def setUp(self): super(LockutilsModuleTestCase, self).setUp() self.old_env = os.environ.get('OSLO_LOCK_PATH') if self.old_env is not None: del os.environ['OSLO_LOCK_PATH'] def tearDown(self): if self.old_env is not None: os.environ['OSLO_LOCK_PATH'] = self.old_env super(LockutilsModuleTestCase, self).tearDown() def test_main(self): script = '\n'.join([ 'import os', 'lock_path = os.environ.get("OSLO_LOCK_PATH")', 'assert lock_path is not None', 'assert os.path.isdir(lock_path)', ]) argv = ['', sys.executable, '-c', script] retval = lockutils._lock_wrapper(argv) self.assertEqual(0, retval, "Bad OSLO_LOCK_PATH has been set") def test_return_value_maintained(self): script = '\n'.join([ 'import sys', 'sys.exit(1)', ]) argv = ['', sys.executable, '-c', script] retval = lockutils._lock_wrapper(argv) 
self.assertEqual(1, retval) def test_direct_call_explodes(self): cmd = [sys.executable, '-m', 'oslo_concurrency.lockutils'] with open(os.devnull, 'w') as devnull: retval = subprocess.call(cmd, stderr=devnull) self.assertEqual(1, retval) class TestLockFixture(test_base.BaseTestCase): def setUp(self): super(TestLockFixture, self).setUp() self.config = self.useFixture(config.Config(lockutils.CONF)).config self.tempdir = tempfile.mkdtemp() def _check_in_lock(self): self.assertTrue(self.lock.exists()) def tearDown(self): self._check_in_lock() super(TestLockFixture, self).tearDown() def test_lock_fixture(self): # Setup lock fixture to test that teardown is inside the lock self.config(lock_path=self.tempdir, group='oslo_concurrency') fixture = fixtures.LockFixture('test-lock') self.useFixture(fixture) self.lock = fixture.lock class TestGetLockPath(test_base.BaseTestCase): def setUp(self): super(TestGetLockPath, self).setUp() self.conf = self.useFixture(config.Config(lockutils.CONF)).conf def test_get_default(self): lockutils.set_defaults(lock_path='/the/path') self.assertEqual('/the/path', lockutils.get_lock_path(self.conf)) def test_get_override(self): lockutils._register_opts(self.conf) self.conf.set_override('lock_path', '/alternate/path', group='oslo_concurrency') self.assertEqual('/alternate/path', lockutils.get_lock_path(self.conf)) oslo.concurrency-4.0.2/oslo_concurrency/tests/unit/test_lockutils_eventlet.py0000664000175000017500000000316213643050652030111 0ustar zuulzuul00000000000000# Copyright 2011 Justin Santa Barbara # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the
# License for the specific language governing permissions and limitations
# under the License.

import os
import tempfile

import eventlet
from eventlet import greenpool
from oslotest import base as test_base

from oslo_concurrency import lockutils


class TestFileLocks(test_base.BaseTestCase):

    def test_concurrent_green_lock_succeeds(self):
        """Verify spawn_n greenthreads with two locks run concurrently."""
        tmpdir = tempfile.mkdtemp()
        self.completed = False

        def locka(wait):
            a = lockutils.InterProcessLock(os.path.join(tmpdir, 'a'))
            with a:
                wait.wait()
            self.completed = True

        def lockb(wait):
            b = lockutils.InterProcessLock(os.path.join(tmpdir, 'b'))
            with b:
                wait.wait()

        wait1 = eventlet.event.Event()
        wait2 = eventlet.event.Event()
        pool = greenpool.GreenPool()
        pool.spawn_n(locka, wait1)
        pool.spawn_n(lockb, wait2)
        wait2.send()
        eventlet.sleep(0)
        wait1.send()
        pool.waitall()

        self.assertTrue(self.completed)
oslo.concurrency-4.0.2/oslo_concurrency/tests/unit/__init__.py0000664000175000017500000000000013643050652024656 0ustar zuulzuul00000000000000oslo.concurrency-4.0.2/oslo_concurrency/tests/unit/test_processutils.py0000664000175000017500000011537113643050652026737 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function

import errno
import logging
import multiprocessing
import os
import pickle
import resource
import socket
import stat
import subprocess
import sys
import tempfile
from unittest import mock

import fixtures
from oslotest import base as test_base
import six

from oslo_concurrency import processutils


PROCESS_EXECUTION_ERROR_LOGGING_TEST = """#!/bin/bash
exit 41"""

TEST_EXCEPTION_AND_MASKING_SCRIPT = """#!/bin/bash
# This is to test stdout and stderr
# and the command returned in an exception
# when a non-zero exit code is returned
echo onstdout --password='"secret"'
echo onstderr --password='"secret"' 1>&2
exit 38"""

# This byte sequence is undecodable in most encodings
UNDECODABLE_BYTES = b'[a\x80\xe9\xff]'

TRUE_UTILITY = (sys.platform.startswith('darwin') and
                '/usr/bin/true' or '/bin/true')


class UtilsTest(test_base.BaseTestCase):
    # NOTE(jkoelker) Moar tests from nova need to be ported. But they
    #                need to be mock'd out. Currently they require actually
    #                running code.
def test_execute_unknown_kwargs(self): self.assertRaises(processutils.UnknownArgumentError, processutils.execute, hozer=True) @mock.patch.object(multiprocessing, 'cpu_count', return_value=8) def test_get_worker_count(self, mock_cpu_count): self.assertEqual(8, processutils.get_worker_count()) @mock.patch.object(multiprocessing, 'cpu_count', side_effect=NotImplementedError()) def test_get_worker_count_cpu_count_not_implemented(self, mock_cpu_count): self.assertEqual(1, processutils.get_worker_count()) def test_execute_with_callback(self): on_execute_callback = mock.Mock() on_completion_callback = mock.Mock() processutils.execute(TRUE_UTILITY) self.assertEqual(0, on_execute_callback.call_count) self.assertEqual(0, on_completion_callback.call_count) processutils.execute(TRUE_UTILITY, on_execute=on_execute_callback, on_completion=on_completion_callback) self.assertEqual(1, on_execute_callback.call_count) self.assertEqual(1, on_completion_callback.call_count) @mock.patch.object(subprocess.Popen, "communicate") def test_execute_with_callback_and_errors(self, mock_comm): on_execute_callback = mock.Mock() on_completion_callback = mock.Mock() def fake_communicate(*args): raise IOError("Broken pipe") mock_comm.side_effect = fake_communicate self.assertRaises(IOError, processutils.execute, TRUE_UTILITY, on_execute=on_execute_callback, on_completion=on_completion_callback) self.assertEqual(1, on_execute_callback.call_count) self.assertEqual(1, on_completion_callback.call_count) def test_execute_with_preexec_fn(self): # NOTE(dims): preexec_fn is set to a callable object, this object # will be called in the child process just before the child is # executed. So we cannot pass share variables etc, simplest is to # check if a specific exception is thrown which can be caught here. 
def preexec_fn(): raise processutils.InvalidArgumentError() processutils.execute(TRUE_UTILITY) if six.PY2: self.assertRaises(processutils.InvalidArgumentError, processutils.execute, TRUE_UTILITY, preexec_fn=preexec_fn) else: try: processutils.execute(TRUE_UTILITY, preexec_fn=preexec_fn) except Exception as e: if type(e).__name__ != 'SubprocessError': raise @mock.patch.object(os, 'name', 'nt') @mock.patch.object(processutils.subprocess, "Popen") @mock.patch.object(processutils, 'tpool', create=True) def _test_windows_execute(self, mock_tpool, mock_popen, use_eventlet=False): # We want to ensure that if eventlet is used on Windows, # 'communicate' calls are wrapped with eventlet.tpool.execute. mock_comm = mock_popen.return_value.communicate mock_comm.return_value = None mock_tpool.execute.return_value = mock_comm.return_value fake_pinput = 'fake pinput'.encode('utf-8') with mock.patch.object(processutils, 'eventlet_patched', use_eventlet): processutils.execute( TRUE_UTILITY, process_input=fake_pinput, check_exit_code=False) mock_popen.assert_called_once_with( [TRUE_UTILITY], stdin=mock.ANY, stdout=mock.ANY, stderr=mock.ANY, close_fds=mock.ANY, preexec_fn=mock.ANY, shell=mock.ANY, cwd=mock.ANY, env=mock.ANY) if use_eventlet: mock_tpool.execute.assert_called_once_with( mock_comm, fake_pinput) else: mock_comm.assert_called_once_with(fake_pinput) def test_windows_execute_without_eventlet(self): self._test_windows_execute() def test_windows_execute_using_eventlet(self): self._test_windows_execute(use_eventlet=True) class ProcessExecutionErrorTest(test_base.BaseTestCase): def test_defaults(self): err = processutils.ProcessExecutionError() self.assertIn('None\n', six.text_type(err)) self.assertIn('code: -\n', six.text_type(err)) def test_with_description(self): description = 'The Narwhal Bacons at Midnight' err = processutils.ProcessExecutionError(description=description) self.assertIn(description, six.text_type(err)) def test_with_exit_code(self): exit_code = 0 err = 
processutils.ProcessExecutionError(exit_code=exit_code) self.assertIn(str(exit_code), six.text_type(err)) def test_with_cmd(self): cmd = 'telinit' err = processutils.ProcessExecutionError(cmd=cmd) self.assertIn(cmd, six.text_type(err)) def test_with_stdout(self): stdout = """ Lo, praise of the prowess of people-kings of spear-armed Danes, in days long sped, we have heard, and what honor the athelings won! Oft Scyld the Scefing from squadroned foes, from many a tribe, the mead-bench tore, awing the earls. Since erst he lay friendless, a foundling, fate repaid him: for he waxed under welkin, in wealth he throve, till before him the folk, both far and near, who house by the whale-path, heard his mandate, gave him gifts: a good king he! To him an heir was afterward born, a son in his halls, whom heaven sent to favor the folk, feeling their woe that erst they had lacked an earl for leader so long a while; the Lord endowed him, the Wielder of Wonder, with world's renown. """.strip() err = processutils.ProcessExecutionError(stdout=stdout) print(six.text_type(err)) self.assertIn('people-kings', six.text_type(err)) def test_with_stderr(self): stderr = 'Cottonian library' err = processutils.ProcessExecutionError(stderr=stderr) self.assertIn(stderr, six.text_type(err)) def test_retry_on_failure(self): fd, tmpfilename = tempfile.mkstemp() _, tmpfilename2 = tempfile.mkstemp() try: fp = os.fdopen(fd, 'w+') fp.write('''#!/bin/sh # If stdin fails to get passed during one of the runs, make a note. if ! grep -q foo then echo 'failure' > "$1" fi # If stdin has failed to get passed during this or a previous run, exit early. 
if grep failure "$1" then exit 1 fi runs="$(cat $1)" if [ -z "$runs" ] then runs=0 fi runs=$(($runs + 1)) echo $runs > "$1" exit 1 ''') fp.close() os.chmod(tmpfilename, 0o755) self.assertRaises(processutils.ProcessExecutionError, processutils.execute, tmpfilename, tmpfilename2, attempts=10, process_input=b'foo', delay_on_retry=False) fp = open(tmpfilename2, 'r') runs = fp.read() fp.close() self.assertNotEqual('failure', 'stdin did not ' 'always get passed ' 'correctly', runs.strip()) runs = int(runs.strip()) self.assertEqual(10, runs, 'Ran %d times instead of 10.' % (runs,)) finally: os.unlink(tmpfilename) os.unlink(tmpfilename2) def test_unknown_kwargs_raises_error(self): self.assertRaises(processutils.UnknownArgumentError, processutils.execute, '/usr/bin/env', 'true', this_is_not_a_valid_kwarg=True) def test_check_exit_code_boolean(self): processutils.execute('/usr/bin/env', 'false', check_exit_code=False) self.assertRaises(processutils.ProcessExecutionError, processutils.execute, '/usr/bin/env', 'false', check_exit_code=True) def test_check_cwd(self): tmpdir = tempfile.mkdtemp() out, err = processutils.execute('/usr/bin/env', 'sh', '-c', 'pwd', cwd=tmpdir) self.assertIn(tmpdir, out) def test_process_input_with_string(self): code = ';'.join(('import sys', 'print(len(sys.stdin.readlines()))')) args = [sys.executable, '-c', code] input = "\n".join(['foo', 'bar', 'baz']) stdout, stderr = processutils.execute(*args, process_input=input) self.assertEqual("3", stdout.rstrip()) def test_check_exit_code_list(self): processutils.execute('/usr/bin/env', 'sh', '-c', 'exit 101', check_exit_code=(101, 102)) processutils.execute('/usr/bin/env', 'sh', '-c', 'exit 102', check_exit_code=(101, 102)) self.assertRaises(processutils.ProcessExecutionError, processutils.execute, '/usr/bin/env', 'sh', '-c', 'exit 103', check_exit_code=(101, 102)) self.assertRaises(processutils.ProcessExecutionError, processutils.execute, '/usr/bin/env', 'sh', '-c', 'exit 0', check_exit_code=(101, 102)) 
def test_no_retry_on_success(self): fd, tmpfilename = tempfile.mkstemp() _, tmpfilename2 = tempfile.mkstemp() try: fp = os.fdopen(fd, 'w+') fp.write("""#!/bin/sh # If we've already run, bail out. grep -q foo "$1" && exit 1 # Mark that we've run before. echo foo > "$1" # Check that stdin gets passed correctly. grep foo """) fp.close() os.chmod(tmpfilename, 0o755) processutils.execute(tmpfilename, tmpfilename2, process_input=b'foo', attempts=2) finally: os.unlink(tmpfilename) os.unlink(tmpfilename2) # This test and the one below ensures that when communicate raises # an OSError, we do the right thing(s) def test_exception_on_communicate_error(self): mock = self.useFixture(fixtures.MockPatch( 'subprocess.Popen.communicate', side_effect=OSError(errno.EAGAIN, 'fake-test'))) self.assertRaises(OSError, processutils.execute, '/usr/bin/env', 'false', check_exit_code=False) self.assertEqual(1, mock.mock.call_count) def test_retry_on_communicate_error(self): mock = self.useFixture(fixtures.MockPatch( 'subprocess.Popen.communicate', side_effect=OSError(errno.EAGAIN, 'fake-test'))) self.assertRaises(OSError, processutils.execute, '/usr/bin/env', 'false', check_exit_code=False, attempts=5) self.assertEqual(5, mock.mock.call_count) def _test_and_check_logging_communicate_errors(self, log_errors=None, attempts=None): mock = self.useFixture(fixtures.MockPatch( 'subprocess.Popen.communicate', side_effect=OSError(errno.EAGAIN, 'fake-test'))) fixture = self.useFixture(fixtures.FakeLogger(level=logging.DEBUG)) kwargs = {} if log_errors: kwargs.update({"log_errors": log_errors}) if attempts: kwargs.update({"attempts": attempts}) self.assertRaises(OSError, processutils.execute, '/usr/bin/env', 'false', **kwargs) self.assertEqual(attempts if attempts else 1, mock.mock.call_count) self.assertIn('Got an OSError', fixture.output) self.assertIn('errno: %d' % errno.EAGAIN, fixture.output) self.assertIn("'/usr/bin/env false'", fixture.output) def test_logging_on_communicate_error_1(self): 
self._test_and_check_logging_communicate_errors( log_errors=processutils.LOG_FINAL_ERROR, attempts=None) def test_logging_on_communicate_error_2(self): self._test_and_check_logging_communicate_errors( log_errors=processutils.LOG_FINAL_ERROR, attempts=1) def test_logging_on_communicate_error_3(self): self._test_and_check_logging_communicate_errors( log_errors=processutils.LOG_FINAL_ERROR, attempts=5) def test_logging_on_communicate_error_4(self): self._test_and_check_logging_communicate_errors( log_errors=processutils.LOG_ALL_ERRORS, attempts=None) def test_logging_on_communicate_error_5(self): self._test_and_check_logging_communicate_errors( log_errors=processutils.LOG_ALL_ERRORS, attempts=1) def test_logging_on_communicate_error_6(self): self._test_and_check_logging_communicate_errors( log_errors=processutils.LOG_ALL_ERRORS, attempts=5) def test_with_env_variables(self): env_vars = {'SUPER_UNIQUE_VAR': 'The answer is 42'} out, err = processutils.execute('/usr/bin/env', env_variables=env_vars) self.assertIsInstance(out, str) self.assertIsInstance(err, str) self.assertIn('SUPER_UNIQUE_VAR=The answer is 42', out) def test_binary(self): env_vars = {'SUPER_UNIQUE_VAR': 'The answer is 42'} out, err = processutils.execute('/usr/bin/env', env_variables=env_vars, binary=True) self.assertIsInstance(out, bytes) self.assertIsInstance(err, bytes) self.assertIn(b'SUPER_UNIQUE_VAR=The answer is 42', out) def test_exception_and_masking(self): tmpfilename = self.create_tempfiles( [["test_exceptions_and_masking", TEST_EXCEPTION_AND_MASKING_SCRIPT]], ext='bash')[0] os.chmod(tmpfilename, (stat.S_IRWXU | stat.S_IRGRP | stat.S_IXGRP | stat.S_IROTH | stat.S_IXOTH)) err = self.assertRaises(processutils.ProcessExecutionError, processutils.execute, tmpfilename, 'password="secret"', 'something') self.assertEqual(38, err.exit_code) self.assertIsInstance(err.stdout, six.text_type) self.assertIsInstance(err.stderr, six.text_type) self.assertIn('onstdout --password="***"', err.stdout) 
self.assertIn('onstderr --password="***"', err.stderr) self.assertEqual(' '.join([tmpfilename, 'password="***"', 'something']), err.cmd) self.assertNotIn('secret', str(err)) def execute_undecodable_bytes(self, out_bytes, err_bytes, exitcode=0, binary=False): if six.PY3: code = ';'.join(('import sys', 'sys.stdout.buffer.write(%a)' % out_bytes, 'sys.stdout.flush()', 'sys.stderr.buffer.write(%a)' % err_bytes, 'sys.stderr.flush()', 'sys.exit(%s)' % exitcode)) else: code = ';'.join(('import sys', 'sys.stdout.write(%r)' % out_bytes, 'sys.stdout.flush()', 'sys.stderr.write(%r)' % err_bytes, 'sys.stderr.flush()', 'sys.exit(%s)' % exitcode)) return processutils.execute(sys.executable, '-c', code, binary=binary) def check_undecodable_bytes(self, binary): out_bytes = b'out: ' + UNDECODABLE_BYTES err_bytes = b'err: ' + UNDECODABLE_BYTES out, err = self.execute_undecodable_bytes(out_bytes, err_bytes, binary=binary) if six.PY3 and not binary: self.assertEqual(os.fsdecode(out_bytes), out) self.assertEqual(os.fsdecode(err_bytes), err) else: self.assertEqual(out, out_bytes) self.assertEqual(err, err_bytes) def test_undecodable_bytes(self): self.check_undecodable_bytes(False) def test_binary_undecodable_bytes(self): self.check_undecodable_bytes(True) def check_undecodable_bytes_error(self, binary): out_bytes = b'out: password="secret1" ' + UNDECODABLE_BYTES err_bytes = b'err: password="secret2" ' + UNDECODABLE_BYTES exc = self.assertRaises(processutils.ProcessExecutionError, self.execute_undecodable_bytes, out_bytes, err_bytes, exitcode=1, binary=binary) out = exc.stdout err = exc.stderr out_bytes = b'out: password="***" ' + UNDECODABLE_BYTES err_bytes = b'err: password="***" ' + UNDECODABLE_BYTES if six.PY3: # On Python 3, stdout and stderr attributes of # ProcessExecutionError must always be Unicode self.assertEqual(os.fsdecode(out_bytes), out) self.assertEqual(os.fsdecode(err_bytes), err) else: # On Python 2, stdout and stderr attributes of # ProcessExecutionError must always be 
bytes self.assertEqual(out_bytes, out) self.assertEqual(err_bytes, err) def test_undecodable_bytes_error(self): self.check_undecodable_bytes_error(False) def test_binary_undecodable_bytes_error(self): self.check_undecodable_bytes_error(True) def test_picklable(self): exc = processutils.ProcessExecutionError( stdout='my stdout', stderr='my stderr', exit_code=42, cmd='my cmd', description='my description') exc_message = str(exc) exc = pickle.loads(pickle.dumps(exc)) self.assertEqual('my stdout', exc.stdout) self.assertEqual('my stderr', exc.stderr) self.assertEqual(42, exc.exit_code) self.assertEqual('my cmd', exc.cmd) self.assertEqual('my description', exc.description) self.assertEqual(str(exc), exc_message) class ProcessExecutionErrorLoggingTest(test_base.BaseTestCase): def setUp(self): super(ProcessExecutionErrorLoggingTest, self).setUp() self.tmpfilename = self.create_tempfiles( [["process_execution_error_logging_test", PROCESS_EXECUTION_ERROR_LOGGING_TEST]], ext='bash')[0] os.chmod(self.tmpfilename, (stat.S_IRWXU | stat.S_IRGRP | stat.S_IXGRP | stat.S_IROTH | stat.S_IXOTH)) def _test_and_check(self, log_errors=None, attempts=None): fixture = self.useFixture(fixtures.FakeLogger(level=logging.DEBUG)) kwargs = {} if log_errors: kwargs.update({"log_errors": log_errors}) if attempts: kwargs.update({"attempts": attempts}) err = self.assertRaises(processutils.ProcessExecutionError, processutils.execute, self.tmpfilename, **kwargs) self.assertEqual(41, err.exit_code) self.assertIn(self.tmpfilename, fixture.output) def test_with_invalid_log_errors(self): self.assertRaises(processutils.InvalidArgumentError, processutils.execute, self.tmpfilename, log_errors='invalid') def test_with_log_errors_NONE(self): self._test_and_check(log_errors=None, attempts=None) def test_with_log_errors_final(self): self._test_and_check(log_errors=processutils.LOG_FINAL_ERROR, attempts=None) def test_with_log_errors_all(self): self._test_and_check(log_errors=processutils.LOG_ALL_ERRORS, 
attempts=None) def test_multiattempt_with_log_errors_NONE(self): self._test_and_check(log_errors=None, attempts=3) def test_multiattempt_with_log_errors_final(self): self._test_and_check(log_errors=processutils.LOG_FINAL_ERROR, attempts=3) def test_multiattempt_with_log_errors_all(self): self._test_and_check(log_errors=processutils.LOG_ALL_ERRORS, attempts=3) def fake_execute(*cmd, **kwargs): return 'stdout', 'stderr' def fake_execute_raises(*cmd, **kwargs): raise processutils.ProcessExecutionError(exit_code=42, stdout='stdout', stderr='stderr', cmd=['this', 'is', 'a', 'command']) class TryCmdTestCase(test_base.BaseTestCase): def test_keep_warnings(self): self.useFixture(fixtures.MonkeyPatch( 'oslo_concurrency.processutils.execute', fake_execute)) o, e = processutils.trycmd('this is a command'.split(' ')) self.assertNotEqual('', o) self.assertNotEqual('', e) def test_keep_warnings_from_raise(self): self.useFixture(fixtures.MonkeyPatch( 'oslo_concurrency.processutils.execute', fake_execute_raises)) o, e = processutils.trycmd('this is a command'.split(' '), discard_warnings=True) self.assertIsNotNone(o) self.assertNotEqual('', e) def test_discard_warnings(self): self.useFixture(fixtures.MonkeyPatch( 'oslo_concurrency.processutils.execute', fake_execute)) o, e = processutils.trycmd('this is a command'.split(' '), discard_warnings=True) self.assertIsNotNone(o) self.assertEqual('', e) class FakeSshChannel(object): def __init__(self, rc): self.rc = rc def recv_exit_status(self): return self.rc class FakeSshStream(six.BytesIO): def setup_channel(self, rc): self.channel = FakeSshChannel(rc) class FakeSshConnection(object): def __init__(self, rc, out=b'stdout', err=b'stderr'): self.rc = rc self.out = out self.err = err def exec_command(self, cmd, timeout=None): if timeout: raise socket.timeout() stdout = FakeSshStream(self.out) stdout.setup_channel(self.rc) return (six.BytesIO(), stdout, six.BytesIO(self.err)) class SshExecuteTestCase(test_base.BaseTestCase): def 
test_invalid_addl_env(self): self.assertRaises(processutils.InvalidArgumentError, processutils.ssh_execute, None, 'ls', addl_env='important') def test_invalid_process_input(self): self.assertRaises(processutils.InvalidArgumentError, processutils.ssh_execute, None, 'ls', process_input='important') def test_timeout_error(self): self.assertRaises(socket.timeout, processutils.ssh_execute, FakeSshConnection(0), 'ls', timeout=10) def test_works(self): out, err = processutils.ssh_execute(FakeSshConnection(0), 'ls') self.assertEqual('stdout', out) self.assertEqual('stderr', err) self.assertIsInstance(out, six.text_type) self.assertIsInstance(err, six.text_type) def test_binary(self): o, e = processutils.ssh_execute(FakeSshConnection(0), 'ls', binary=True) self.assertEqual(b'stdout', o) self.assertEqual(b'stderr', e) self.assertIsInstance(o, bytes) self.assertIsInstance(e, bytes) def check_undecodable_bytes(self, binary): out_bytes = b'out: ' + UNDECODABLE_BYTES err_bytes = b'err: ' + UNDECODABLE_BYTES conn = FakeSshConnection(0, out=out_bytes, err=err_bytes) out, err = processutils.ssh_execute(conn, 'ls', binary=binary) if six.PY3 and not binary: self.assertEqual(os.fsdecode(out_bytes), out) self.assertEqual(os.fsdecode(err_bytes), err) else: self.assertEqual(out_bytes, out) self.assertEqual(err_bytes, err) def test_undecodable_bytes(self): self.check_undecodable_bytes(False) def test_binary_undecodable_bytes(self): self.check_undecodable_bytes(True) def check_undecodable_bytes_error(self, binary): out_bytes = b'out: password="secret1" ' + UNDECODABLE_BYTES err_bytes = b'err: password="secret2" ' + UNDECODABLE_BYTES conn = FakeSshConnection(1, out=out_bytes, err=err_bytes) out_bytes = b'out: password="***" ' + UNDECODABLE_BYTES err_bytes = b'err: password="***" ' + UNDECODABLE_BYTES exc = self.assertRaises(processutils.ProcessExecutionError, processutils.ssh_execute, conn, 'ls', binary=binary, check_exit_code=True) out = exc.stdout err = exc.stderr if six.PY3: # On Python 
3, stdout and stderr attributes of # ProcessExecutionError must always be Unicode self.assertEqual(os.fsdecode(out_bytes), out) self.assertEqual(os.fsdecode(err_bytes), err) else: # On Python 2, stdout and stderr attributes of # ProcessExecutionError must always be bytes self.assertEqual(out_bytes, out) self.assertEqual(err_bytes, err) def test_undecodable_bytes_error(self): self.check_undecodable_bytes_error(False) def test_binary_undecodable_bytes_error(self): self.check_undecodable_bytes_error(True) def test_fails(self): self.assertRaises(processutils.ProcessExecutionError, processutils.ssh_execute, FakeSshConnection(1), 'ls') def _test_compromising_ssh(self, rc, check): fixture = self.useFixture(fixtures.FakeLogger(level=logging.DEBUG)) fake_stdin = six.BytesIO() fake_stdout = mock.Mock() fake_stdout.channel.recv_exit_status.return_value = rc fake_stdout.read.return_value = b'password="secret"' fake_stderr = mock.Mock() fake_stderr.read.return_value = b'password="foobar"' command = 'ls --password="bar"' connection = mock.Mock() connection.exec_command.return_value = (fake_stdin, fake_stdout, fake_stderr) if check and rc != -1 and rc != 0: err = self.assertRaises(processutils.ProcessExecutionError, processutils.ssh_execute, connection, command, check_exit_code=check) self.assertEqual(rc, err.exit_code) self.assertEqual('password="***"', err.stdout) self.assertEqual('password="***"', err.stderr) self.assertEqual('ls --password="***"', err.cmd) self.assertNotIn('secret', str(err)) self.assertNotIn('foobar', str(err)) # test ssh_execute with sanitize_stdout=False err = self.assertRaises(processutils.ProcessExecutionError, processutils.ssh_execute, connection, command, check_exit_code=check, sanitize_stdout=False) self.assertEqual(rc, err.exit_code) self.assertEqual('password="***"', err.stdout) self.assertEqual('password="***"', err.stderr) self.assertEqual('ls --password="***"', err.cmd) self.assertNotIn('secret', str(err)) self.assertNotIn('foobar', str(err)) 
else: o, e = processutils.ssh_execute(connection, command, check_exit_code=check) self.assertEqual('password="***"', o) self.assertEqual('password="***"', e) self.assertIn('password="***"', fixture.output) self.assertNotIn('bar', fixture.output) # test ssh_execute with sanitize_stdout=False o, e = processutils.ssh_execute(connection, command, check_exit_code=check, sanitize_stdout=False) self.assertEqual('password="secret"', o) self.assertEqual('password="***"', e) self.assertIn('password="***"', fixture.output) self.assertNotIn('bar', fixture.output) def test_compromising_ssh1(self): self._test_compromising_ssh(rc=-1, check=True) def test_compromising_ssh2(self): self._test_compromising_ssh(rc=0, check=True) def test_compromising_ssh3(self): self._test_compromising_ssh(rc=1, check=True) def test_compromising_ssh4(self): self._test_compromising_ssh(rc=1, check=False) def test_compromising_ssh5(self): self._test_compromising_ssh(rc=0, check=False) def test_compromising_ssh6(self): self._test_compromising_ssh(rc=-1, check=False) class PrlimitTestCase(test_base.BaseTestCase): # Simply program that does nothing and returns an exit code 0. # Use Python to be portable. SIMPLE_PROGRAM = [sys.executable, '-c', 'pass'] def soft_limit(self, res, substract, default_limit): # Create a new soft limit for a resource, lower than the current # soft limit. soft_limit, hard_limit = resource.getrlimit(res) if soft_limit <= 0: soft_limit = default_limit else: soft_limit -= substract return soft_limit def memory_limit(self, res): # Substract 1 kB just to get a different limit. Don't substract too # much to avoid memory allocation issues. # # Use 1 GB by default. Limit high enough to be able to load shared # libraries. Limit low enough to be work on 32-bit platforms. 
return self.soft_limit(res, 1024, 1024 ** 3) def limit_address_space(self): max_memory = self.memory_limit(resource.RLIMIT_AS) return processutils.ProcessLimits(address_space=max_memory) def test_simple(self): # Simple test running a program (/bin/true) with no parameter prlimit = self.limit_address_space() stdout, stderr = processutils.execute(*self.SIMPLE_PROGRAM, prlimit=prlimit) self.assertEqual('', stdout.rstrip()) self.assertEqual(stderr.rstrip(), '') def check_limit(self, prlimit, resource, value): code = ';'.join(('import resource', 'print(resource.getrlimit(resource.%s))' % resource)) args = [sys.executable, '-c', code] stdout, stderr = processutils.execute(*args, prlimit=prlimit) expected = (value, value) self.assertEqual(str(expected), stdout.rstrip()) def test_address_space(self): prlimit = self.limit_address_space() self.check_limit(prlimit, 'RLIMIT_AS', prlimit.address_space) def test_core_size(self): size = self.soft_limit(resource.RLIMIT_CORE, 1, 1024) prlimit = processutils.ProcessLimits(core_file_size=size) self.check_limit(prlimit, 'RLIMIT_CORE', prlimit.core_file_size) def test_cpu_time(self): time = self.soft_limit(resource.RLIMIT_CPU, 1, 1024) prlimit = processutils.ProcessLimits(cpu_time=time) self.check_limit(prlimit, 'RLIMIT_CPU', prlimit.cpu_time) def test_data_size(self): max_memory = self.memory_limit(resource.RLIMIT_DATA) prlimit = processutils.ProcessLimits(data_size=max_memory) self.check_limit(prlimit, 'RLIMIT_DATA', max_memory) def test_file_size(self): size = self.soft_limit(resource.RLIMIT_FSIZE, 1, 1024) prlimit = processutils.ProcessLimits(file_size=size) self.check_limit(prlimit, 'RLIMIT_FSIZE', prlimit.file_size) def test_memory_locked(self): max_memory = self.memory_limit(resource.RLIMIT_MEMLOCK) prlimit = processutils.ProcessLimits(memory_locked=max_memory) self.check_limit(prlimit, 'RLIMIT_MEMLOCK', max_memory) def test_resident_set_size(self): max_memory = self.memory_limit(resource.RLIMIT_RSS) prlimit = 
processutils.ProcessLimits(resident_set_size=max_memory) self.check_limit(prlimit, 'RLIMIT_RSS', max_memory) def test_number_files(self): nfiles = self.soft_limit(resource.RLIMIT_NOFILE, 1, 1024) prlimit = processutils.ProcessLimits(number_files=nfiles) self.check_limit(prlimit, 'RLIMIT_NOFILE', nfiles) def test_number_processes(self): nprocs = self.soft_limit(resource.RLIMIT_NPROC, 1, 65535) prlimit = processutils.ProcessLimits(number_processes=nprocs) self.check_limit(prlimit, 'RLIMIT_NPROC', nprocs) def test_stack_size(self): max_memory = self.memory_limit(resource.RLIMIT_STACK) prlimit = processutils.ProcessLimits(stack_size=max_memory) self.check_limit(prlimit, 'RLIMIT_STACK', max_memory) def test_unsupported_prlimit(self): self.assertRaises(ValueError, processutils.ProcessLimits, xxx=33) def test_relative_path(self): prlimit = self.limit_address_space() program = sys.executable env = dict(os.environ) env['PATH'] = os.path.dirname(program) args = [os.path.basename(program), '-c', 'pass'] processutils.execute(*args, prlimit=prlimit, env_variables=env) def test_execv_error(self): prlimit = self.limit_address_space() args = ['/missing_path/dont_exist/program'] try: processutils.execute(*args, prlimit=prlimit) except processutils.ProcessExecutionError as exc: self.assertEqual(1, exc.exit_code) self.assertEqual('', exc.stdout) expected = ('%s -m oslo_concurrency.prlimit: ' 'failed to execute /missing_path/dont_exist/program: ' % os.path.basename(sys.executable)) self.assertIn(expected, exc.stderr) else: self.fail("ProcessExecutionError not raised") def test_setrlimit_error(self): prlimit = self.limit_address_space() # trying to set a limit higher than the current hard limit # with setrlimit() should fail. 
higher_limit = prlimit.address_space + 1024 args = [sys.executable, '-m', 'oslo_concurrency.prlimit', '--as=%s' % higher_limit, '--'] args.extend(self.SIMPLE_PROGRAM) try: processutils.execute(*args, prlimit=prlimit) except processutils.ProcessExecutionError as exc: self.assertEqual(1, exc.exit_code) self.assertEqual('', exc.stdout) expected = ('%s -m oslo_concurrency.prlimit: ' 'failed to set the AS resource limit: ' % os.path.basename(sys.executable)) self.assertIn(expected, exc.stderr) else: self.fail("ProcessExecutionError not raised") @mock.patch.object(os, 'name', 'nt') @mock.patch.object(processutils.subprocess, "Popen") def test_prlimit_windows(self, mock_popen): # We want to ensure that process resource limits are # ignored on Windows, in which case this feature is not # supported. We'll just check the command passed to Popen, # which is expected to be unaltered. prlimit = self.limit_address_space() mock_popen.return_value.communicate.return_value = None processutils.execute( *self.SIMPLE_PROGRAM, prlimit=prlimit, check_exit_code=False) mock_popen.assert_called_once_with( self.SIMPLE_PROGRAM, stdin=mock.ANY, stdout=mock.ANY, stderr=mock.ANY, close_fds=mock.ANY, preexec_fn=mock.ANY, shell=mock.ANY, cwd=mock.ANY, env=mock.ANY) @mock.patch.object(processutils.subprocess, 'Popen') def test_python_exec(self, sub_mock): mock_subprocess = mock.MagicMock() mock_subprocess.communicate.return_value = (b'', b'') sub_mock.return_value = mock_subprocess args = ['/a/command'] prlimit = self.limit_address_space() processutils.execute(*args, prlimit=prlimit, check_exit_code=False, python_exec='/fake_path') python_path = sub_mock.mock_calls[0][1][0][0] self.assertEqual('/fake_path', python_path) oslo.concurrency-4.0.2/oslo_concurrency/tests/__init__.py0000664000175000017500000000130013643050652023703 0ustar zuulzuul00000000000000# Copyright 2014 Red Hat, Inc. 
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os

if os.environ.get('TEST_EVENTLET'):
    import eventlet
    eventlet.monkey_patch()
oslo.concurrency-4.0.2/oslo_concurrency/__init__.py0000664000175000017500000000000013643050652022535 0ustar zuulzuul00000000000000oslo.concurrency-4.0.2/oslo_concurrency/opts.py0000664000175000017500000000277313643050652022006 0ustar zuulzuul00000000000000# Copyright 2014 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import copy

from oslo_concurrency import lockutils

__all__ = [
    'list_opts',
]


def list_opts():
    """Return a list of oslo.config options available in the library.

    The returned list includes all oslo.config options which may be registered
    at runtime by the library.

    Each element of the list is a tuple. The first element is the name of the
    group under which the list of elements in the second element will be
    registered. A group name of None corresponds to the [DEFAULT] group in
    config files.
This function is also discoverable via the 'oslo_concurrency' entry point under the 'oslo.config.opts' namespace. The purpose of this is to allow tools like the Oslo sample config file generator to discover the options exposed to users by this library. :returns: a list of (group_name, opts) tuples """ return [('oslo_concurrency', copy.deepcopy(lockutils._opts))] oslo.concurrency-4.0.2/oslo_concurrency/watchdog.py0000664000175000017500000000453213643050652022614 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Watchdog module. .. versionadded:: 0.4 """ import contextlib import logging import threading from oslo_utils import timeutils @contextlib.contextmanager def watch(logger, action, level=logging.DEBUG, after=5.0): """Log a message if an operation exceeds a time threshold. This context manager is expected to be used when you are going to do an operation in code which might either deadlock or take an extraordinary amount of time, and you'd like to emit a status message back to the user that the operation is still ongoing but has not completed in an expected amount of time. This is more user friendly than logging 'start' and 'end' events and making users correlate the events to figure out they ended up in a deadlock. :param logger: an object that complies to the logger definition (has a .log method). :param action: a meaningful string that describes the thing you are about to do. 
:param level: the logging level the message should be emitted at. Defaults to logging.DEBUG. :param after: the duration in seconds before the message is emitted. Defaults to 5.0 seconds. Example usage:: FORMAT = '%(asctime)-15s %(message)s' logging.basicConfig(format=FORMAT) LOG = logging.getLogger('mylogger') with watchdog.watch(LOG, "subprocess call", logging.ERROR): subprocess.call("sleep 10", shell=True) print("done") """ watch = timeutils.StopWatch() watch.start() def log(): msg = "%s not completed after %0.3fs" % (action, watch.elapsed()) logger.log(level, msg) timer = threading.Timer(after, log) timer.start() try: yield finally: timer.cancel() timer.join() oslo.concurrency-4.0.2/CONTRIBUTING.rst0000664000175000017500000000104513643050652017511 0ustar zuulzuul00000000000000If you would like to contribute to the development of OpenStack, you must follow the steps in this page: https://docs.openstack.org/infra/manual/developers.html Once those steps have been completed, changes to OpenStack should be submitted for review via the Gerrit tool, following the workflow documented at: https://docs.openstack.org/infra/manual/developers.html#development-workflow Pull requests submitted through GitHub will be ignored. Bugs should be filed on Launchpad, not GitHub: https://bugs.launchpad.net/oslo.concurrency oslo.concurrency-4.0.2/doc/0000775000175000017500000000000013643050745015620 5ustar zuulzuul00000000000000oslo.concurrency-4.0.2/doc/requirements.txt0000664000175000017500000000063613643050652021106 0ustar zuulzuul00000000000000# The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later.
# this is required for the docs build jobs sphinx>=1.8.0,!=2.1.0 # BSD openstackdocstheme>=1.20.0 # Apache-2.0 reno>=2.5.0 # Apache-2.0 fixtures>=3.0.0 # Apache-2.0/BSD sphinxcontrib-apidoc>=0.2.0 # BSD oslo.concurrency-4.0.2/doc/source/0000775000175000017500000000000013643050745017120 5ustar zuulzuul00000000000000oslo.concurrency-4.0.2/doc/source/reference/0000775000175000017500000000000013643050745021056 5ustar zuulzuul00000000000000oslo.concurrency-4.0.2/doc/source/reference/opts.rst0000664000175000017500000000026613643050652022576 0ustar zuulzuul00000000000000============================== :mod:`oslo_concurrency.opts` ============================== .. automodule:: oslo_concurrency.opts :members: :undoc-members: :show-inheritance: oslo.concurrency-4.0.2/doc/source/reference/processutils.rst0000664000175000017500000000032613643050652024345 0ustar zuulzuul00000000000000====================================== :mod:`oslo_concurrency.processutils` ====================================== .. automodule:: oslo_concurrency.processutils :members: :undoc-members: :show-inheritance: oslo.concurrency-4.0.2/doc/source/reference/index.rst0000664000175000017500000000024313643050652022713 0ustar zuulzuul00000000000000=============== API Reference =============== .. toctree:: :maxdepth: 1 fixture.lockutils lockutils opts processutils watchdog api/modules oslo.concurrency-4.0.2/doc/source/reference/lockutils.rst0000664000175000017500000000031213643050652023612 0ustar zuulzuul00000000000000=================================== :mod:`oslo_concurrency.lockutils` =================================== .. automodule:: oslo_concurrency.lockutils :members: :undoc-members: :show-inheritance: oslo.concurrency-4.0.2/doc/source/reference/fixture.lockutils.rst0000664000175000017500000000035213643050652025303 0ustar zuulzuul00000000000000=========================================== :mod:`oslo_concurrency.fixture.lockutils` =========================================== .. 
automodule:: oslo_concurrency.fixture.lockutils :members: :undoc-members: :show-inheritance: oslo.concurrency-4.0.2/doc/source/reference/watchdog.rst0000664000175000017500000000030613643050652023404 0ustar zuulzuul00000000000000================================== :mod:`oslo_concurrency.watchdog` ================================== .. automodule:: oslo_concurrency.watchdog :members: :undoc-members: :show-inheritance: oslo.concurrency-4.0.2/doc/source/conf.py0000775000175000017500000000351713643050652020425 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. # -- General configuration ---------------------------------------------------- # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = [ 'sphinx.ext.autodoc', 'sphinxcontrib.apidoc', 'openstackdocstheme', 'oslo_config.sphinxext', ] # openstackdocstheme options repository_name = 'openstack/oslo.concurrency' bug_project = 'oslo.concurrency' bug_tag = '' # The master toctree document. master_doc = 'index' # General information about the project. copyright = u'2014, OpenStack Foundation' # If true, '()' will be appended to :func: etc. cross-reference text. add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). add_module_names = True # The name of the Pygments (syntax highlighting) style to use. 
pygments_style = 'sphinx' # -- Options for HTML output ------------------------------------------------- html_theme = 'openstackdocs' # -- sphinxcontrib.apidoc configuration -------------------------------------- apidoc_module_dir = '../../' apidoc_output_dir = 'reference/api' apidoc_excluded_paths = [ 'oslo_concurrency/tests', 'oslo_concurrency/_*', 'setup.py', ] oslo.concurrency-4.0.2/doc/source/index.rst0000664000175000017500000000110713643050652020755 0ustar zuulzuul00000000000000============================================ Welcome to oslo.concurrency's documentation! ============================================ The `oslo`_ concurrency library has utilities for safely running multi-thread, multi-process applications using locking mechanisms and for running external processes. .. toctree:: :maxdepth: 1 install/index admin/index user/index configuration/index contributor/index reference/index Indices and tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search` .. _oslo: https://wiki.openstack.org/wiki/Oslo oslo.concurrency-4.0.2/doc/source/install/0000775000175000017500000000000013643050745020566 5ustar zuulzuul00000000000000oslo.concurrency-4.0.2/doc/source/install/index.rst0000664000175000017500000000033013643050652022420 0ustar zuulzuul00000000000000============ Installation ============ At the command line:: $ pip install oslo.concurrency Or, if you have virtualenvwrapper installed:: $ mkvirtualenv oslo.concurrency $ pip install oslo.concurrencyoslo.concurrency-4.0.2/doc/source/contributor/0000775000175000017500000000000013643050745021472 5ustar zuulzuul00000000000000oslo.concurrency-4.0.2/doc/source/contributor/index.rst0000664000175000017500000000017313643050652023331 0ustar zuulzuul00000000000000===================== Contributor's Guide ===================== .. 
toctree:: :maxdepth: 2 contributing history oslo.concurrency-4.0.2/doc/source/contributor/history.rst0000664000175000017500000000004013643050652023714 0ustar zuulzuul00000000000000.. include:: ../../../ChangeLog oslo.concurrency-4.0.2/doc/source/contributor/contributing.rst0000664000175000017500000000012413643050652024725 0ustar zuulzuul00000000000000============== Contributing ============== .. include:: ../../../CONTRIBUTING.rst oslo.concurrency-4.0.2/doc/source/user/0000775000175000017500000000000013643050745020076 5ustar zuulzuul00000000000000oslo.concurrency-4.0.2/doc/source/user/index.rst0000664000175000017500000000743713643050652021747 0ustar zuulzuul00000000000000======= Usage ======= To use oslo.concurrency in a project, import the relevant module. For example:: from oslo_concurrency import lockutils from oslo_concurrency import processutils .. seealso:: * :doc:`API Documentation <../reference/index>` Locking a function (local to a process) ======================================= To ensure that a function (which is not thread safe) is only used in a thread safe manner (typically such a function should be refactored to avoid the problem, but if that is not possible the following can help):: @lockutils.synchronized('not_thread_safe') def not_thread_safe(): pass Once decorated, later callers of this function will be able to call into this method and the contract that two threads will **not** enter this function at the same time will be upheld. Make sure that the names of the locks used are carefully chosen (typically by namespacing them to your app so that other apps will not choose the same names).
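The guarantee this decorator provides can be sketched with only the standard library. The following is a conceptual stand-in, not the oslo implementation (which also supports external and fair locks); the lock name, function, and counter are illustrative only:

```python
import functools
import threading

# One process-local lock per lock name, held for the duration of each
# decorated call.  This mimics the contract of lockutils.synchronized.
_locks = {}
_locks_guard = threading.Lock()


def synchronized(name):
    def wrapper(func):
        @functools.wraps(func)
        def inner(*args, **kwargs):
            with _locks_guard:
                lock = _locks.setdefault(name, threading.Lock())
            with lock:
                return func(*args, **kwargs)
        return inner
    return wrapper


counter = 0


@synchronized('not_thread_safe')
def bump():
    # Unsafe read-modify-write on its own; safe here only because the
    # decorator serializes all callers using the 'not_thread_safe' lock.
    global counter
    value = counter
    counter = value + 1


threads = [threading.Thread(target=bump) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 50
```

With the decorator removed, the read-modify-write in ``bump`` could interleave between threads and lose increments; with it, the two-threads-never-inside rule makes the final count deterministic.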
Locking a function (local to a process as well as across processes) ===================================================================== To ensure that a function (which is not thread safe **or** multi-process safe) is only used in a safe manner (typically such a function should be refactored to avoid the problem, but if that is not possible the following can help):: @lockutils.synchronized('not_thread_process_safe', external=True) def not_thread_process_safe(): pass Once decorated, later callers of this function will be able to call into this method and the contract that two threads (or any two processes) will **not** enter this function at the same time will be upheld. Make sure that the names of the locks used are carefully chosen (typically by namespacing them to your app so that other apps will not choose the same names). Enabling fair locking ===================== By default there is no requirement that the lock is ``fair``. That is, it's possible for a thread to block waiting for the lock, then have another thread block waiting for the lock, and when the lock is released by the current owner the second waiter could acquire the lock before the first. In an extreme case you could have a whole string of other threads acquire the lock before the first waiter acquires it, resulting in unpredictable amounts of latency. For cases where this is a problem, it's possible to specify the use of fair locks:: @lockutils.synchronized('not_thread_process_safe', fair=True) def not_thread_process_safe(): pass When using fair locks the lock itself is slightly more expensive (which shouldn't matter in most cases), but it will ensure that all threads that block waiting for the lock will acquire it in the order that they blocked. The exception to this is when specifying both ``external`` and ``fair`` locks. In this case, the ordering *within* a given process will be fair, but the ordering *between* processes will be determined by the behaviour of the underlying OS.
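The FIFO ordering that ``fair=True`` guarantees can be illustrated with a conceptual queue-based lock built from the standard library. This sketches only the ordering contract, not how lockutils actually implements fair locks; the ``FairLock`` class and the worker harness are invented for illustration:

```python
import collections
import threading
import time


class FairLock:
    """Conceptual fair (FIFO) lock: waiters are woken strictly in the
    order in which they blocked, the guarantee of fair=True."""

    def __init__(self):
        self._guard = threading.Lock()
        self._waiters = collections.deque()
        self._held = False

    def acquire(self):
        with self._guard:
            if not self._held and not self._waiters:
                self._held = True
                return
            event = threading.Event()
            self._waiters.append(event)
        event.wait()  # ownership is handed directly to us by release()

    def release(self):
        with self._guard:
            if self._waiters:
                self._waiters.popleft().set()  # wake the oldest waiter
            else:
                self._held = False


order = []
lock = FairLock()
lock.acquire()  # hold the lock so every worker below has to queue


def worker(i):
    lock.acquire()
    order.append(i)
    lock.release()


threads = []
for i in range(5):
    t = threading.Thread(target=worker, args=(i,))
    t.start()
    while len(lock._waiters) <= i:  # wait until worker i has queued
        time.sleep(0.001)
    threads.append(t)

lock.release()  # hand off; workers then acquire in blocking order
for t in threads:
    t.join()
print(order)  # [0, 1, 2, 3, 4]
```

An unfair lock gives no such guarantee: any of the five workers could run first, which is exactly the latency hazard described above.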
Common ways to prefix/namespace the synchronized decorator ========================================================== Since it is **highly** recommended to prefix (or namespace) uses of the synchronized decorator, there are a few helpers that can make this much easier to achieve. An example is:: myapp_synchronized = lockutils.synchronized_with_prefix("myapp") Further code would then use the ``myapp_synchronized`` decorator created above instead of using ``lockutils.synchronized`` directly. Command Line Wrapper ==================== ``oslo.concurrency`` includes a command line tool for use in test jobs that need the environment variable :envvar:`OSLO_LOCK_PATH` set. To use it, prefix the command to be run with :command:`lockutils-wrapper`. For example:: $ lockutils-wrapper env | grep OSLO_LOCK_PATH OSLO_LOCK_PATH=/tmp/tmpbFHK45 oslo.concurrency-4.0.2/doc/source/configuration/0000775000175000017500000000000013643050745021767 5ustar zuulzuul00000000000000oslo.concurrency-4.0.2/doc/source/configuration/index.rst0000664000175000017500000000040013643050652023627 0ustar zuulzuul00000000000000======================= Configuration Options ======================= oslo.concurrency uses oslo.config to define and manage configuration options to allow the deployer to control how an application uses this library. .. show-options:: oslo.concurrency oslo.concurrency-4.0.2/doc/source/admin/0000775000175000017500000000000013643050745020210 5ustar zuulzuul00000000000000oslo.concurrency-4.0.2/doc/source/admin/index.rst0000664000175000017500000001151313643050652022047 0ustar zuulzuul00000000000000===================== Administrator Guide ===================== This section contains information useful to administrators operating a service that uses oslo.concurrency.
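The main deployer-facing knob is the ``lock_path`` option, which external (interprocess) locks require. A minimal configuration fragment might look like the following; the directory shown is a hypothetical example, and every process that must coordinate has to use the same value:

```ini
[oslo_concurrency]
# Directory in which external lock files are created.  There is no
# usable default for production; the directory must exist and be
# writable by the service.  /var/lib/myservice/locks is illustrative.
lock_path = /var/lib/myservice/locks
```

If the option is unset, ``lock_path`` falls back to the :envvar:`OSLO_LOCK_PATH` environment variable, which is what the ``lockutils-wrapper`` tool described in the usage guide sets for test jobs.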
Lock File Management ==================== For services that use oslo.concurrency's external lock functionality for interprocess locking, lock files will be stored in the location specified by the ``lock_path`` config option in the ``oslo_concurrency`` section. These lock files are not automatically deleted by oslo.concurrency because the library has no way to know when the service is done with a lock, and deleting a lock file that is being held by the service would cause concurrency problems. Some services do delete lock files when they are done with them, but deletion of a service's lock files while the service is running should only be done by the service itself. External cleanup methods cannot reasonably know when a lock is no longer needed. However, to prevent the ``lock_path`` directory from growing indefinitely, it is a good idea to occasionally delete all the lock files from it. The only safe time to do this is when the service is not running, such as after a reboot or when the service is down for maintenance. In the latter case, make sure that all related services (such as api, worker, conductor, etc.) are down. If any process that might hold locks is still running, deleting lock files may introduce inconsistency in the service. One possible approach to this cleanup is to put the ``lock_path`` in tmpfs so it will be automatically cleared on reboot. Note that in general, leftover lock files are a cosmetic nuisance at worst. If you do run into a functional problem as a result of large numbers of lock files, please report it to the Oslo team so we can look into other mitigation strategies. Frequently Asked Questions ========================== What is the history of the lock file issue? ------------------------------------------- It comes up every few months when a deployer of OpenStack notices that they have a large number of lock files lying around, apparently unused.
A thread is started on the mailing list and one of the Oslo developers has to provide an explanation of why it works the way it does. This FAQ is intended to be an official replacement for the one-off explanation that is usually given. The code responsible for this behavior has actually moved to the `fasteners `_ project, and there is an `issue addressing the leftover lock files `_ there. It covers much of the technical history of the problem, as well as some proposed solutions. Why hasn't this been fixed yet? ------------------------------- Because to the Oslo developers' knowledge, no one has ever had a functional issue as a result of leftover lock files. This makes it a lower priority problem, and because of the complexity of fixing it nobody has been able to yet. If functional issues are found, they should be reported as a bug against oslo.concurrency so they can be tracked. In the meantime, this will likely continue to be treated as a cosmetic annoyance and prioritized appropriately. Why aren't lock files deleted when the lock is released? -------------------------------------------------------- In our testing, when a lock file was deleted while another process was waiting for it, it created a sort of split-brain situation between any process that had been waiting for the deleted file, and any process that attempted to lock the file after it had been deleted. Essentially, two processes could end up holding the same lock at the same time, which made this an unacceptable solution. Why don't you use some other method of interprocess locking? ------------------------------------------------------------ We tried. Both Posix and SysV IPC were explored as alternatives. Unfortunately, both have significant issues on Linux. Posix locks cannot be broken if the process holding them crashes (at least not without a reboot). 
SysV locks have a limited range of numerical ids, and because oslo.concurrency supports string-based lock names, the possibility of collisions when hashing names was too high. It was deemed better to have the file-based locking mechanism that would always work than a different method that introduced serious new problems. Bonus Question: Why doesn't ``lock_path`` default to a temp directory? ---------------------------------------------------------------------- Because every process that may need to hold a lock must use the same value for ``lock_path`` or it becomes useless. If we allowed ``lock_path`` to be unset and just created a temp directory on startup, each process would create its own temp directory and there would be no actual coordination between them. While this isn't strictly related to the lock file issue, it is another FAQ about oslo.concurrency so it made sense to mention it here.
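The split-brain hazard described in the FAQ above (why lock files are not deleted on release) can be reproduced with a short POSIX-only sketch using the standard library's ``fcntl`` module. ``flock`` locks attach to the underlying inode, so deleting and recreating the lock file produces two simultaneous "holders"; the path used here is a hypothetical temporary file:

```python
import fcntl
import os
import tempfile

lock_dir = tempfile.mkdtemp()
path = os.path.join(lock_dir, 'demo.lock')

# Holder A opens the lock file and takes an exclusive, non-blocking lock.
fd_a = os.open(path, os.O_CREAT | os.O_RDWR)
fcntl.flock(fd_a, fcntl.LOCK_EX | fcntl.LOCK_NB)

# An external "cleanup" deletes the file while A still holds the lock.
os.unlink(path)

# Holder B recreates the path.  This is a new inode, so B's lock does
# not conflict with A's: both now believe they hold "the" lock.
fd_b = os.open(path, os.O_CREAT | os.O_RDWR)
fcntl.flock(fd_b, fcntl.LOCK_EX | fcntl.LOCK_NB)  # succeeds immediately

both_held = True  # neither flock() call raised BlockingIOError
os.close(fd_a)
os.close(fd_b)
```

Without the ``os.unlink`` step, B's second ``flock`` would raise ``BlockingIOError``, since both opens would refer to the same inode. This is the mutual-exclusion failure that makes deleting live lock files unacceptable.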