barbican-6.0.0/0000775000175100017510000000000013245511177013310 5ustar zuulzuul00000000000000
barbican-6.0.0/releasenotes/0000775000175100017510000000000013245511177016001 5ustar zuulzuul00000000000000
barbican-6.0.0/releasenotes/notes/0000775000175100017510000000000013245511177017131 5ustar zuulzuul00000000000000
barbican-6.0.0/releasenotes/notes/metadata-api-e95d4559e7bf9ca9.yaml0000666000175100017510000000046713245511001024664 0ustar zuulzuul00000000000000---
prelude: >
    The Mitaka release includes a new API to add arbitrary user-defined
    metadata to Secrets.
upgrade:
  - The Metadata API requires an update to the Database Schema. Existing
    deployments that are being upgraded to Mitaka should use the
    'barbican-manage' utility to update the schema.
barbican-6.0.0/releasenotes/notes/oslopolicy-genscripts-1a7b364b8ffd7c3f.yaml0000666000175100017510000000050513245511001026736 0ustar zuulzuul00000000000000---
features:
  - Maintain the policy rules in code and add an oslo.policy CLI script in
    tox to generate a sample policy file. The script can be invoked as
    "oslopolicy-sample-generator --config-file=etc/oslo-config-generator/policy.conf"
    and will generate a policy.yaml.sample file with the effective policy.
barbican-6.0.0/releasenotes/notes/multiple-backends-75f5b85c63b930b7.yaml0000666000175100017510000000211313245511001025547 0ustar zuulzuul00000000000000---
prelude: >
    Multiple secret store plugin backends can now be configured and used
    within a single deployment. With this change, a project administrator
    can pre-define a preferred plugin backend for storing their secrets.
    New APIs are added to manage this project-level secret store preference.
features:
  - New feature to support multiple secret store plugin backends. This
    feature is not enabled by default. To use this feature, the relevant
    feature flag needs to be enabled and supporting configuration needs to
    be added to the service configuration. Once enabled, a project
    administrator will be able to specify one of the available secret store
    backends as the preferred secret store for their project secrets. This
    secret store preference applies only to new secrets (key material)
    created or stored within that project. Existing secrets are not
    impacted. See
    http://docs.openstack.org/developer/barbican/setup/plugin_backends.html
    for instructions on how to set up multiple Barbican backends, and the
    API documentation for further details.
barbican-6.0.0/releasenotes/notes/barbican-manage-d469b4d15454f981.yaml0000666000175100017510000000121213245511001025065 0ustar zuulzuul00000000000000---
prelude: >
    This release includes a new command line utility, 'barbican-manage',
    that consolidates and supersedes the separate HSM and database
    management scripts.
features:
  - The 'barbican-manage' tool can be used to manage database schema
    changes as well as provision and rotate keys in the HSM backend.
deprecations:
  - The 'barbican-db-manage' script is deprecated. Use the new
    'barbican-manage' utility instead.
  - The 'pkcs11-kek-rewrap' script is deprecated. Use the new
    'barbican-manage' utility instead.
  - The 'pkcs11-key-generation' script is deprecated. Use the new
    'barbican-manage' utility instead.
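other:
  - |
    As a rough migration sketch, the deprecated scripts map to the new
    utility as follows (subcommand names are taken from the
    'barbican-manage' help and may vary between releases)::

        barbican-db-manage upgrade  ->  barbican-manage db upgrade
        pkcs11-kek-rewrap           ->  barbican-manage hsm rewrap_pkek
        pkcs11-key-generation       ->  barbican-manage hsm gen_mkek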
barbican-6.0.0/releasenotes/notes/remove_pkix-b045e7dde7e47356.yaml0000666000175100017510000000021013245511001024550 0ustar zuulzuul00000000000000---
deprecations:
  - |
    Removed the application/pkix media type because Barbican will not be
    using media types for format conversion.
barbican-6.0.0/releasenotes/notes/http_proxy_to_wsgi-middleware-98dc4fe03eb362d3.yaml0000666000175100017510000000101113245511001030360 0ustar zuulzuul00000000000000---
prelude: >
    This release adds the http_proxy_to_wsgi middleware to the pipeline.
features:
  - The 'http_proxy_to_wsgi' middleware can be used to help Barbican respond
    with the correct URL refs when it's put behind a TLS proxy (such as
    HAProxy). This middleware is disabled by default, but can be enabled via
    a configuration option in the oslo_middleware group.
upgrade:
  - The barbican-api-paste.ini configuration file for the paste pipeline was
    updated to add the http_proxy_to_wsgi middleware.
barbican-6.0.0/releasenotes/notes/use_oslo_config_generator-f2a9be9e71d90b1f.yaml0000666000175100017510000000013513245511001027605 0ustar zuulzuul00000000000000---
other:
  - oslo-config-generator is now used to generate a barbican.conf.sample file.
barbican-6.0.0/releasenotes/notes/pkcs11-backend-performance-f3caacbe9e1ab535.yaml0000666000175100017510000000211713245511001027474 0ustar zuulzuul00000000000000---
prelude: >
    This release includes significant improvements to the performance of the
    PKCS#11 Cryptographic Plugin driver. These changes will require a data
    migration of any existing data stored by previous versions of the
    PKCS#11 backend.
issues:
  - >
    The service will encounter errors if you attempt to run this new release
    using data stored by a previous version of the PKCS#11 Cryptographic
    Plugin that has not yet been migrated for this release. The logged
    errors will look like
    ``'P11CryptoPluginException: HSM returned response code: 0xc0L
    CKR_SIGNATURE_INVALID'``
upgrade:
  - >
    If you are upgrading from a previous version of barbican that uses the
    PKCS#11 Cryptographic Plugin driver, you will need to run the migration
    script ``python barbican/cmd/pkcs11_migrate_kek_signatures.py``
critical:
  - >
    If you are upgrading from a previous version of barbican that uses the
    PKCS#11 Cryptographic Plugin driver, you will need to run the migration
    script ``python barbican/cmd/pkcs11_migrate_kek_signatures.py``
barbican-6.0.0/releasenotes/notes/removing-cas-certificate-orders-96fc47a7acaea273.yaml0000666000175100017510000000363613245511001030536 0ustar zuulzuul00000000000000---
prelude: >
    This release gives notice that we will remove Certificate Orders and
    CAs from the API.
deprecations:
  - Certificate Orders
  - CAs
other:
  - Why are we deprecating Certificate Issuance? There are a few reasons
    that were considered for this decision. First, there does not seem to be
    a lot of interest in the community to fully develop the Certificate
    Authority integration with Barbican. We have a few outstanding
    blueprints that are needed to make Certificate Issuance fully
    functional, but so far no one has committed to getting the work done.
    Additionally, we've had very little buy-in from public Certificate
    Authorities. Both Symantec and Digicert were interested in integration
    in the past, but that interest didn't materialize into robust CA plugins
    as we had hoped it would. Secondly, there have been new developments in
    the space of Certificate Authorities since we started Barbican. The most
    significant of these was the launch of the Let's Encrypt public CA along
    with the definition of the ACME protocol for certificate issuance.
    We believe that future certificate authority services would do well to
    implement the ACME standard, which is quite different from the API the
    Barbican team had developed. Lastly, deprecating Certificate Issuance
    within Barbican will simplify both the architecture and deployment of
    Barbican. This will allow us to focus on the features that Barbican does
    well -- the secure storage of secret material.
  - Will Barbican still be able to store Certificates? Yes, absolutely! The
    only thing we're deprecating is the plugin interface that talks to
    Certificate Authorities and associated APIs. While you will not be able
    to use Barbican to issue a new certificate, you will always be able to
    securely store any certificates in Barbican, including those issued by
    public CAs or internal CAs.
barbican-6.0.0/releasenotes/notes/.placeholder0000666000175100017510000000000013245511001021366 0ustar zuulzuul00000000000000barbican-6.0.0/releasenotes/source/0000775000175100017510000000000013245511177017301 5ustar zuulzuul00000000000000barbican-6.0.0/releasenotes/source/newton.rst0000666000175100017510000000023213245511001021336 0ustar zuulzuul00000000000000===================================
 Newton Series Release Notes
===================================

.. release-notes::
   :branch: origin/stable/newton
barbican-6.0.0/releasenotes/source/_static/0000775000175100017510000000000013245511177020727 5ustar zuulzuul00000000000000barbican-6.0.0/releasenotes/source/_static/.placeholder0000666000175100017510000000000013245511001023164 0ustar zuulzuul00000000000000barbican-6.0.0/releasenotes/source/liberty.rst0000666000175100017510000000022213245511001021465 0ustar zuulzuul00000000000000==============================
 Liberty Series Release Notes
==============================

.. release-notes::
   :branch: origin/stable/liberty
barbican-6.0.0/releasenotes/source/pike.rst0000666000175100017510000000021713245511001020747 0ustar zuulzuul00000000000000===================================
 Pike Series Release Notes
===================================

.. release-notes::
   :branch: stable/pike
barbican-6.0.0/releasenotes/source/conf.py0000666000175100017510000002161013245511001020564 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Barbican Release Notes documentation build configuration file, created by
# sphinx-quickstart on Mon Nov 30 10:43:57 2015.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
# sys.path.insert(0, os.path.abspath('.')) # -- General configuration ------------------------------------------------ # If your documentation needs a minimal Sphinx version, state it here. # needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. extensions = [ 'openstackdocstheme', 'reno.sphinxext', ] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. # source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = u'Barbican Release Notes' copyright = u'2015, Barbican Developers' repository_name = 'openstack/barbican' bug_project = 'barbican' bug_tag = '' # Must set this variable to include year, month, day, hours, and minutes. html_last_updated_fmt = '%Y-%m-%d %H:%M' # Release notes are version independent. # The short X.Y version. version = '' # The full version, including alpha/beta/rc tags. release = '' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. # language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: # today = '' # Else, today_fmt is used as the format for a strftime call. # today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = [] # The reST default role (used for this markup: `text`) to use for all # documents. # default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. # add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). # add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. # show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. # modindex_common_prefix = [] # If true, keep warnings as "system message" paragraphs in the built documents. # keep_warnings = False # -- Options for HTML output ---------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'openstackdocs' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. # html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. # html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". # html_title = None # A shorter title for the navigation bar. Default is the same as html_title. # html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. # html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. # html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. 
They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # Add any extra paths that contain custom files (such as robots.txt or # .htaccess) here, relative to this directory. These files are copied # directly to the root of the documentation. # html_extra_path = [] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. # html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. # html_use_smartypants = True # Custom sidebar templates, maps document names to template names. # html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. # html_additional_pages = {} # If false, no module index is generated. # html_domain_indices = True # If false, no index is generated. # html_use_index = True # If true, the index is split into individual pages for each letter. # html_split_index = False # If true, links to the reST sources are added to the pages. # html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. # html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. # html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. # html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). # html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'BarbicanReleaseNotesdoc' # -- Options for LaTeX output --------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). # 'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). # 'pointsize': '10pt', # Additional stuff for the LaTeX preamble. # 'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [ ('index', 'BarbicanReleaseNotes.tex', u'Barbican Release Notes Documentation', u'Barbican Developers', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. # latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. # latex_use_parts = False # If true, show page references after internal links. # latex_show_pagerefs = False # If true, show URL addresses after external links. # latex_show_urls = False # Documents to append as an appendix to all manuals. # latex_appendices = [] # If false, no module index is generated. # latex_domain_indices = True # -- Options for manual page output --------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'barbicanreleasenotes', u'Barbican Release Notes Documentation', [u'Barbican Developers'], 1) ] # If true, show URL addresses after external links. # man_show_urls = False # -- Options for Texinfo output ------------------------------------------- # Grouping the document tree into Texinfo files. 
List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'BarbicanReleaseNotes', u'Barbican Release Notes Documentation', u'Barbican Developers', 'BarbicanReleaseNotes', 'Barbican Release Notes Documentation.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. # texinfo_appendices = [] # If false, no module index is generated. # texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. # texinfo_show_urls = 'footnote' # If true, do not generate a @detailmenu in the "Top" node's menu. # texinfo_no_detailmenu = False # -- Options for Internationalization output ------------------------------ locale_dirs = ['locale/'] barbican-6.0.0/releasenotes/source/locale/0000775000175100017510000000000013245511177020540 5ustar zuulzuul00000000000000barbican-6.0.0/releasenotes/source/locale/zh_CN/0000775000175100017510000000000013245511177021541 5ustar zuulzuul00000000000000barbican-6.0.0/releasenotes/source/locale/zh_CN/LC_MESSAGES/0000775000175100017510000000000013245511177023326 5ustar zuulzuul00000000000000barbican-6.0.0/releasenotes/source/locale/zh_CN/LC_MESSAGES/releasenotes.po0000666000175100017510000001046613245511001026352 0ustar zuulzuul00000000000000# Jeremy Liu , 2016. #zanata msgid "" msgstr "" "Project-Id-Version: Barbican Release Notes 5.0.0\n" "Report-Msgid-Bugs-To: \n" "POT-Creation-Date: 2017-08-11 03:08+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2016-07-12 01:44+0000\n" "Last-Translator: Jeremy Liu \n" "Language-Team: Chinese (China)\n" "Language: zh-CN\n" "X-Generator: Zanata 3.9.6\n" "Plural-Forms: nplurals=1; plural=0\n" msgid "2.0.0" msgstr "2.0.0" msgid "Barbican Release Notes" msgstr "Barbican发布说明" msgid "Contents:" msgstr "内容" msgid "Critical Issues" msgstr "严重的问题" msgid "Current Series Release Notes" msgstr "当前版本发布说明" msgid "Deprecation Notes" msgstr "弃用说明" msgid "" "If you are upgrading from previous version of barbican that uses the PKCS#11 " "Cryptographic Plugin driver, you will need to run the migration script" msgstr "" "如果你从先前使用PKCS#11密码插件作为驱动的barbican升级,你就必须执行迁移脚本。" msgid "Known Issues" msgstr "已知的问题" msgid "Liberty Series Release Notes" msgstr "Liberty版本发布说明" msgid "Mitaka Series Release Notes" msgstr "Mitaka版本发布说明" msgid "New Features" msgstr "新特性" msgid "Other Notes" msgstr "其他说明" msgid "Start using reno to manage release notes." msgstr "开始使用reno管理发布说明。" msgid "" "The 'barbican-db-manage' script is deprecated. Use the new 'barbican-" "manage' utility instead." msgstr "" "'barbican-db-manage'脚本已被弃用,使用新的通用的'barbican-manage'进行替代。" msgid "" "The 'barbican-manage' tool can be used to manage database schema changes as " "well as provision and rotate keys in the HSM backend." msgstr "" "'barbican-manage'工具能用来管理数据库条目的改变以及在HSM后端提供和循环密钥。" msgid "" "The 'pkcs11-kek-rewrap' script is deprecated. Use the new 'barbican-manage' " "utility instead." msgstr "" "'pkcs11-kek-rewrap'脚本已被弃用,使用新的通用的'barbican-manage'进行替代。" msgid "" "The 'pkcs11-key-generation' script is deprecated. Use the new 'barbican-" "manage' utility instead." msgstr "" "'pkcs11-key-generation'脚本已被弃用,使用新的通用的'barbican-manage'进行替" "代。" msgid "" "The Metadata API requires an update to the Database Schema. Existing " "deployments that are being upgraded to Mitaka should use the 'barbican-" "manage' utility to update the schema." 
msgstr "" "元数据API需要更新数据库条目。现有的升级到Mitaka版本的部署应该使用通用" "的'barbican-manage'升级数据库条目。" msgid "" "The Mitaka release includes a new API to add arbitrary user-defined metadata " "to Secrets." msgstr "Mitaka版本包含一个给秘密添加任意用户定义的元数据的API。" msgid "" "The service will encounter errors if you attempt to run this new release " "using data stored by a previous version of the PKCS#11 Cryptographic Plugin " "that has not yet been migrated for this release. The logged errors will " "look like" msgstr "" "如果你尝试使用还未移植的先前版本的PKCS#11密码插件存储的数据运行新版本,服务会" "出错。错误日志看起来像" msgid "" "This release includes a new command line utility 'barbican-manage' that " "consolidates and supersedes the separate HSM and database management scripts." msgstr "这个版本包括一个增强和替换单独的HSM和数据库管理脚本的通用命令行。" msgid "" "This release includes significant improvements to the performance of the " "PKCS#11 Cryptographic Plugin driver. These changes will require a data " "migration of any existing data stored by previous versions of the PKCS#11 " "backend." msgstr "" "这个版本包含对于PKCS#11密码插件驱动的重要改进。这些改进需要移植以前版本的" "PKCS#11后端存储的数据。" msgid "Upgrade Notes" msgstr "升级说明" msgid "" "``'P11CryptoPluginException: HSM returned response code: 0xc0L " "CKR_SIGNATURE_INVALID'``" msgstr "" "``'P11CryptoPluginException: HSM返回响应码:0xc0L CKR_SIGNATURE_INVALID'``" msgid "``python barbican/cmd/pkcs11_migrate_kek_signatures.py``" msgstr "``python barbican/cmd/pkcs11_migrate_kek_signatures.py``" barbican-6.0.0/releasenotes/source/locale/en_GB/0000775000175100017510000000000013245511177021512 5ustar zuulzuul00000000000000barbican-6.0.0/releasenotes/source/locale/en_GB/LC_MESSAGES/0000775000175100017510000000000013245511177023277 5ustar zuulzuul00000000000000barbican-6.0.0/releasenotes/source/locale/en_GB/LC_MESSAGES/releasenotes.po0000666000175100017510000003176613245511010026331 0ustar zuulzuul00000000000000# Andi Chandler , 2017. #zanata # Andi Chandler , 2018. #zanata msgid "" msgstr "" "Project-Id-Version: Barbican Release Notes\n" "Report-Msgid-Bugs-To: \n" "POT-Creation-Date: 2018-02-09 17:50+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2018-01-30 02:34+0000\n" "Last-Translator: Andi Chandler \n" "Language-Team: English (United Kingdom)\n" "Language: en-GB\n" "X-Generator: Zanata 3.9.6\n" "Plural-Forms: nplurals=2; plural=(n != 1)\n" msgid "1.0.0-5" msgstr "1.0.0-5" msgid "2.0.0" msgstr "2.0.0" msgid "3.0.0" msgstr "3.0.0" msgid "4.0.0" msgstr "4.0.0" msgid "5.0.0" msgstr "5.0.0" msgid "6.0.0.0b3" msgstr "6.0.0.0b3" msgid "Barbican Release Notes" msgstr "Barbican Release Notes" msgid "CAs" msgstr "CAs" msgid "Certificate Orders" msgstr "Certificate Orders" msgid "Contents:" msgstr "Contents:" msgid "Critical Issues" msgstr "Critical Issues" msgid "Current Series Release Notes" msgstr "Current Series Release Notes" msgid "Deprecation Notes" msgstr "Deprecation Notes" msgid "" "If you are upgrading from previous version of barbican that uses the PKCS#11 " "Cryptographic Plugin driver, you will need to run the migration script" msgstr "" "If you are upgrading from previous version of barbican that uses the PKCS#11 " "Cryptographic Plugin driver, you will need to run the migration script" msgid "Known Issues" msgstr "Known Issues" msgid "Liberty Series Release Notes" msgstr "Liberty Series Release Notes" msgid "" "Maintain the policy rules in code and add an oslo.policy CLI script in tox " "to generate policy sample file. 
The script can be called like \"oslopolicy-" "sample-generator --config-file=etc/oslo-config-generator/policy.conf\" and " "will generate a policy.yaml.sample file with the effective policy." msgstr "" "Maintain the policy rules in code and add an oslo.policy CLI script in tox " "to generate policy sample file. The script can be called like \"oslopolicy-" "sample-generator --config-file=etc/oslo-config-generator/policy.conf\" and " "will generate a policy.yaml.sample file with the effective policy." msgid "Mitaka Series Release Notes" msgstr "Mitaka Series Release Notes" msgid "New Features" msgstr "New Features" msgid "" "New feature to support multiple secret store plugin backends. This feature " "is not enabled by default. To use this feature, the relevant feature flag " "needs to be enabled and supporting configuration needs to be added in the " "service configuration. Once enabled, a project adminstrator will be able to " "specify one of the available secret store backends as a preferred secret " "store for their project secrets. This secret store preference applies only " "to new secrets (key material) created or stored within that project. " "Existing secrets are not impacted. See http://docs.openstack.org/developer/" "barbican/setup/plugin_backends.html for instructions on how to setup " "Barbican multiple backends, and the API documentation for further details." msgstr "" "New feature to support multiple secret store plugin backends. This feature " "is not enabled by default. To use this feature, the relevant feature flag " "needs to be enabled and supporting configuration needs to be added in the " "service configuration. Once enabled, a project administrator will be able to " "specify one of the available secret store backends as a preferred secret " "store for their project secrets. This secret store preference applies only " "to new secrets (key material) created or stored within that project. " "Existing secrets are not impacted. See http://docs.openstack.org/developer/" "barbican/setup/plugin_backends.html for instructions on how to setup " "Barbican multiple backends, and the API documentation for further details." msgid "Newton Series Release Notes" msgstr "Newton Series Release Notes" msgid "" "Now within a single deployment, multiple secret store plugin backends can be " "configured and used. With this change, a project adminstrator can pre-define " "a preferred plugin backend for storing their secrets. New APIs are added to " "manage this project level secret store preference." msgstr "" "Now within a single deployment, multiple secret store plugin backends can be " "configured and used. With this change, a project administrator can pre-" "define a preferred plugin backend for storing their secrets. New APIs are " "added to manage this project level secret store preference." msgid "Ocata Series Release Notes" msgstr "Ocata Series Release Notes" msgid "Other Notes" msgstr "Other Notes" msgid "Pike Series Release Notes" msgstr "Pike Series Release Notes" msgid "Prelude" msgstr "Prelude" msgid "" "Removed application/pkix media type because Barbican will not be using media " "types for format conversion." msgstr "" "Removed application/pkix media type because Barbican will not be using media " "types for format conversion." msgid "Start using reno to manage release notes." msgstr "Start using reno to manage release notes." msgid "" "The 'barbican-db-manage' script is deprecated. Use the new 'barbican-" "manage' utility instead." 
msgstr "" "The 'barbican-db-manage' script is deprecated. Use the new 'barbican-" "manage' utility instead." msgid "" "The 'barbican-manage' tool can be used to manage database schema changes as " "well as provision and rotate keys in the HSM backend." msgstr "" "The 'barbican-manage' tool can be used to manage database schema changes as " "well as provision and rotate keys in the HSM backend." msgid "" "The 'http_proxy_to_wsgi' middleware can be used to help barbican respond " "with the correct URL refs when it's put behind a TLS proxy (such as " "HAProxy). This middleware is disabled by default, but can be enabled via a " "configuration option in the oslo_middleware group." msgstr "" "The 'http_proxy_to_wsgi' middleware can be used to help Barbican respond " "with the correct URL refs when it's put behind a TLS proxy (such as " "HAProxy). This middleware is disabled by default, but can be enabled via a " "configuration option in the oslo_middleware group." msgid "" "The 'pkcs11-kek-rewrap' script is deprecated. Use the new 'barbican-manage' " "utility instead." msgstr "" "The 'pkcs11-kek-rewrap' script is deprecated. Use the new 'barbican-manage' " "utility instead." msgid "" "The 'pkcs11-key-generation' script is deprecated. Use the new 'barbican-" "manage' utility instead." msgstr "" "The 'pkcs11-key-generation' script is deprecated. Use the new 'barbican-" "manage' utility instead." msgid "" "The Metadata API requires an update to the Database Schema. Existing " "deployments that are being upgraded to Mitaka should use the 'barbican-" "manage' utility to update the schema." msgstr "" "The Metadata API requires an update to the Database Schema. Existing " "deployments that are being upgraded to Mitaka should use the 'barbican-" "manage' utility to update the schema." msgid "" "The Mitaka release includes a new API to add arbitrary user-defined metadata " "to Secrets." msgstr "" "The Mitaka release includes a new API to add arbitrary user-defined metadata " "to Secrets." msgid "" "The barbican-api-paste.ini configuration file for the paste pipeline was " "updated to add the http_proxy_to_wsgi middleware." msgstr "" "The barbican-api-paste.ini configuration file for the paste pipeline was " "updated to add the http_proxy_to_wsgi middleware." msgid "" "The service will encounter errors if you attempt to run this new release " "using data stored by a previous version of the PKCS#11 Cryptographic Plugin " "that has not yet been migrated for this release. The logged errors will " "look like" msgstr "" "The service will encounter errors if you attempt to run this new release " "using data stored by a previous version of the PKCS#11 Cryptographic Plugin " "that has not yet been migrated for this release. The logged errors will " "look like" msgid "This release adds http_proxy_to_wsgi middleware to the pipeline." msgstr "This release adds http_proxy_to_wsgi middleware to the pipeline." msgid "" "This release includes a new command line utility 'barbican-manage' that " "consolidates and supersedes the separate HSM and database management scripts." msgstr "" "This release includes a new command line utility 'barbican-manage' that " "consolidates and supersedes the separate HSM and database management scripts." msgid "" "This release includes significant improvements to the performance of the " "PKCS#11 Cryptographic Plugin driver. These changes will require a data " "migration of any existing data stored by previous versions of the PKCS#11 " "backend." 
msgstr "" "This release includes significant improvements to the performance of the " "PKCS#11 Cryptographic Plugin driver. These changes will require a data " "migration of any existing data stored by previous versions of the PKCS#11 " "backend." msgid "" "This release notify that we will remove Certificate Orders and CAs from API." msgstr "" "This release notify that we will remove Certificate Orders and CAs from the " "API." msgid "Upgrade Notes" msgstr "Upgrade Notes" msgid "" "Why are we deprecating Certificate Issuance? There are a few reasons that " "were considered for this decision. First, there does not seem to be a lot " "of interest in the community to fully develop the Certificate Authority " "integration with Barbican. We have a few outstanding blueprints that are " "needed to make Certificate Issuance fully functional, but so far no one has " "committed to getting the work done. Additionally, we've had very little buy-" "in from public Certificate Authorities. Both Symantec and Digicert were " "interested in integration in the past, but that interest didn't materialize " "into robust CA plugins like we hoped it would. Secondly, there have been new " "developments in the space of Certificate Authorities since we started " "Barbican. The most significant of these was the launch of the Let's Encrypt " "public CA along with the definition of the ACME protocol for certificate " "issuance. We believe that future certificate authority services would do " "good to implement the ACME standard, which is quite different than the API " "the Barbican team had developed. Lastly, deprecating Certificate Issuance " "within Barbican will simplify both the architecture and deployment of " "Barbican. This will allow us to focus on the features that Barbican does " "well -- the secure storage of secret material." msgstr "" "Why are we deprecating Certificate Issuance? There are a few reasons that " "were considered for this decision. First, there does not seem to be a lot " "of interest in the community to fully develop the Certificate Authority " "integration with Barbican. We have a few outstanding blueprints that are " "needed to make Certificate Issuance fully functional, but so far no one has " "committed to getting the work done. Additionally, we've had very little buy-" "in from public Certificate Authorities. Both Symantec and Digicert were " "interested in integration in the past, but that interest didn't materialise " "into robust CA plugins like we hoped it would. Secondly, there have been new " "developments in the space of Certificate Authorities since we started " "Barbican. The most significant of these was the launch of the Let's Encrypt " "public CA along with the definition of the ACME protocol for certificate " "issuance. We believe that future certificate authority services would do " "good to implement the ACME standard, which is quite different than the API " "the Barbican team had developed. Lastly, deprecating Certificate Issuance " "within Barbican will simplify both the architecture and deployment of " "Barbican. This will allow us to focus on the features that Barbican does " "well -- the secure storage of secret material." msgid "" "Will Barbican still be able to store Certificates? Yes, absolutely! The " "only thing we're deprecating is the plugin interface that talks to " "Certificate Authorites and associated APIs. 
While you will not be able to "
"use Barbican to issue a new certificate, you will always be able to securely "
"store any certificates in Barbican, including those issued by public CAs or "
"internal CAs."
msgstr ""
"Will Barbican still be able to store Certificates? Yes, absolutely! The "
"only thing we're deprecating is the plugin interface that talks to "
"Certificate Authorities and associated APIs. While you will not be able to "
"use Barbican to issue a new certificate, you will always be able to securely "
"store any certificates in Barbican, including those issued by public CAs or "
"internal CAs."

msgid ""
"``'P11CryptoPluginException: HSM returned response code: 0xc0L "
"CKR_SIGNATURE_INVALID'``"
msgstr ""
"``'P11CryptoPluginException: HSM returned response code: 0xc0L "
"CKR_SIGNATURE_INVALID'``"

msgid "``python barbican/cmd/pkcs11_migrate_kek_signatures.py``"
msgstr "``python barbican/cmd/pkcs11_migrate_kek_signatures.py``"

msgid ""
"oslo-config-generator is now used to generate a barbican.conf.sample file"
msgstr ""
"oslo-config-generator is now used to generate a barbican.conf.sample file"
barbican-6.0.0/releasenotes/source/unreleased.rst0000666000175100017510000000016013245511001022145 0ustar zuulzuul00000000000000==============================
 Current Series Release Notes
==============================

.. release-notes::
barbican-6.0.0/releasenotes/source/index.rst0000666000175100017510000000025513245511001021130 0ustar zuulzuul00000000000000======================
Barbican Release Notes
======================

Contents:

.. toctree::
   :maxdepth: 1

   unreleased
   pike
   ocata
   newton
   mitaka
   liberty
barbican-6.0.0/releasenotes/source/mitaka.rst0000666000175100017510000000023213245511001021262 0ustar zuulzuul00000000000000===================================
 Mitaka Series Release Notes
===================================

.. release-notes::
   :branch: origin/stable/mitaka
barbican-6.0.0/releasenotes/source/ocata.rst0000666000175100017510000000023013245511001021103 0ustar zuulzuul00000000000000===================================
 Ocata Series Release Notes
===================================

.. release-notes::
   :branch: origin/stable/ocata
barbican-6.0.0/doc/0000775000175100017510000000000013245511177014055 5ustar zuulzuul00000000000000barbican-6.0.0/doc/source/0000775000175100017510000000000013245511177015355 5ustar zuulzuul00000000000000barbican-6.0.0/doc/source/api/0000775000175100017510000000000013245511177016126 5ustar zuulzuul00000000000000barbican-6.0.0/doc/source/api/reference/0000775000175100017510000000000013245511177020064 5ustar zuulzuul00000000000000barbican-6.0.0/doc/source/api/reference/acls.rst0000666000175100017510000005737213245511001021540 0ustar zuulzuul00000000000000*******************
ACL API - Reference
*******************

.. note::

    This feature is applicable only when Barbican is used in an authenticated
    pipeline i.e. integrated with Keystone.

.. note::

    Currently the access control list (ACL) settings defined for a container
    are not propagated down to associated secrets.

.. warning::

    This ACL documentation is a work in progress and may change in the near
    future.

Secret ACL API
===============

.. _get_secret_acl:

GET /v1/secrets/{uuid}/acl
##########################
Retrieve the ACL settings for a given secret. If no ACL is defined for that
secret, then `Default ACL `__ is returned.

Request/Response (With ACL defined):
************************************

.. code-block:: javascript

    Request:

    GET /v1/secrets/{uuid}/acl
    Headers:
        X-Auth-Token: {token_id}

    Response:

    HTTP/1.1 200 OK
    {
      "read":{
        "updated":"2015-05-12T20:08:47.644264",
        "created":"2015-05-12T19:23:44.019168",
        "users":[
          {user_id1},
          {user_id2},
          .....
        ],
        "project-access":{project-access-flag}
      }
    }

Request/Response (With no ACL defined):
***************************************

.. code-block:: javascript

    Request:

    GET /v1/secrets/{uuid}/acl
    Headers:
        X-Auth-Token: {token_id}

    Response:

    HTTP/1.1 200 OK
    {
      "read":{
        "project-access": true
      }
    }

HTTP Status Codes
*****************

+------+-----------------------------------------------------------------------------+
| Code | Description                                                                 |
+======+=============================================================================+
| 200  | Successful request.                                                         |
+------+-----------------------------------------------------------------------------+
| 401  | Missing or Invalid X-Auth-Token. Authentication required.                   |
+------+-----------------------------------------------------------------------------+
| 403  | User does not have permission to access this resource.                      |
+------+-----------------------------------------------------------------------------+
| 404  | Secret not found for the given UUID.                                        |
+------+-----------------------------------------------------------------------------+

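The following is a minimal sketch of reading a secret's ACL with Python's
``requests`` library. The endpoint, token, and secret UUID are illustrative
placeholders, not values defined by this API.

.. code-block:: python

    import requests

    BARBICAN = "https://barbican.example.com:9311"   # hypothetical endpoint
    TOKEN = "..."                                    # a valid Keystone token
    SECRET = "4befede0-fbde-4480-982c-b160c1014a47"  # hypothetical secret UUID

    # Read the ACL; if none was explicitly set, the default ACL is returned.
    resp = requests.get("%s/v1/secrets/%s/acl" % (BARBICAN, SECRET),
                        headers={"X-Auth-Token": TOKEN})
    resp.raise_for_status()
    print(resp.json())  # e.g. {"read": {"project-access": True}}
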
], "project-access":{project-access-flag} } } Response: HTTP/1.1 200 OK {"acl_ref": "https://{barbican_host}/v1/secrets/{uuid}/acl"} HTTP Status Codes ***************** +------+-----------------------------------------------------------------------------+ | Code | Description | +======+=============================================================================+ | 200 | Successfully set/replaced secret ACL. | +------+-----------------------------------------------------------------------------+ | 400 | Bad Request. | +------+-----------------------------------------------------------------------------+ | 401 | Missing or Invalid X-Auth-Token. Authentication required. | +------+-----------------------------------------------------------------------------+ | 403 | User does not have permission to access this resource. | +------+-----------------------------------------------------------------------------+ | 404 | Secret not found for the given UUID. | +------+-----------------------------------------------------------------------------+ | 415 | Unsupported Media Type. | +------+-----------------------------------------------------------------------------+ .. _patch_secret_acl: PATCH /v1/secrets/{uuid}/acl ############################ Updates existing ACL for a given secret. This method can be used to apply partial changes on existing ACL settings. Client can update the `users` list and enable or disable `project-access` flag for existing ACL. List of provided users replaces existing users if any. For an existing list of provided users from an ACL definition, pass empty list [] for `users`. Returns an ACL reference in success case. .. note:: PATCH API support will be changing in near future. Attributes ********** +----------------------------+----------+-----------------------------------------------+----------+ | Attribute Name | Type | Description | Default | +============================+==========+===============================================+==========+ | read | parent | ACL data for read operation. | None | | | element | | | +----------------------------+----------+-----------------------------------------------+----------+ | users | [string] | (optional) List of user ids. This needs to be | None | | | | a user id as returned by Keystone. | | +----------------------------+----------+-----------------------------------------------+----------+ | project-access | boolean | (optional) Flag to mark a secret private so | None | | | | that the user who created the secret and | | | | | ``users`` specified in above list can only | | | | | access the secret. Pass `false` to mark the | | | | | secret private. | | +----------------------------+----------+-----------------------------------------------+----------+ Request/Response (Updating project-access flag): ************************************************ .. code-block:: javascript PATCH /v1/secrets/{uuid}/acl Headers: Content-Type: application/json X-Auth-Token: {token_id} Body: { "read": { "project-access":false } } Response: HTTP/1.1 200 OK {"acl_ref": "https://{barbican_host}/v1/secrets/{uuid}/acl"} Request/Response (Removing all users from ACL): *********************************************** .. 
.. _patch_secret_acl:

PATCH /v1/secrets/{uuid}/acl
############################
Update the existing ACL for a given secret.

This method can be used to apply partial changes to existing ACL settings.
Clients can update the `users` list and enable or disable the
`project-access` flag for an existing ACL. The provided list of users
replaces the existing users, if any. To remove all existing users from an
ACL definition, pass an empty list [] for `users`.

Returns an ACL reference on success.

.. note::

    PATCH API support will be changing in the near future.

Attributes
**********

+----------------------------+----------+-----------------------------------------------+----------+
| Attribute Name             | Type     | Description                                   | Default  |
+============================+==========+===============================================+==========+
| read                       | parent   | ACL data for read operation.                  | None     |
|                            | element  |                                               |          |
+----------------------------+----------+-----------------------------------------------+----------+
| users                      | [string] | (optional) List of user ids. This needs to be | None     |
|                            |          | a user id as returned by Keystone.            |          |
+----------------------------+----------+-----------------------------------------------+----------+
| project-access             | boolean  | (optional) Flag to mark a secret private so   | None     |
|                            |          | that the user who created the secret and      |          |
|                            |          | ``users`` specified in the above list can     |          |
|                            |          | only access the secret. Pass `false` to mark  |          |
|                            |          | the secret private.                           |          |
+----------------------------+----------+-----------------------------------------------+----------+

Request/Response (Updating project-access flag):
************************************************

.. code-block:: javascript

    PATCH /v1/secrets/{uuid}/acl
    Headers:
        Content-Type: application/json
        X-Auth-Token: {token_id}

    Body:
    {
      "read": {
        "project-access":false
      }
    }

    Response:

    HTTP/1.1 200 OK
    {"acl_ref": "https://{barbican_host}/v1/secrets/{uuid}/acl"}

Request/Response (Removing all users from ACL):
***********************************************

.. code-block:: javascript

    PATCH /v1/secrets/{uuid}/acl
    Headers:
        Content-Type: application/json
        X-Auth-Token: {token_id}

    Body:
    {
      "read": {
        "users":[]
      }
    }

    Response:

    HTTP/1.1 200 OK
    {"acl_ref": "https://{barbican_host}/v1/secrets/{uuid}/acl"}

HTTP Status Codes
*****************

+------+-----------------------------------------------------------------------------+
| Code | Description                                                                 |
+======+=============================================================================+
| 200  | Successfully updated secret ACL.                                            |
+------+-----------------------------------------------------------------------------+
| 400  | Bad Request.                                                                |
+------+-----------------------------------------------------------------------------+
| 401  | Missing or Invalid X-Auth-Token. Authentication required.                   |
+------+-----------------------------------------------------------------------------+
| 403  | User does not have permission to access this resource.                      |
+------+-----------------------------------------------------------------------------+
| 404  | Secret not found for the given UUID.                                        |
+------+-----------------------------------------------------------------------------+
| 415  | Unsupported Media Type.                                                     |
+------+-----------------------------------------------------------------------------+

.. _delete_secret_acl:

DELETE /v1/secrets/{uuid}/acl
#############################
Delete the ACL for a given secret. No content is returned in the case of
successful deletion.

Request/Response:
*****************

.. code-block:: javascript

    DELETE /v1/secrets/{uuid}/acl
    Headers:
        X-Auth-Token: {token_id}

    Response:

    HTTP/1.1 200 OK

HTTP Status Codes
*****************

+------+-----------------------------------------------------------------------------+
| Code | Description                                                                 |
+======+=============================================================================+
| 200  | Successfully deleted secret ACL.                                            |
+------+-----------------------------------------------------------------------------+
| 401  | Missing or Invalid X-Auth-Token. Authentication required.                   |
+------+-----------------------------------------------------------------------------+
| 403  | User does not have permission to access this resource.                      |
+------+-----------------------------------------------------------------------------+
| 404  | Secret not found for the given UUID.                                        |
+------+-----------------------------------------------------------------------------+

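Tying the remaining operations together, here is a minimal sketch that
patches and then deletes a secret's ACL, again using the illustrative
placeholders from the sketches above. Container ACLs (documented below)
behave the same way with ``/v1/containers/{uuid}/acl`` paths.

.. code-block:: python

    import requests

    headers = {"X-Auth-Token": TOKEN}
    acl_url = "%s/v1/secrets/%s/acl" % (BARBICAN, SECRET)

    # Partially update the ACL: re-enable project-wide access while
    # leaving the existing users list untouched.
    resp = requests.patch(acl_url, headers=headers,
                          json={"read": {"project-access": True}})
    resp.raise_for_status()

    # Remove the ACL entirely; the secret falls back to the default ACL.
    resp = requests.delete(acl_url, headers=headers)
    resp.raise_for_status()
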
Container ACL API
=================

.. _get_container_acl:

GET /v1/containers/{uuid}/acl
#############################
Retrieve the ACL settings for a given container. If no ACL is defined for
that container, then `Default ACL `__ is returned.

Request/Response (With ACL defined):
************************************

.. code-block:: javascript

    Request:

    GET /v1/containers/{uuid}/acl
    Headers:
        X-Auth-Token: {token_id}

    Response:

    HTTP/1.1 200 OK
    {
      "read":{
        "updated":"2015-05-12T20:08:47.644264",
        "created":"2015-05-12T19:23:44.019168",
        "users":[
          {user_id1},
          {user_id2},
          .....
        ],
        "project-access":{project-access-flag}
      }
    }

Request/Response (With no ACL defined):
***************************************

.. code-block:: javascript

    Request:

    GET /v1/containers/{uuid}/acl
    Headers:
        X-Auth-Token: {token_id}

    Response:

    HTTP/1.1 200 OK
    {
      "read":{
        "project-access": true
      }
    }

HTTP Status Codes
*****************

+------+-----------------------------------------------------------------------------+
| Code | Description                                                                 |
+======+=============================================================================+
| 200  | Successful request.                                                         |
+------+-----------------------------------------------------------------------------+
| 401  | Missing or Invalid X-Auth-Token. Authentication required.                   |
+------+-----------------------------------------------------------------------------+
| 403  | User does not have permission to access this resource.                      |
+------+-----------------------------------------------------------------------------+
| 404  | Container not found for the given UUID.                                     |
+------+-----------------------------------------------------------------------------+

.. _put_container_acl:

PUT /v1/containers/{uuid}/acl
#############################
Create a new ACL or replace the existing ACL for a given container.

This call is used to add a new ACL for a container. If an ACL is already set
on a container, this method will replace it with the requested ACL settings.
A 200 response is returned in both cases, whether the ACL is being created
(the first explicit ACL) or replaced. To delete the existing users from an
ACL definition, pass an empty list [] for `users`.

Returns an ACL reference on success.

Attributes
**********

The ACL resource detailed on this page allows access to individual containers
to be controlled. This access is configured via operations on those
containers. Currently only the 'read' operation (which includes GET REST
actions) is supported.

+----------------------------+----------+-----------------------------------------------+----------+
| Attribute Name             | Type     | Description                                   | Default  |
+============================+==========+===============================================+==========+
| read                       | parent   | ACL data for read operation.                  | None     |
|                            | element  |                                               |          |
+----------------------------+----------+-----------------------------------------------+----------+
| users                      | [string] | (optional) List of user ids. This needs to be | []       |
|                            |          | a user id as returned by Keystone.            |          |
+----------------------------+----------+-----------------------------------------------+----------+
| project-access             | boolean  | (optional) Flag to mark a container private   | `true`   |
|                            |          | so that the user who created the container    |          |
|                            |          | and ``users`` specified in the above list can |          |
|                            |          | only access the container. Pass `false` to    |          |
|                            |          | mark the container private.                   |          |
+----------------------------+----------+-----------------------------------------------+----------+

Request/Response (Set or Replace ACL):
**************************************

.. code-block:: javascript

    PUT /v1/containers/{uuid}/acl
    Headers:
        Content-Type: application/json
        X-Auth-Token: {token_id}

    Body:
    {
      "read":{
        "users":[
          {user_id1},
          {user_id2},
          .....
        ],
        "project-access":{project-access-flag}
      }
    }

    Response:

    HTTP/1.1 200 OK
    {"acl_ref": "https://{barbican_host}/v1/containers/{uuid}/acl"}

HTTP Status Codes
*****************

+------+-----------------------------------------------------------------------------+
| Code | Description                                                                 |
+======+=============================================================================+
| 200  | Successfully set/replaced container ACL.                                    |
+------+-----------------------------------------------------------------------------+
| 400  | Bad Request.                                                                |
+------+-----------------------------------------------------------------------------+
| 401  | Missing or Invalid X-Auth-Token. Authentication required.                   |
+------+-----------------------------------------------------------------------------+
| 403  | User does not have permission to access this resource.                      |
+------+-----------------------------------------------------------------------------+
| 404  | Container not found for the given UUID.                                     |
+------+-----------------------------------------------------------------------------+
| 415  | Unsupported Media Type.                                                     |
+------+-----------------------------------------------------------------------------+

.. _patch_container_acl:

PATCH /v1/containers/{uuid}/acl
###############################
Update the existing ACL for a given container.

This method can be used to apply partial changes to existing ACL settings.
Clients can update the `users` list and enable or disable the
`project-access` flag for an existing ACL. The provided list of users
replaces the existing users, if any. To remove all existing users from an
ACL definition, pass an empty list [] for `users`.

Returns an ACL reference on success.

.. note::

    PATCH API support will be changing in the near future.

Attributes
**********

+----------------------------+----------+-----------------------------------------------+----------+
| Attribute Name             | Type     | Description                                   | Default  |
+============================+==========+===============================================+==========+
| read                       | parent   | ACL data for read operation.                  | None     |
|                            | element  |                                               |          |
+----------------------------+----------+-----------------------------------------------+----------+
| users                      | [string] | (optional) List of user ids. This needs to be | None     |
|                            |          | a user id as returned by Keystone.            |          |
+----------------------------+----------+-----------------------------------------------+----------+
| project-access             | boolean  | (optional) Flag to mark a container private   | None     |
|                            |          | so that the user who created the container    |          |
|                            |          | and ``users`` specified in the above list can |          |
|                            |          | only access the container. Pass `false` to    |          |
|                            |          | mark the container private.                   |          |
+----------------------------+----------+-----------------------------------------------+----------+

Request/Response (Updating project-access flag):
************************************************

.. code-block:: javascript

    PATCH /v1/containers/{uuid}/acl
    Headers:
        Content-Type: application/json
        X-Auth-Token: {token_id}

    Body:
    {
      "read": {
        "project-access":false
      }
    }

    Response:

    HTTP/1.1 200 OK
    {"acl_ref": "https://{barbican_host}/v1/containers/{uuid}/acl"}

Request/Response (Removing all users from ACL):
***********************************************

.. code-block:: javascript

    PATCH /v1/containers/{uuid}/acl
    Headers:
        Content-Type: application/json
        X-Auth-Token: {token_id}

    Body:
    {
      "read": {
        "users":[]
      }
    }

    Response:

    HTTP/1.1 200 OK
    {"acl_ref": "https://{barbican_host}/v1/containers/{uuid}/acl"}

HTTP Status Codes
*****************

+------+-----------------------------------------------------------------------------+
| Code | Description                                                                 |
+======+=============================================================================+
| 200  | Successfully updated container ACL.                                         |
+------+-----------------------------------------------------------------------------+
| 400  | Bad Request.                                                                |
+------+-----------------------------------------------------------------------------+
| 401  | Missing or Invalid X-Auth-Token. Authentication required.                   |
+------+-----------------------------------------------------------------------------+
| 403  | User does not have permission to access this resource.                      |
+------+-----------------------------------------------------------------------------+
| 404  | Container not found for the given UUID.                                     |
+------+-----------------------------------------------------------------------------+
| 415  | Unsupported Media Type.                                                     |
+------+-----------------------------------------------------------------------------+

.. _delete_container_acl:

DELETE /v1/containers/{uuid}/acl
################################
Delete the ACL for a given container. No content is returned in the case of
successful deletion.

Request/Response:
*****************

.. code-block:: javascript

    DELETE /v1/containers/{uuid}/acl
    Headers:
        X-Auth-Token: {token_id}

    Response:

    HTTP/1.1 200 OK

HTTP Status Codes
*****************

+------+-----------------------------------------------------------------------------+
| Code | Description                                                                 |
+======+=============================================================================+
| 200  | Successfully deleted container ACL.                                         |
+------+-----------------------------------------------------------------------------+
| 401  | Missing or Invalid X-Auth-Token. Authentication required.                   |
+------+-----------------------------------------------------------------------------+
| 403  | User does not have permission to access this resource.                      |
+------+-----------------------------------------------------------------------------+
| 404  | Container not found for the given UUID.                                     |
+------+-----------------------------------------------------------------------------+
barbican-6.0.0/doc/source/api/reference/secret_metadata.rst0000666000175100017510000003003013245511001023733 0ustar zuulzuul00000000000000*******************************
Secret Metadata API - Reference
*******************************

.. _get_secret_metadata:

GET /v1/secrets/{uuid}/metadata
###############################
Lists a secret's user-defined metadata. If a secret does not contain any
user metadata, an empty list will be returned.

Request:
********

.. code-block:: javascript

    GET /v1/secrets/{uuid}/metadata
    Headers:
        Accept: application/json
        X-Auth-Token: {token_id}

Response:
*********

.. code-block:: javascript

    {
      "metadata": {
        "description": "contains the AES key",
        "geolocation": "12.3456, -98.7654"
      }
    }

.. _secret_metadata_response_attributes:

Response Attributes
*******************

+----------+---------+--------------------------------------------------------------+
| Name     | Type    | Description                                                  |
+==========+=========+==============================================================+
| metadata | list    | Contains a list of the secret metadata's key/value pairs.   |
|          |         | The provided keys must be lowercase. If not, they will be   |
|          |         | converted to lowercase.                                      |
+----------+---------+--------------------------------------------------------------+

.. _secret_metadata_status_codes:

HTTP Status Codes
*****************

+------+-----------------------------------------------------------------------------+
| Code | Description                                                                 |
+======+=============================================================================+
| 200  | Successful Request                                                          |
+------+-----------------------------------------------------------------------------+
| 401  | Invalid X-Auth-Token or the token doesn't have permissions to access this   |
|      | resource.                                                                   |
+------+-----------------------------------------------------------------------------+
| 403  | Forbidden. The user has been authenticated, but is not authorized to        |
|      | retrieve secret metadata. This can be based on the user's role.             |
+------+-----------------------------------------------------------------------------+
| 404  | Not Found                                                                   |
+------+-----------------------------------------------------------------------------+

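A minimal sketch of listing a secret's user-defined metadata with Python's
``requests`` library; the endpoint, token, and secret UUID are illustrative
placeholders:

.. code-block:: python

    import requests

    BARBICAN = "https://barbican.example.com:9311"   # hypothetical endpoint
    TOKEN = "..."                                    # a valid Keystone token
    SECRET = "4befede0-fbde-4480-982c-b160c1014a47"  # hypothetical secret UUID

    resp = requests.get("%s/v1/secrets/%s/metadata" % (BARBICAN, SECRET),
                        headers={"X-Auth-Token": TOKEN,
                                 "Accept": "application/json"})
    resp.raise_for_status()
    # The response maps each user-defined key to its value.
    for key, value in resp.json()["metadata"].items():
        print(key, value)
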
_put_secret_metadata: PUT /v1/secrets/{uuid}/metadata ################################ Sets the metadata for a secret. Any metadata that was previously set will be deleted and replaced with this metadata. Parameters ********** +----------+---------+--------------------------------------------------------------+ | Name | Type | Description | +==========+=========+==============================================================+ | metadata | list | Contains a list of the secret metadata's key/value pairs. | | | | The provided keys must be lowercase. If not they will be | | | | converted to lowercase. | +----------+---------+--------------------------------------------------------------+ Request: ******** .. code-block:: javascript PUT /v1/secrets/{uuid}/metadata Headers: Content-Type: application/json X-Auth-Token: Content: { 'metadata': { 'description': 'contains the AES key', 'geolocation': '12.3456, -98.7654' } } Response: ********* .. code-block:: javascript 201 Created { "metadata_ref": "https://{barbican_host}/v1/secrets/{secret_uuid}/metadata" } HTTP Status Codes ***************** +------+-----------------------------------------------------------------------------+ | Code | Description | +======+=============================================================================+ | 201 | Successfully created/updated Secret Metadata | +------+-----------------------------------------------------------------------------+ | 400 | Bad Request | +------+-----------------------------------------------------------------------------+ | 401 | Invalid X-Auth-Token or the token doesn't have permissions to access this | | | resource. | +------+-----------------------------------------------------------------------------+ | 403 | Forbidden. The user has been authenticated, but is not authorized to | | | create secret metadata. This can be based on the user's role. | +------+-----------------------------------------------------------------------------+ .. _get_secret_metadatum: GET /v1/secrets/{uuid}/metadata/{key} ##################################### Retrieves a single key/value pair from a secret's user-defined metadata. Request: ******** .. code-block:: javascript GET /v1/secrets/{uuid}/metadata/{key} Headers: Accept: application/json X-Auth-Token: Response: ********* .. code-block:: javascript 200 OK { "key": "access-limit", "value": "0" } HTTP Status Codes ***************** +------+-----------------------------------------------------------------------------+ | Code | Description | +======+=============================================================================+ | 200 | Successful request | +------+-----------------------------------------------------------------------------+ | 401 | Invalid X-Auth-Token or the token doesn't have permissions to access this | | | resource | +------+-----------------------------------------------------------------------------+ | 403 | Forbidden. The user has been authenticated, but is not authorized to | | | retrieve secret metadata. This can be based on the user's role. | +------+-----------------------------------------------------------------------------+ | 404 | Not Found | +------+-----------------------------------------------------------------------------+
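The metadatum calls compose naturally into small scripts. The following is a minimal sketch using the Python ``requests`` library, illustrative only and not part of the Barbican codebase; ``BARBICAN_ENDPOINT``, ``TOKEN``, and ``SECRET_ID`` are placeholder values.

.. code-block:: python

    # Minimal sketch: read a single user-defined metadatum by key.
    # BARBICAN_ENDPOINT, TOKEN and SECRET_ID are placeholder values.
    import requests

    BARBICAN_ENDPOINT = "http://localhost:9311"
    TOKEN = "f9cf2d480ba3485f85bdb9d07a4959f1"
    SECRET_ID = "16f8d4f3-d3dd-4160-a5bd-8e5095a42613"

    url = "{}/v1/secrets/{}/metadata/access-limit".format(
        BARBICAN_ENDPOINT, SECRET_ID)
    resp = requests.get(url, headers={"X-Auth-Token": TOKEN,
                                      "Accept": "application/json"})
    if resp.status_code == 404:
        print("secret has no 'access-limit' metadatum")
    else:
        resp.raise_for_status()
        metadatum = resp.json()
        print(metadatum["key"], "=", metadatum["value"])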
.. _post_secret_metadatum: POST /v1/secrets/{uuid}/metadata/ ################################# Adds a new key/value pair to the secret's user metadata. The key sent in the request must not already exist in the metadata. The key should be lowercase; if it is not, it will automatically be converted to lowercase. Request: ******** .. code-block:: javascript POST /v1/secrets/{uuid}/metadata/ Headers: X-Auth-Token: Content-Type: application/json Content: { "key": "access-limit", "value": "11" } Response: ********* .. code-block:: javascript 201 Created Secret Metadata Location: http://example.com:9311/v1/secrets/{uuid}/metadata/access-limit { "key": "access-limit", "value": "11" } HTTP Status Codes ***************** +------+-----------------------------------------------------------------------------+ | Code | Description | +======+=============================================================================+ | 201 | Successful request | +------+-----------------------------------------------------------------------------+ | 400 | Bad Request | +------+-----------------------------------------------------------------------------+ | 401 | Invalid X-Auth-Token or the token doesn't have permissions to access this | | | resource. | +------+-----------------------------------------------------------------------------+ | 403 | Forbidden. The user has been authenticated, but is not authorized to | | | create secret metadata. This can be based on the user's role. | +------+-----------------------------------------------------------------------------+ | 409 | Conflict. The provided metadata key already exists. | +------+-----------------------------------------------------------------------------+ .. _put_secret_metadatum: PUT /v1/secrets/{uuid}/metadata/{key} ##################################### Updates an existing key/value pair in the secret's user metadata. The key sent in the request must already exist in the metadata. The key should be lowercase; if it is not, it will automatically be converted to lowercase. Request: ******** .. code-block:: javascript PUT /v1/secrets/{uuid}/metadata/{key} Headers: X-Auth-Token: Content-Type: application/json Content: { "key": "access-limit", "value": "11" } Response: ********* .. code-block:: javascript 200 OK { "key": "access-limit", "value": "11" } HTTP Status Codes ***************** +------+-----------------------------------------------------------------------------+ | Code | Description | +======+=============================================================================+ | 200 | Successful request | +------+-----------------------------------------------------------------------------+ | 400 | Bad Request | +------+-----------------------------------------------------------------------------+ | 401 | Invalid X-Auth-Token or the token doesn't have permissions to access this | | | resource. | +------+-----------------------------------------------------------------------------+ | 403 | Forbidden. The user has been authenticated, but is not authorized to | | | update secret metadata. This can be based on the user's role. | +------+-----------------------------------------------------------------------------+ | 404 | Not Found | +------+-----------------------------------------------------------------------------+ .. _delete_secret_metadatum: DELETE /v1/secrets/{uuid}/metadata/{key} ######################################## Deletes a secret's user-defined metadatum by key. Request: ******** .. code-block:: javascript DELETE /v1/secrets/{uuid}/metadata/{key} Headers: X-Auth-Token: Response: ********* ..
code-block:: javascript 204 No Content HTTP Status Codes ***************** +------+-----------------------------------------------------------------------------+ | Code | Description | +======+=============================================================================+ | 204 | Successful request | +------+-----------------------------------------------------------------------------+ | 401 | Invalid X-Auth-Token or the token doesn't have permissions to access this | | | resource. | +------+-----------------------------------------------------------------------------+ | 403 | Forbidden. The user has been authenticated, but is not authorized to | | | delete secret metadata. This can be based on the user's role. | +------+-----------------------------------------------------------------------------+ | 404 | Not Found | +------+-----------------------------------------------------------------------------+ barbican-6.0.0/doc/source/api/reference/store_backends.rst0000666000175100017510000004272613245511001023603 0ustar zuulzuul00000000000000***************************** Secret Stores API - Reference ***************************** Barbican provides an API to manage the secret stores available in a deployment. APIs are provided for listing the available secret stores and for managing a project-level secret store mapping. There are two types of secret store: the global default secret store, which is used for all projects, and a project's `preferred` secret store, which is used to store all *new* secrets created in that project. For an introduction to multiple store backends support, see :doc:`Using Multiple Secret Store Plugins ` . This document focuses on the details of the Barbican `/v1/secret-stores` REST API. When multiple secret store backends support is not enabled in the service configuration, all of these APIs will return a resource not found (HTTP status code 404) error. The error message text will highlight that the support is not enabled in the configuration. GET /v1/secret-stores ##################### A project administrator can request the list of available secret store backends. The response contains the list of secret stores that are currently configured in the Barbican deployment. If multiple store backends support is not enabled, this call will return a resource not found (404) error. .. _get_secret_stores_request_response: Request/Response: ***************** ..
code-block:: javascript Request: GET /v1/secret-stores Headers: X-Auth-Token: "f9cf2d480ba3485f85bdb9d07a4959f1" Accept: application/json Response: HTTP/1.1 200 OK Content-Type: application/json { "secret_stores":[ { "status": "ACTIVE", "updated": "2016-08-22T23:46:45.114283", "name": "PKCS11 HSM", "created": "2016-08-22T23:46:45.114283", "secret_store_ref": "http://localhost:9311/v1/secret-stores/4d27b7a7-b82f-491d-88c0-746bd67dadc8", "global_default": true, "crypto_plugin": "p11_crypto", "secret_store_plugin": "store_crypto" }, { "status": "ACTIVE", "updated": "2016-08-22T23:46:45.124554", "name": "KMIP HSM", "created": "2016-08-22T23:46:45.124554", "secret_store_ref": "http://localhost:9311/v1/secret-stores/93869b0f-60eb-4830-adb9-e2f7154a080b", "global_default": false, "crypto_plugin": null, "secret_store_plugin": "kmip_plugin" }, { "status": "ACTIVE", "updated": "2016-08-22T23:46:45.127866", "name": "Software Only Crypto", "created": "2016-08-22T23:46:45.127866", "secret_store_ref": "http://localhost:9311/v1/secret-stores/0da45858-9420-42fe-a269-011f5f35deaa", "global_default": false, "crypto_plugin": "simple_crypto", "secret_store_plugin": "store_crypto" } ] } .. _get_secret_stores_response_attributes: Response Attributes ******************* +---------------+--------+---------------------------------------------+ | Name | Type | Description | +===============+========+=============================================+ | secret_stores | list | A list of secret store references | +---------------+--------+---------------------------------------------+ | name | string | Store and crypto plugin names delimited by | | | | a '+' (plus) sign. | +---------------+--------+---------------------------------------------+ | secret_store | string | URL for referencing a specific secret store | | _ref | | | +---------------+--------+---------------------------------------------+ .. _get_secret_stores_status_codes: HTTP Status Codes ***************** +------+--------------------------------------------------------------------------+ | Code | Description | +======+==========================================================================+ | 200 | Successful Request | +------+--------------------------------------------------------------------------+ | 401 | Authentication error. Missing or invalid X-Auth-Token. | +------+--------------------------------------------------------------------------+ | 403 | The user was authenticated, but is not authorized to perform this action | +------+--------------------------------------------------------------------------+ | 404 | Not Found. When multiple secret store backends support is not enabled. | +------+--------------------------------------------------------------------------+
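The listing can be consumed programmatically to discover which backend is the global default. Below is a minimal sketch using the Python ``requests`` library, an illustration rather than an official client; ``BARBICAN_ENDPOINT`` and ``TOKEN`` are placeholder values.

.. code-block:: python

    # Minimal sketch: list the configured secret stores and flag the
    # global default. BARBICAN_ENDPOINT and TOKEN are placeholder values.
    import requests

    BARBICAN_ENDPOINT = "http://localhost:9311"
    TOKEN = "f9cf2d480ba3485f85bdb9d07a4959f1"

    resp = requests.get(BARBICAN_ENDPOINT + "/v1/secret-stores",
                        headers={"X-Auth-Token": TOKEN,
                                 "Accept": "application/json"})
    resp.raise_for_status()  # 404 here means multiple backends support is off
    for store in resp.json()["secret_stores"]:
        marker = " (global default)" if store["global_default"] else ""
        print(store["name"] + marker, "->", store["secret_store_ref"])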
GET /v1/secret-stores/{secret_store_id} ####################################### A project administrator (user with the admin role) can request the details of a secret store by its ID. The response indicates whether this secret store is currently configured as the global default. .. _get_secret_stores_id_request_response: Request/Response: ***************** .. code-block:: javascript Request: GET /v1/secret-stores/93869b0f-60eb-4830-adb9-e2f7154a080b Headers: X-Auth-Token: "f9cf2d480ba3485f85bdb9d07a4959f1" Accept: application/json Response: HTTP/1.1 200 OK Content-Type: application/json { "status": "ACTIVE", "updated": "2016-08-22T23:46:45.124554", "name": "KMIP HSM", "created": "2016-08-22T23:46:45.124554", "secret_store_ref": "http://localhost:9311/v1/secret-stores/93869b0f-60eb-4830-adb9-e2f7154a080b", "global_default": false, "crypto_plugin": null, "secret_store_plugin": "kmip_plugin" } .. _get_secret_stores_id_response_attributes: Response Attributes ******************* +------------------+---------+---------------------------------------------------------------+ | Name | Type | Description | +==================+=========+===============================================================+ | name | string | Store and crypto plugin names delimited by a '+' (plus) sign | +------------------+---------+---------------------------------------------------------------+ | global_default | boolean | Flag indicating whether this secret store is the global | | | | default | +------------------+---------+---------------------------------------------------------------+ | status | string | Status of the secret store | +------------------+---------+---------------------------------------------------------------+ | updated | time | Date and time the secret store was last updated | +------------------+---------+---------------------------------------------------------------+ | created | time | Date and time the secret store was created | +------------------+---------+---------------------------------------------------------------+ | secret_store_ref | string | URL for referencing a specific secret store | +------------------+---------+---------------------------------------------------------------+ .. _get_secret_stores_id_status_codes: HTTP Status Codes ***************** +------+--------------------------------------------------------------------------+ | Code | Description | +======+==========================================================================+ | 200 | Successful Request | +------+--------------------------------------------------------------------------+ | 401 | Authentication error. Missing or invalid X-Auth-Token. | +------+--------------------------------------------------------------------------+ | 403 | The user was authenticated, but is not authorized to perform this action | +------+--------------------------------------------------------------------------+ | 404 | Not Found. When multiple secret store backends support is not enabled or | | | that secret store id does not exist. | +------+--------------------------------------------------------------------------+ GET /v1/secret-stores/preferred ############################### A project administrator (user with the admin role) can request a reference to the preferred secret store if one was previously assigned. When a preferred secret store is set for a project, new project secrets are stored using that store backend. If multiple secret store support is not enabled, this resource will return a 404 (Not Found) error. .. _get_secret_stores_preferred_request_response: Request/Response: ***************** ..
code-block:: javascript Request: GET /v1/secret-stores/preferred Headers: X-Auth-Token: "f9cf2d480ba3485f85bdb9d07a4959f1" Accept: application/json Response: HTTP/1.1 200 OK Content-Type: application/json { "status": "ACTIVE", "updated": "2016-08-22T23:46:45.114283", "name": "PKCS11 HSM", "created": "2016-08-22T23:46:45.114283", "secret_store_ref": "http://localhost:9311/v1/secret-stores/4d27b7a7-b82f-491d-88c0-746bd67dadc8", "global_default": true, "crypto_plugin": "p11_crypto", "secret_store_plugin": "store_crypto" } .. _get_secret_stores_preferred_response_attributes: Response Attributes ******************* +------------------+--------+-----------------------------------------------+ | Name | Type | Description | +==================+========+===============================================+ | secret_store_ref | string | A URL that references a specific secret store | +------------------+--------+-----------------------------------------------+ .. _get_secret_stores_preferred_status_codes: HTTP Status Codes ***************** +------+--------------------------------------------------------------------------+ | Code | Description | +======+==========================================================================+ | 200 | Successful Request | +------+--------------------------------------------------------------------------+ | 401 | Authentication error. Missing or invalid X-Auth-Token. | +------+--------------------------------------------------------------------------+ | 403 | The user was authenticated, but is not authorized to perform this action | +------+--------------------------------------------------------------------------+ | 404 | Not found. No preferred secret store has been defined or multiple secret | | | store backends support is not enabled. | +------+--------------------------------------------------------------------------+ POST /v1/secret-stores/{secret_store_id}/preferred ################################################## A project administrator can set a secret store backend as the preferred store backend for their project. From then on, any new secret stored in that project will use the specified plugin backend for storage and subsequent reads. Existing secrets are not impacted, as each secret captures its plugin backend information when it is initially stored. If multiple secret store support is not enabled, this resource will return a 404 (Not Found) error. .. _post_secret_stores_id_preferred_request_response: Request/Response: ***************** .. code-block:: javascript Request: POST /v1/secret-stores/7776adb8-e865-413c-8ccc-4f09c3fe0213/preferred Headers: X-Auth-Token: "f9cf2d480ba3485f85bdb9d07a4959f1" Response: HTTP/1.1 204 No Content .. _post_secret_stores_id_preferred_status_codes: HTTP Status Codes ***************** +------+--------------------------------------------------------------------------+ | Code | Description | +======+==========================================================================+ | 204 | Successful Request | +------+--------------------------------------------------------------------------+ | 401 | Authentication error. Missing or invalid X-Auth-Token. | +------+--------------------------------------------------------------------------+ | 403 | The user was authenticated, but is not authorized to perform this action | +------+--------------------------------------------------------------------------+ | 404 | The requested entity was not found or multiple secret store backends | | | support is not enabled. | +------+--------------------------------------------------------------------------+
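Setting the preference is a single unauthenticated-body POST, so a script only needs the store id from the listing call. The sketch below uses the Python ``requests`` library and is illustrative only; ``BARBICAN_ENDPOINT``, ``TOKEN``, and ``STORE_ID`` are placeholder values, with ``STORE_ID`` taken from a prior GET /v1/secret-stores response.

.. code-block:: python

    # Minimal sketch: make one configured backend the preferred store for
    # the caller's project. All constants below are placeholder values.
    import requests

    BARBICAN_ENDPOINT = "http://localhost:9311"
    TOKEN = "f9cf2d480ba3485f85bdb9d07a4959f1"
    STORE_ID = "93869b0f-60eb-4830-adb9-e2f7154a080b"

    url = "{}/v1/secret-stores/{}/preferred".format(BARBICAN_ENDPOINT,
                                                    STORE_ID)
    resp = requests.post(url, headers={"X-Auth-Token": TOKEN})
    # A successful request returns 204 and no body; existing secrets keep
    # the backend they were originally stored in.
    assert resp.status_code == 204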
DELETE /v1/secret-stores/{secret_store_id}/preferred #################################################### A project administrator can remove the preferred secret store backend setting for their project. If multiple secret store support is not enabled, this resource will return a 404 (Not Found) error. .. _delete_secret_stores_id_preferred_request_response: Request/Response: ***************** .. code-block:: javascript Request: DELETE /v1/secret-stores/7776adb8-e865-413c-8ccc-4f09c3fe0213/preferred Headers: X-Auth-Token: "f9cf2d480ba3485f85bdb9d07a4959f1" Response: HTTP/1.1 204 No Content .. _delete_secret_stores_id_preferred_status_codes: HTTP Status Codes ***************** +------+--------------------------------------------------------------------------+ | Code | Description | +======+==========================================================================+ | 204 | Successful Request | +------+--------------------------------------------------------------------------+ | 401 | Authentication error. Missing or invalid X-Auth-Token. | +------+--------------------------------------------------------------------------+ | 403 | The user was authenticated, but is not authorized to perform this action | +------+--------------------------------------------------------------------------+ | 404 | The requested entity was not found or multiple secret store backends | | | support is not enabled. | +------+--------------------------------------------------------------------------+ GET /v1/secret-stores/global-default #################################### A project or service administrator can request a reference to the secret store that is used as the default secret store backend for the deployment. .. _get_secret_stores_global_default_request_response: Request/Response: ***************** .. code-block:: javascript Request: GET /v1/secret-stores/global-default Headers: X-Auth-Token: "f9cf2d480ba3485f85bdb9d07a4959f1" Accept: application/json Response: HTTP/1.1 200 OK Content-Type: application/json { "status": "ACTIVE", "updated": "2016-08-22T23:46:45.114283", "name": "PKCS11 HSM", "created": "2016-08-22T23:46:45.114283", "secret_store_ref": "http://localhost:9311/v1/secret-stores/4d27b7a7-b82f-491d-88c0-746bd67dadc8", "global_default": true, "crypto_plugin": "p11_crypto", "secret_store_plugin": "store_crypto" } .. _get_secret_stores_global_default_response_attributes: Response Attributes ******************* +------------------+--------+-----------------------------------------------+ | Name | Type | Description | +==================+========+===============================================+ | secret_store_ref | string | A URL that references a specific secret store | +------------------+--------+-----------------------------------------------+ .. _get_secret_stores_global_default_status_codes: HTTP Status Codes ***************** +------+--------------------------------------------------------------------------+ | Code | Description | +======+==========================================================================+ | 200 | Successful Request | +------+--------------------------------------------------------------------------+ | 401 | Authentication error. Missing or invalid X-Auth-Token.
| +------+--------------------------------------------------------------------------+ | 403 | The user was authenticated, but is not authorized to perform this action | +------+--------------------------------------------------------------------------+ | 404 | Not Found. When multiple secret store backends support is not enabled. | +------+--------------------------------------------------------------------------+ barbican-6.0.0/doc/source/api/reference/orders.rst0000666000175100017510000003374613245511001022115 0ustar zuulzuul00000000000000********************** Orders API - Reference ********************** .. warning:: DEPRECATION WARNING: The Certificates Order resource has been deprecated and will be removed in the P release. .. _get_orders: GET /v1/orders ############## Lists a project's orders. The list of orders can be filtered by the parameters passed in via the URL. .. _get_order_parameters: Parameters ********** +----------+---------+----------------------------------------------------------------+ | Name | Type | Description | +==========+=========+================================================================+ | offset | integer | The starting index within the total list of the orders that | | | | you would like to retrieve. (Default is 0) | +----------+---------+----------------------------------------------------------------+ | limit | integer | The maximum number of records to return (up to 100). | | | | (Default is 10) | +----------+---------+----------------------------------------------------------------+ .. _get_orders_request: Request: ******** .. code-block:: javascript GET /v1/orders Headers: Content-Type: application/json X-Auth-Token: {token} .. _get_orders_response: Response: ********* .. code-block:: none 200 Success { "orders": [ { "created": "2015-10-20T18:38:44", "creator_id": "40540f978fbd45c1af18910e3e02b63f", "meta": { "algorithm": "AES", "bit_length": 256, "expiration": null, "mode": "cbc", "name": "secretname", "payload_content_type": "application/octet-stream" }, "order_ref": "http://localhost:9311/v1/orders/2284ba6f-f964-4de7-b61e-c413df5d1e47", "secret_ref": "http://localhost:9311/v1/secrets/15dcf8e4-3138-4360-be9f-fc4bc2e64a19", "status": "ACTIVE", "sub_status": "Unknown", "sub_status_message": "Unknown", "type": "key", "updated": "2015-10-20T18:38:44" }, { "created": "2015-10-20T18:38:47", "creator_id": "40540f978fbd45c1af18910e3e02b63f", "meta": { "algorithm": "AES", "bit_length": 256, "expiration": null, "mode": "cbc", "name": "secretname", "payload_content_type": "application/octet-stream" }, "order_ref": "http://localhost:9311/v1/orders/87b7169e-3aa2-4cb1-8800-b5aadf6babd1", "secret_ref": "http://localhost:9311/v1/secrets/80183f4b-c0de-4a94-91ad-6d55251acee2", "status": "ACTIVE", "sub_status": "Unknown", "sub_status_message": "Unknown", "type": "key", "updated": "2015-10-20T18:38:47" } ], "total": 2 } .. _get_order_response_attributes: Response Attributes ******************* +----------+---------+--------------------------------------------------------------+ | Name | Type | Description | +==========+=========+==============================================================+ | orders | list | Contains a list of dictionaries filled with order metadata. | +----------+---------+--------------------------------------------------------------+ | total | integer | The total number of orders available to the user. 
| +----------+---------+--------------------------------------------------------------+ | next | string | A HATEOAS URL to retrieve the next set of objects based on | | | | the offset and limit parameters. This attribute is only | | | | available when the total number of objects is greater than | | | | offset and limit parameter combined. | +----------+---------+--------------------------------------------------------------+ | previous | string | A HATEOAS URL to retrieve the previous set of objects based | | | | on the offset and limit parameters. This attribute is only | | | | available when the request offset is greater than 0. | +----------+---------+--------------------------------------------------------------+ .. _get_order_status_codes: HTTP Status Codes ***************** +------+-----------------------------------------------------------------------------+ | Code | Description | +======+=============================================================================+ | 200 | Successful Request | +------+-----------------------------------------------------------------------------+ | 401 | Invalid X-Auth-Token or the token doesn't have permissions to this resource | +------+-----------------------------------------------------------------------------+ .. _post_orders: POST /v1/orders ############### Creates an order. Parameters ********** +----------------------------+---------+----------------------------------------------+------------+ | Attribute Name | Type | Description | Default | +============================+=========+==============================================+============+ | type | string | The type of key to be generated. Valid types | None | | | | are key, asymmetric, and certificate. | | +----------------------------+---------+----------------------------------------------+------------+ | meta | dict | Dictionary containing the secret metadata | None | | | | used to generate the secret. | | +----------------------------+---------+----------------------------------------------+------------+ .. _post_orders_request: Request: ******** .. code-block:: javascript POST /v1/orders Headers: Content-Type: application/json X-Auth-Token: {token} Content: { "type":"key", "meta": { "name":"secretname", "algorithm": "AES", "bit_length": 256, "mode": "cbc", "payload_content_type":"application/octet-stream" } } .. _post_orders_response: Response: ********* .. code-block:: none 202 Accepted { "order_ref": "http://{barbican_host}/v1/orders/{order_uuid}" } .. _post_orders_response_attributes: Response Attributes ******************* +----------+---------+--------------------------------------------------------------+ | Name | Type | Description | +==========+=========+==============================================================+ | order_ref| string | Order reference | +----------+---------+--------------------------------------------------------------+ .. _post_orders_status_codes: HTTP Status Codes ***************** +------+-----------------------------------------------------------------------------+ | Code | Description | +======+=============================================================================+ | 202 | Successfully created an order | +------+-----------------------------------------------------------------------------+ | 400 | Bad Request | +------+-----------------------------------------------------------------------------+ | 401 | Invalid X-Auth-Token or the token doesn't have permissions to this resource | +------+-----------------------------------------------------------------------------+ | 415 | Unsupported Media Type | +------+-----------------------------------------------------------------------------+
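Because orders are fulfilled asynchronously, a client normally creates the order and then polls it until the backing secret is ready. The following is a minimal sketch using the Python ``requests`` library, illustrative only; ``BARBICAN_ENDPOINT`` and ``TOKEN`` are placeholder values.

.. code-block:: python

    # Minimal sketch: create a key-generation order, then poll it until
    # it leaves the PENDING state. Constants are placeholder values.
    import time
    import requests

    BARBICAN_ENDPOINT = "http://localhost:9311"
    TOKEN = "f9cf2d480ba3485f85bdb9d07a4959f1"
    headers = {"X-Auth-Token": TOKEN, "Content-Type": "application/json"}

    order = {
        "type": "key",
        "meta": {"name": "secretname", "algorithm": "AES",
                 "bit_length": 256, "mode": "cbc",
                 "payload_content_type": "application/octet-stream"},
    }
    resp = requests.post(BARBICAN_ENDPOINT + "/v1/orders",
                         headers=headers, json=order)
    resp.raise_for_status()
    order_ref = resp.json()["order_ref"]

    # Poll the order; once fulfilled, it carries a secret_ref.
    while True:
        status = requests.get(order_ref, headers=headers).json()
        if status["status"] != "PENDING":
            break
        time.sleep(1)
    print(status["status"], status.get("secret_ref"))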
.. _get_unique_order_metadata: GET /v1/orders/{uuid} ##################### Retrieves an order's metadata. .. _get_unique_order_request: Request: ******** .. code-block:: javascript GET /v1/orders/{order_uuid} Headers: Accept: application/json X-Auth-Token: {token} Parameters ********** None .. _get_unique_order_response: Response: ********* .. code-block:: javascript 200 Success { "created": "2015-10-20T18:49:02", "creator_id": "40540f978fbd45c1af18910e3e02b63f", "meta": { "algorithm": "AES", "bit_length": 256, "expiration": null, "mode": "cbc", "name": "secretname", "payload_content_type": "application/octet-stream" }, "order_ref": "http://localhost:9311/v1/orders/5443d349-fe0c-4bfd-bd9d-99c4a9770638", "secret_ref": "http://localhost:9311/v1/secrets/16f8d4f3-d3dd-4160-a5bd-8e5095a42613", "status": "ACTIVE", "sub_status": "Unknown", "sub_status_message": "Unknown", "type": "key", "updated": "2015-10-20T18:49:02" } .. _get_unique_order_response_attributes: Response Attributes ******************* +--------------------+---------+----------------------------------------------------+ | Name | Type | Description | +====================+=========+====================================================+ | created | string | Timestamp in ISO8601 format of when the order was | | | | created | +--------------------+---------+----------------------------------------------------+ | creator_id | string | Keystone Id of the user who created the order | +--------------------+---------+----------------------------------------------------+ | meta | dict | Secret metadata used for informational purposes | +--------------------+---------+----------------------------------------------------+ | order_ref | string | Order href associated with the order | +--------------------+---------+----------------------------------------------------+ | secret_ref | string | Secret href associated with the order | +--------------------+---------+----------------------------------------------------+ | status | string | Current status of the order | +--------------------+---------+----------------------------------------------------+ | sub_status | string | Metadata associated with the order | +--------------------+---------+----------------------------------------------------+ | sub_status_message | string | Metadata associated with the order | +--------------------+---------+----------------------------------------------------+ | type | string | Indicates the type of order | +--------------------+---------+----------------------------------------------------+ | updated | string | Timestamp in ISO8601 format of the last time the | | | | order was updated. | +--------------------+---------+----------------------------------------------------+ ..
_get_unique_orders_status_codes: HTTP Status Codes ***************** +------+-----------------------------------------------------------------------------+ | Code | Description | +======+=============================================================================+ | 200 | Successfully retrieved the order | +------+-----------------------------------------------------------------------------+ | 400 | Bad Request | +------+-----------------------------------------------------------------------------+ | 401 | Invalid X-Auth-Token or the token doesn't have permissions to this resource | +------+-----------------------------------------------------------------------------+ | 404 | Not Found | +------+-----------------------------------------------------------------------------+ .. _delete_unique_order: DELETE /v1/orders/{uuid} ######################## Delete an order .. _delete_order_request: Request: ******** .. code-block:: javascript DELETE /v1/orders/{order_uuid} Headers: X-Auth-Token: {token} Parameters ********** None .. _delete_order_response: Response: ********* .. code-block:: javascript 204 Success .. _delete_order_status_codes: HTTP Status Codes ***************** +------+-----------------------------------------------------------------------------+ | Code | Description | +======+=============================================================================+ | 204 | Successfully deleted the order | +------+-----------------------------------------------------------------------------+ | 400 | Bad Request | +------+-----------------------------------------------------------------------------+ | 401 | Invalid X-Auth-Token or the token doesn't have permissions to this resource | +------+-----------------------------------------------------------------------------+ | 404 | Not Found | +------+-----------------------------------------------------------------------------+ barbican-6.0.0/doc/source/api/reference/consumers.rst0000666000175100017510000002631413245511001022626 0ustar zuulzuul00000000000000************************* Consumers API - Reference ************************* GET {container_ref}/consumers ############################# Lists a container's consumers. The list of consumers can be filtered by the parameters passed in via the URL. .. _consumer_parameters: Parameters ********** +----------+---------+----------------------------------------------------------------+ | Name | Type | Description | +==========+=========+================================================================+ | offset | integer | The starting index within the total list of the consumers that | | | | you would like to retrieve. | +----------+---------+----------------------------------------------------------------+ | limit | integer | The maximum number of records to return (up to 100). The | | | | default limit is 10. | +----------+---------+----------------------------------------------------------------+ Request: ******** .. code-block:: javascript GET {container_ref}/consumers Headers: X-Auth-Token: Response: ********* .. 
code-block:: javascript 200 OK { "total": 3, "consumers": [ { "status": "ACTIVE", "URL": "consumerurl", "updated": "2015-10-15T21:06:33.123878", "name": "consumername", "created": "2015-10-15T21:06:33.123872" }, { "status": "ACTIVE", "URL": "consumerURL2", "updated": "2015-10-15T21:17:08.092416", "name": "consumername2", "created": "2015-10-15T21:17:08.092408" }, { "status": "ACTIVE", "URL": "consumerURL3", "updated": "2015-10-15T21:21:29.970370", "name": "consumername3", "created": "2015-10-15T21:21:29.970365" } ] } Request: ******** .. code-block:: javascript GET {container_ref}/consumers?limit=1&offset=1 Headers: X-Auth-Token: Response: ********* .. code-block:: javascript { "total": 3, "next": "http://localhost:9311/v1/containers/{container_ref}/consumers?limit=1&offset=2", "consumers": [ { "status": "ACTIVE", "URL": "consumerURL2", "updated": "2015-10-15T21:17:08.092416", "name": "consumername2", "created": "2015-10-15T21:17:08.092408" } ], "previous": "http://localhost:9311/v1/containers/{container_ref}/consumers?limit=1&offset=0" } .. _consumer_response_attributes: Response Attributes ******************* +-----------+---------+----------------------------------------------------------------+ | Name | Type | Description | +===========+=========+================================================================+ | consumers | list | Contains a list of dictionaries filled with consumer metadata. | +-----------+---------+----------------------------------------------------------------+ | total | integer | The total number of consumers available to the user. | +-----------+---------+----------------------------------------------------------------+ | next | string | A HATEOAS URL to retrieve the next set of consumers based on | | | | the offset and limit parameters. This attribute is only | | | | available when the total number of consumers is greater than | | | | offset and limit parameter combined. | +-----------+---------+----------------------------------------------------------------+ | previous | string | A HATEOAS URL to retrieve the previous set of consumers based | | | | on the offset and limit parameters. This attribute is only | | | | available when the request offset is greater than 0. | +-----------+---------+----------------------------------------------------------------+ .. _consumer_status_codes: HTTP Status Codes ***************** +------+-----------------------------------------------------------------------------+ | Code | Description | +======+=============================================================================+ | 200 | OK. | +------+-----------------------------------------------------------------------------+ | 401 | Invalid X-Auth-Token or the token doesn't have permissions to this resource.| +------+-----------------------------------------------------------------------------+ | 403 | Forbidden. The user has been authenticated, but is not authorized to | | | list consumers. This can be based on the user's role. | +------+-----------------------------------------------------------------------------+
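The paginated responses above can be walked by following the ``next`` link until it is absent. Below is a minimal sketch using the Python ``requests`` library, illustrative only; ``BARBICAN_ENDPOINT``, ``TOKEN``, and ``CONTAINER_REF`` are placeholder values.

.. code-block:: python

    # Minimal sketch: walk a container's consumer list page by page using
    # the `next` link. Constants below are placeholder values.
    import requests

    BARBICAN_ENDPOINT = "http://localhost:9311"
    TOKEN = "f9cf2d480ba3485f85bdb9d07a4959f1"
    CONTAINER_REF = (BARBICAN_ENDPOINT
                     + "/v1/containers/74bbd3fd-9ba8-42ee-b87e-2eecf10e47b9")

    url = CONTAINER_REF + "/consumers?limit=2"
    headers = {"X-Auth-Token": TOKEN}
    while url:
        page = requests.get(url, headers=headers).json()
        for consumer in page["consumers"]:
            print(consumer["name"], consumer["URL"])
        url = page.get("next")  # absent on the last page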
.. _post_consumers: POST {container_ref}/consumers ############################## Creates a consumer. Attributes ********** +----------------------------+---------+----------------------------------------------+------------+ | Attribute Name | Type | Description | Default | +============================+=========+==============================================+============+ | name | string | The name of the consumer set by the user. | None | +----------------------------+---------+----------------------------------------------+------------+ | url | string | The URL for the user or service using the | None | | | | container. | | +----------------------------+---------+----------------------------------------------+------------+ Request: ******** .. code-block:: javascript POST {container_ref}/consumers Headers: X-Auth-Token: Content-Type: application/json Content: { "name": "ConsumerName", "url": "ConsumerURL" } Response: ********* .. code-block:: javascript 200 OK { "status": "ACTIVE", "updated": "2015-10-15T17:56:18.626724", "name": "container name", "consumers": [ { "URL": "consumerURL", "name": "consumername" } ], "created": "2015-10-15T17:55:44.380002", "container_ref": "http://localhost:9311/v1/containers/74bbd3fd-9ba8-42ee-b87e-2eecf10e47b9", "creator_id": "b17c815d80f946ea8505c34347a2aeba", "secret_refs": [ { "secret_ref": "http://localhost:9311/v1/secrets/b61613fc-be53-4696-ac01-c3a789e87973", "name": "private_key" } ], "type": "generic" } HTTP Status Codes ***************** +------+-----------------------------------------------------------------------------+ | Code | Description | +======+=============================================================================+ | 200 | OK. | +------+-----------------------------------------------------------------------------+ | 400 | Bad Request. | +------+-----------------------------------------------------------------------------+ | 401 | Invalid X-Auth-Token or the token doesn't have permissions to this resource.| +------+-----------------------------------------------------------------------------+ | 403 | Forbidden. The user has been authenticated, but is not authorized to | | | create a consumer. This can be based on the user's role or the | | | project's quota. | +------+-----------------------------------------------------------------------------+ .. _delete_consumer: DELETE {container_ref}/consumers ################################ Deletes a consumer. Attributes ********** +----------------------------+---------+----------------------------------------------+------------+ | Attribute Name | Type | Description | Default | +============================+=========+==============================================+============+ | name | string | The name of the consumer set by the user. | None | +----------------------------+---------+----------------------------------------------+------------+ | URL | string | The URL for the user or service using the | None | | | | container. | | +----------------------------+---------+----------------------------------------------+------------+ Request: ******** .. code-block:: javascript DELETE {container_ref}/consumers Headers: X-Auth-Token: Content-Type: application/json Content: { "name": "ConsumerName", "URL": "ConsumerURL" } Response: ********* ..
code-block:: javascript 200 OK { "status": "ACTIVE", "updated": "2015-10-15T17:56:18.626724", "name": "container name", "consumers": [], "created": "2015-10-15T17:55:44.380002", "container_ref": "http://localhost:9311/v1/containers/74bbd3fd-9ba8-42ee-b87e-2eecf10e47b9", "creator_id": "b17c815d80f946ea8505c34347a2aeba", "secret_refs": [ { "secret_ref": "http://localhost:9311/v1/secrets/b61613fc-be53-4696-ac01-c3a789e87973", "name": "private_key" } ], "type": "generic" } HTTP Status Codes ***************** +------+-----------------------------------------------------------------------------+ | Code | Description | +======+=============================================================================+ | 200 | OK. | +------+-----------------------------------------------------------------------------+ | 400 | Bad Request. | +------+-----------------------------------------------------------------------------+ | 401 | Invalid X-Auth-Token or the token doesn't have permissions to this resource.| +------+-----------------------------------------------------------------------------+ | 403 | Forbidden. The user has been authenticated, but is not authorized to | | | delete a consumer. This can be based on the user's role. | +------+-----------------------------------------------------------------------------+ | 404 | Consumer Not Found. | +------+-----------------------------------------------------------------------------+ barbican-6.0.0/doc/source/api/reference/containers.rst0000666000175100017510000003715313245511001022760 0ustar zuulzuul00000000000000************************** Containers API - Reference ************************** GET /v1/containers ################## Lists a project's containers. Returned containers will be ordered by creation date; oldest to newest. Parameters ********** +--------+---------+------------------------------------------------------------+ | Name | Type | Description | +========+=========+============================================================+ | offset | integer | The starting index within the total list of the containers | | | | that you would like to retrieve. | +--------+---------+------------------------------------------------------------+ | limit | integer | The maximum number of containers to return (up to 100). | | | | The default limit is 10. | +--------+---------+------------------------------------------------------------+ Response Attributes ******************* +------------+---------+--------------------------------------------------------+ | Name | Type | Description | +============+=========+========================================================+ | containers | list | Contains a list of dictionaries filled with container | | | | data | +------------+---------+--------------------------------------------------------+ | total | integer | The total number of containers available to the user | +------------+---------+--------------------------------------------------------+ | next | string | A HATEOAS URL to retrieve the next set of containers | | | | based on the offset and limit parameters. This | | | | attribute is only available when the total number of | | | | containers is greater than offset and limit parameter | | | | combined. | +------------+---------+--------------------------------------------------------+ | previous | string | A HATEOAS URL to retrieve the previous set of | | | | containers based on the offset and limit parameters. | | | | This attribute is only available when the request | | | | offset is greater than 0. 
| +------------+---------+--------------------------------------------------------+ Request: ******** .. code-block:: javascript GET /v1/containers Headers: X-Auth-Token: Response: ********* .. code-block:: javascript { "containers": [ { "consumers": [], "container_ref": "https://{barbican_host}/v1/containers/{uuid}", "created": "2015-03-26T21:10:45.417835", "name": "container name", "secret_refs": [ { "name": "private_key", "secret_ref": "https://{barbican_host}/v1/secrets/{uuid}" } ], "status": "ACTIVE", "type": "generic", "updated": "2015-03-26T21:10:45.417835" } ], "total": 1 } HTTP Status Codes ***************** +------+-----------------------------------------------------------------------------+ | Code | Description | +======+=============================================================================+ | 200 | Successful Request | +------+-----------------------------------------------------------------------------+ | 401 | Invalid X-Auth-Token or the token doesn't have permissions to this resource | +------+-----------------------------------------------------------------------------+ GET /v1/containers/{uuid} ######################### Retrieves a single container. Response Attributes ******************* +-------------+--------+---------------------------------------------------------+ | Name | Type | Description | +=============+========+=========================================================+ | name | string | (optional) Human readable name for the container | +-------------+--------+---------------------------------------------------------+ | type | string | Type of container. Options: generic, rsa, certificate | +-------------+--------+---------------------------------------------------------+ | secret_refs | list | A list of dictionaries containing references to secrets | +-------------+--------+---------------------------------------------------------+ Request: ******** .. code-block:: javascript GET /v1/containers/{uuid} Headers: X-Auth-Token: Response: ********* .. code-block:: javascript { "type": "generic", "status": "ACTIVE", "name": "container name", "consumers": [], "container_ref": "https://{barbican_host}/v1/containers/{uuid}", "secret_refs": [ { "name": "private_key", "secret_ref": "https://{barbican_host}/v1/secrets/{uuid}" } ], "created": "2015-03-26T21:10:45.417835", "updated": "2015-03-26T21:10:45.417835" } HTTP Status Codes ***************** +------+-----------------------------------------------------------------------------+ | Code | Description | +======+=============================================================================+ | 200 | Successful Request | +------+-----------------------------------------------------------------------------+ | 401 | Invalid X-Auth-Token or the token doesn't have permissions to this resource | +------+-----------------------------------------------------------------------------+ | 404 | Container not found or unavailable | +------+-----------------------------------------------------------------------------+ POST /v1/containers ################### Creates a container. There are three different types of containers that can be created: generic, rsa, and certificate. **Generic** This type of container holds any number of references to secrets. Each secret reference is accompanied by a name. Unlike other container types, no specific restrictions are enforced on the name attribute of the contents. **RSA** This type of container is designed to hold references to only three different secrets.
These secrets are identified by their required names: public_key, private_key, and private_key_passphrase. **Certificate** This type of container is designed to hold a reference to a certificate and, optionally, private_key, private_key_passphrase, and intermediates. Request Attributes ****************** +-------------+--------+-----------------------------------------------------------+ | Name | Type | Description | +=============+========+===========================================================+ | name | string | (optional) Human readable name for identifying your | | | | container | +-------------+--------+-----------------------------------------------------------+ | type | string | Type of container. Options: generic, rsa, certificate | +-------------+--------+-----------------------------------------------------------+ | secret_refs | list | A list of dictionaries containing references to secrets | +-------------+--------+-----------------------------------------------------------+ Request: ******** .. code-block:: javascript POST /v1/containers Headers: X-Auth-Token: Content: { "type": "generic", "name": "container name", "secret_refs": [ { "name": "private_key", "secret_ref": "https://{barbican_host}/v1/secrets/{secret_uuid}" } ] } Response: ********* .. code-block:: javascript { "container_ref": "https://{barbican_host}/v1/containers/{container_uuid}" } HTTP Status Codes ***************** +------+-----------------------------------------------------------------------------+ | Code | Description | +======+=============================================================================+ | 201 | Successful creation of the container | +------+-----------------------------------------------------------------------------+ | 401 | Invalid X-Auth-Token or the token doesn't have permissions to this resource | +------+-----------------------------------------------------------------------------+ | 403 | Forbidden. The user has been authenticated, but is not authorized to | | | create a container. This can be based on the user's role or the | | | project's quota. | +------+-----------------------------------------------------------------------------+ DELETE /v1/containers/{uuid} ############################ Deletes a container. Request: ******** .. code-block:: javascript DELETE /v1/containers/{container_uuid} Headers: X-Auth-Token: Response: ********* .. code-block:: javascript 204 No Content HTTP Status Codes ***************** +------+-----------------------------------------------------------------------------+ | Code | Description | +======+=============================================================================+ | 204 | Successful deletion of a container | +------+-----------------------------------------------------------------------------+ | 401 | Invalid X-Auth-Token or the token doesn't have permissions to this resource | +------+-----------------------------------------------------------------------------+ | 404 | Container not found or unavailable | +------+-----------------------------------------------------------------------------+
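The create and delete calls above pair naturally in scripts and test fixtures. Below is a minimal sketch using the Python ``requests`` library, illustrative only; ``BARBICAN_ENDPOINT``, ``TOKEN``, and ``SECRET_REF`` are placeholder values, with ``SECRET_REF`` pointing at a secret that already exists.

.. code-block:: python

    # Minimal sketch: create a generic container that points at an
    # existing secret, then delete it. Constants are placeholder values.
    import requests

    BARBICAN_ENDPOINT = "http://localhost:9311"
    TOKEN = "f9cf2d480ba3485f85bdb9d07a4959f1"
    SECRET_REF = (BARBICAN_ENDPOINT
                  + "/v1/secrets/b61613fc-be53-4696-ac01-c3a789e87973")
    headers = {"X-Auth-Token": TOKEN, "Content-Type": "application/json"}

    body = {"type": "generic", "name": "container name",
            "secret_refs": [{"name": "private_key",
                             "secret_ref": SECRET_REF}]}
    resp = requests.post(BARBICAN_ENDPOINT + "/v1/containers",
                         headers=headers, json=body)
    resp.raise_for_status()
    container_ref = resp.json()["container_ref"]

    # Clean up: a successful delete returns 204 No Content. Deleting the
    # container does not delete the secrets it references.
    requests.delete(container_ref, headers={"X-Auth-Token": TOKEN})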
POST /v1/containers/{container_uuid}/secrets ############################################ Adds a secret to an existing container. This is only supported on generic containers. Request Attributes ****************** +------------+--------+------------------------------------------------------------+ | Name | Type | Description | +============+========+============================================================+ | name | string | (optional) Human readable name for identifying your secret | | | | within the container. | +------------+--------+------------------------------------------------------------+ | secret_ref | uri | (required) Full URI reference to an existing secret. | +------------+--------+------------------------------------------------------------+ Request: ******** .. code-block:: javascript POST /v1/containers/{container_uuid}/secrets Headers: X-Project-Id: {project_id} Content: { "name": "private_key", "secret_ref": "https://{barbican_host}/v1/secrets/{secret_uuid}" } Response: ********* .. code-block:: javascript { "container_ref": "https://{barbican_host}/v1/containers/{container_uuid}" } Note that the 'container_uuid' in the request is the same as the one returned in the response. HTTP Status Codes ***************** In general, error codes produced by the containers POST call apply here as well, especially with regard to the secret references that can be provided. +------+-----------------------------------------------------------------------------+ | Code | Description | +======+=============================================================================+ | 201 | Successful update of the container | +------+-----------------------------------------------------------------------------+ | 400 | Missing secret_ref | +------+-----------------------------------------------------------------------------+ | 401 | Invalid X-Auth-Token or the token doesn't have permissions to this resource | +------+-----------------------------------------------------------------------------+ | 403 | Forbidden. The user has been authenticated, but is not authorized to | | | add the secret to the specified container. This can be based on the user's | | | role or the project's quota. | +------+-----------------------------------------------------------------------------+ DELETE /v1/containers/{container_uuid}/secrets ############################################## Removes a secret from a container. This is only supported on generic containers. Request Attributes ****************** +------------+--------+------------------------------------------------------------+ | Name | Type | Description | +============+========+============================================================+ | name | string | (optional) Human readable name for identifying your secret | | | | within the container. | +------------+--------+------------------------------------------------------------+ | secret_ref | uri | (required) Full URI reference to an existing secret. | +------------+--------+------------------------------------------------------------+ Request: ******** .. code-block:: javascript DELETE /v1/containers/{container_uuid}/secrets Headers: X-Project-Id: {project_id} Content: { "name": "private key", "secret_ref": "https://{barbican_host}/v1/secrets/{secret_uuid}" } Response: ********* .. code-block:: javascript 204 No Content HTTP Status Codes ***************** +------+-----------------------------------------------------------------------------+ | Code | Description | +======+=============================================================================+ | 204 | Successful removal of the secret from the container.
| +------+-----------------------------------------------------------------------------+ | 400 | Missing secret_ref | +------+-----------------------------------------------------------------------------+ | 401 | Invalid X-Auth-Token or the token doesn't have permissions to this resource | +------+-----------------------------------------------------------------------------+ | 403 | Forbidden. The user has been authenticated, but is not authorized to | | | remove the secret from the specified container. This can be based on the | | | user's role or the project's quota. | +------+-----------------------------------------------------------------------------+ | 404 | Specified secret_ref is not found in the container. | +------+-----------------------------------------------------------------------------+ barbican-6.0.0/doc/source/api/reference/secrets.rst0000666000175100017510000005405313245511001022261 0ustar zuulzuul00000000000000*********************** Secrets API - Reference *********************** GET /v1/secrets ############### Lists a project's secrets. The list of secrets can be filtered by the parameters passed in via the URL. The actual secret payload data will not be listed here. Clients must instead make a separate call to retrieve the secret payload data for each individual secret. .. _secret_parameters: Parameters ********** +-------------+---------+-----------------------------------------------------------------+ | Name | Type | Description | +=============+=========+=================================================================+ | offset | integer | The starting index within the total list of the secrets that | | | | you would like to retrieve. | +-------------+---------+-----------------------------------------------------------------+ | limit | integer | The maximum number of records to return (up to 100). The | | | | default limit is 10. | +-------------+---------+-----------------------------------------------------------------+ | name | string | Selects all secrets with name similar to this value. | +-------------+---------+-----------------------------------------------------------------+ | alg | string | Selects all secrets with algorithm similar to this value. | +-------------+---------+-----------------------------------------------------------------+ | mode | string | Selects all secrets with mode similar to this value. | +-------------+---------+-----------------------------------------------------------------+ | bits | integer | Selects all secrets with bit_length equal to this value. | +-------------+---------+-----------------------------------------------------------------+ | secret_type | string | Selects all secrets with secret_type equal to this value. | +-------------+---------+-----------------------------------------------------------------+ | acl_only | boolean | Selects all secrets with an ACL that contains the user. | | | | Project scope is ignored. | +-------------+---------+-----------------------------------------------------------------+ | created | string | Date filter to select all secrets with `created` matching the | | | | specified criteria. See Date Filters below for more detail. | +-------------+---------+-----------------------------------------------------------------+ | updated | string | Date filter to select all secrets with `updated` matching the | | | | specified criteria. See Date Filters below for more detail. 
| +-------------+---------+-----------------------------------------------------------------+ | expiration | string | Date filter to select all secrets with `expiration` matching | | | | the specified criteria. See Date Filters below for more detail. | +-------------+---------+-----------------------------------------------------------------+ | sort | string | Determines the sorted order of the returned list. See Sorting | | | | below for more detail. | +-------------+---------+-----------------------------------------------------------------+ Date Filters: ************* The values for the ``created``, ``updated``, and ``expiration`` parameters are comma-separated lists of time stamps in ISO 8601 format. The time stamps can be prefixed with any of these comparison operators: ``gt:`` (greater-than), ``gte:`` (greater-than-or-equal), ``lt:`` (less-than), ``lte:`` (less-than-or-equal). For example, to get a list of secrets that will expire in January of 2020: .. code-block:: none GET /v1/secrets?expiration=gte:2020-01-01T00:00:00,lt:2020-02-01T00:00:00 Sorting: ******** The value of the ``sort`` parameter is a comma-separated list of sort keys. Supported sort keys include ``created``, ``expiration``, ``mode``, ``name``, ``secret_type``, ``status``, and ``updated``. Each sort key may also include a direction. Supported directions are ``:asc`` for ascending and ``:desc`` for descending. The service will use ``:asc`` for every key that does not include a direction. For example, to sort the list from most recently created to oldest: .. code-block:: none GET /v1/secrets?sort=created:desc Request: ******** .. code-block:: javascript GET /v1/secrets?offset=1&limit=2&sort=created Headers: Accept: application/json X-Auth-Token: {keystone_token} (or X-Project-Id: {project id}) Response: ********* .. code-block:: javascript { "next": "http://{barbican_host}:9311/v1/secrets?limit=2&offset=3", "previous": "http://{barbican_host}:9311/v1/secrets?limit=2&offset=0", "secrets": [ { "algorithm": null, "bit_length": null, "content_types": { "default": "application/octet-stream" }, "created": "2015-04-07T03:37:19.805835", "creator_id": "3a7e3d2421384f56a8fb6cf082a8efab", "expiration": null, "mode": null, "name": "opaque octet-stream base64", "secret_ref": "http://{barbican_host}:9311/v1/secrets/{uuid}", "secret_type": "opaque", "status": "ACTIVE", "updated": "2015-04-07T03:37:19.808337" }, { "algorithm": null, "bit_length": null, "content_types": { "default": "application/octet-stream" }, "created": "2015-04-07T03:41:02.184159", "creator_id": "3a7e3d2421384f56a8fb6cf082a8efab", "expiration": null, "mode": null, "name": "opaque random octet-stream base64", "secret_ref": "http://{barbican_host}:9311/v1/secrets/{uuid}", "secret_type": "opaque", "status": "ACTIVE", "updated": "2015-04-07T03:41:02.187823" } ], "total": 5 } .. _secret_response_attributes: Response Attributes ******************* +----------+---------+--------------------------------------------------------------+ | Name | Type | Description | +==========+=========+==============================================================+ | secrets | list | Contains a list of secrets. The attributes in the secret | | | | objects are the same as for an individual secret. | +----------+---------+--------------------------------------------------------------+ | total | integer | The total number of secrets available to the user. 
| +----------+---------+--------------------------------------------------------------+ | next | string | A HATEOAS URL to retrieve the next set of secrets based on | | | | the offset and limit parameters. This attribute is only | | | | available when the total number of secrets is greater than | | | | offset and limit parameter combined. | +----------+---------+--------------------------------------------------------------+ | previous | string | A HATEOAS URL to retrieve the previous set of secrets based | | | | on the offset and limit parameters. This attribute is only | | | | available when the request offset is greater than 0. | +----------+---------+--------------------------------------------------------------+ .. _secret_status_codes: HTTP Status Codes ***************** +------+-----------------------------------------------------------------------------+ | Code | Description | +======+=============================================================================+ | 200 | Successful Request | +------+-----------------------------------------------------------------------------+ | 401 | Invalid X-Auth-Token or the token doesn't have permissions to this resource | +------+-----------------------------------------------------------------------------+ .. _post_secrets: POST /v1/secrets ################ Creates a Secret entity. If the ``payload`` attribute is not included in the request, then only the metadata for the secret is created, and a subsequent PUT request is required. Attributes ********** +----------------------------+---------+-----------------------------------------------------+------------+ | Attribute Name | Type | Description | Default | +============================+=========+=====================================================+============+ | name | string | (optional) The name of the secret set by the | None | | | | user. | | +----------------------------+---------+-----------------------------------------------------+------------+ | expiration | string | (optional) This is a UTC timestamp in ISO | None | | | | 8601 format ``YYYY-MM-DDTHH:MM:SSZ``. If | | | | | set, the secret will not be available after | | | | | this time. | | +----------------------------+---------+-----------------------------------------------------+------------+ | algorithm | string | (optional) Metadata provided by a user or | None | | | | system for informational purposes. | | +----------------------------+---------+-----------------------------------------------------+------------+ | bit_length | integer | (optional) Metadata provided by a user or | None | | | | system for informational purposes. Value | | | | | must be greater than zero. | | +----------------------------+---------+-----------------------------------------------------+------------+ | mode | string | (optional) Metadata provided by a user or | None | | | | system for informational purposes. | | +----------------------------+---------+-----------------------------------------------------+------------+ | payload | string | (optional) The secret's data to be stored. | None | | | | ``payload_content_type`` must also be | | | | | supplied if payload is included. | | +----------------------------+---------+-----------------------------------------------------+------------+ | payload_content_type | string | (optional) (required if payload is included) | None | | | | The media type for the content of the | | | | | payload. 
For more information see | | | | | :doc:`Secret Types <../reference/secret_types>` | | +----------------------------+---------+-----------------------------------------------------+------------+ | payload_content_encoding | string | (optional) (required if payload is encoded) | None | | | | The encoding used for the payload to be able | | | | | to include it in the JSON request. | | | | | Currently only ``base64`` is supported. | | +----------------------------+---------+-----------------------------------------------------+------------+ | secret_type | string | (optional) Used to indicate the type of | ``opaque`` | | | | secret being stored. For more information | | | | | see :doc:`Secret Types <../reference/secret_types>` | | +----------------------------+---------+-----------------------------------------------------+------------+ Request: ******** .. code-block:: javascript POST /v1/secrets Headers: Content-Type: application/json X-Auth-Token: Content: { "name": "AES key", "expiration": "2015-12-28T19:14:44.180394", "algorithm": "aes", "bit_length": 256, "mode": "cbc", "payload": "YmVlcg==", "payload_content_type": "application/octet-stream", "payload_content_encoding": "base64" } Response: ********* .. code-block:: javascript 201 Created { "secret_ref": "https://{barbican_host}/v1/secrets/{secret_uuid}" } HTTP Status Codes ***************** +------+-----------------------------------------------------------------------------+ | Code | Description | +======+=============================================================================+ | 201 | Successfully created a Secret | +------+-----------------------------------------------------------------------------+ | 400 | Bad Request | +------+-----------------------------------------------------------------------------+ | 401 | Invalid X-Auth-Token or the token doesn't have permissions to this resource | +------+-----------------------------------------------------------------------------+ | 403 | Forbidden. The user has been authenticated, but is not authorized to | | | create a secret. This can be based on the user's role or the | | | project's quota. | +------+-----------------------------------------------------------------------------+ | 415 | Unsupported media-type | +------+-----------------------------------------------------------------------------+ GET /v1/secrets/{uuid} ###################### Retrieves a secret's metadata. Request: ***************** .. code-block:: javascript GET /v1/secrets/{uuid} Headers: Accept: application/json X-Auth-Token: {token} (or X-Project-Id: {project_id}) Response: ****************** .. code-block:: javascript 200 OK { "status": "ACTIVE", "created": "2015-03-23T20:46:51.650515", "updated": "2015-03-23T20:46:51.654116", "expiration": "2015-12-28T19:14:44.180394", "algorithm": "aes", "bit_length": 256, "mode": "cbc", "name": "AES key", "secret_ref": "https://{barbican_host}/v1/secrets/{secret_uuid}", "secret_type": "opaque", "content_types": { "default": "application/octet-stream" } } Payload Request: **************** .. warning:: DEPRECATION WARNING: Previous releases of the API allowed the payload to be retrieved from this same endpoint by changing the Accept header to be one of the values listed in the ``content_types`` attribute of the Secret metadata. This was found to be problematic in some situations, so new applications should make use of the :ref:`/v1/secrets/{uuid}/payload ` endpoint instead. .. 
code-block:: javascript GET /v1/secrets/{uuid} Headers: Accept: application/octet-stream X-Auth-Token: Payload Response: ***************** .. code-block:: javascript 200 OK beer HTTP Status Codes ***************** +------+-----------------------------------------------------------------------------+ | Code | Description | +======+=============================================================================+ | 200 | Successful request | +------+-----------------------------------------------------------------------------+ | 401 | Invalid X-Auth-Token or the token doesn't have permissions to this resource | +------+-----------------------------------------------------------------------------+ | 404 | Not Found | +------+-----------------------------------------------------------------------------+ | 406 | Not Acceptable | +------+-----------------------------------------------------------------------------+ .. _put_secrets: PUT /v1/secrets/{uuid} ###################### Add the payload to an existing metadata-only secret, such as one made by sending a POST /v1/secrets request that does not include the ``payload`` attribute. .. note:: This action can only be done for a secret that doesn't have a payload. Headers ******* +------------------+-----------------------------------------------------------+------------+ | Name | Description | Default | +==================+===========================================================+============+ | Content-Type | Corresponds with the payload_content_type | text/plain | | | attribute of a normal secret creation request. | | +------------------+-----------------------------------------------------------+------------+ | Content-Encoding | (optional) Corresponds with the payload_content_encoding | None | | | attribute of a normal secret creation request. | | +------------------+-----------------------------------------------------------+------------+ Request: ******** .. code-block:: javascript PUT /v1/secrets/{uuid} Headers: X-Auth-Token: Content-Type: application/octet-stream Content-Encoding: base64 Content: YmxhaA== Response: ********* .. code-block:: javascript 204 No Content HTTP Status Codes ***************** +------+-----------------------------------------------------------------------------+ | Code | Description | +======+=============================================================================+ | 204 | Successful request | +------+-----------------------------------------------------------------------------+ | 401 | Invalid X-Auth-Token or the token doesn't have permissions to this resource | +------+-----------------------------------------------------------------------------+ | 404 | Not Found | +------+-----------------------------------------------------------------------------+ .. _delete_secrets: DELETE /v1/secrets/{uuid} ######################### Delete a secret by uuid Request: ******** .. code-block:: javascript DELETE /v1/secrets/{uuid} Headers: X-Auth-Token: Response: ********* .. 
code-block:: javascript 204 No Content HTTP Status Codes ***************** +------+-----------------------------------------------------------------------------+ | Code | Description | +======+=============================================================================+ | 204 | Successful request | +------+-----------------------------------------------------------------------------+ | 401 | Invalid X-Auth-Token or the token doesn't have permissions to this resource | +------+-----------------------------------------------------------------------------+ | 404 | Not Found | +------+-----------------------------------------------------------------------------+ .. _secret_payload: GET /v1/secrets/{uuid}/payload ############################## Retrieve a secret's payload Accept Header Options: ********************** When making a request for a secret's payload, you must set the accept header to one of the values listed in the ``content_types`` attribute of a secret's metadata. Request: ******** .. code-block:: javascript GET /v1/secrets/{uuid}/payload Headers: Accept: text/plain X-Auth-Token: Response: ********* .. code-block:: javascript 200 OK beer HTTP Status Codes ***************** +------+-----------------------------------------------------------------------------+ | Code | Description | +======+=============================================================================+ | 200 | Successful request | +------+-----------------------------------------------------------------------------+ | 401 | Invalid X-Auth-Token or the token doesn't have permissions to this resource | +------+-----------------------------------------------------------------------------+ | 404 | Not Found | +------+-----------------------------------------------------------------------------+ | 406 | Not Acceptable | +------+-----------------------------------------------------------------------------+ barbican-6.0.0/doc/source/api/reference/quotas.rst0000666000175100017510000004771013245511001022127 0ustar zuulzuul00000000000000********************** Quotas API - Reference ********************** GET /v1/quotas ############## Get the effective quotas for the project of the requester. The project id of the requester is derived from the authentication token provided in the X-Auth-Token header. .. _get_quotas_request: Request/Response: ***************** .. code-block:: javascript Request: GET /v1/quotas Headers: X-Auth-Token: Accept: application/json Response: HTTP/1.1 200 OK Content-Type: application/json { "quotas": { "secrets": 10, "orders": 20, "containers": 10, "consumers": -1, "cas": 5 } } .. _get_quotas_response_attributes: Response Attributes ******************* +------------+---------+--------------------------------------------------------------+ | Name | Type | Description | +============+=========+==============================================================+ | quotas | dict | Contains a dictionary with quota information | +------------+---------+--------------------------------------------------------------+ | secrets | integer | Contains the effective quota value of the current project | | | | for the secret resource. | +------------+---------+--------------------------------------------------------------+ | orders | integer | Contains the effective quota value of the current project | | | | for the orders resource. 
|
+------------+---------+--------------------------------------------------------------+
| containers | integer | Contains the effective quota value of the current project    |
|            |         | for the containers resource.                                 |
+------------+---------+--------------------------------------------------------------+
| consumers  | integer | Contains the effective quota value of the current project    |
|            |         | for the consumers resource.                                  |
+------------+---------+--------------------------------------------------------------+
| cas        | integer | Contains the effective quota value of the current project    |
|            |         | for the CAs resource.                                        |
+------------+---------+--------------------------------------------------------------+

Effective quota values are interpreted as follows:

+-------+-----------------------------------------------------------------------------+
| Value | Description                                                                 |
+=======+=============================================================================+
| -1    | A negative value indicates the resource is unconstrained by a quota.        |
+-------+-----------------------------------------------------------------------------+
| 0     | A zero value indicates that the resource is disabled.                       |
+-------+-----------------------------------------------------------------------------+
| int   | A positive value indicates the maximum number of that resource that can be  |
|       | created for the current project.                                            |
+-------+-----------------------------------------------------------------------------+

.. _get_quotas_status_codes:

HTTP Status Codes
*****************

+------+-----------------------------------------------------------------------------+
| Code | Description                                                                 |
+======+=============================================================================+
| 200  | Successful Request                                                          |
+------+-----------------------------------------------------------------------------+
| 401  | Invalid X-Auth-Token or the token doesn't have permissions to this resource |
+------+-----------------------------------------------------------------------------+

.. _get_project_quotas:

GET /v1/project-quotas
######################

Gets a list of configured project quota records.  Paging is supported using
the optional parameters offset and limit.

.. _get_project_quotas_request:

Request/Response:
*****************

.. code-block:: javascript

    Request:

    GET /v1/project-quotas
    Headers:
        X-Auth-Token:
        Accept: application/json

    Response:

    200 OK
    Content-Type: application/json

    {
        "project_quotas": [
            {
                "project_id": "1234",
                "project_quotas": {
                    "secrets": 2000,
                    "orders": 0,
                    "containers": -1,
                    "consumers": null,
                    "cas": null
                }
            },
            {
                "project_id": "5678",
                "project_quotas": {
                    "secrets": 200,
                    "orders": 100,
                    "containers": -1,
                    "consumers": null,
                    "cas": null
                }
            }
        ],
        "total": 30
    }

.. _get_project_quotas_parameters:

Parameters
**********

+--------+---------+----------------------------------------------------------------+
| Name   | Type    | Description                                                    |
+========+=========+================================================================+
| offset | integer | The starting index within the total list of the project        |
|        |         | quotas that you would like to receive.                         |
+--------+---------+----------------------------------------------------------------+
| limit  | integer | The maximum number of records to return.                       |
+--------+---------+----------------------------------------------------------------+
.. _get_project_quotas_response_attributes:

Response Attributes
*******************

+----------------+---------+--------------------------------------------------------------+
| Name           | Type    | Description                                                  |
+================+=========+==============================================================+
| project_id     | string  | The UUID of a project with configured quota information.     |
+----------------+---------+--------------------------------------------------------------+
| project_quotas | dict    | Contains a dictionary with project quota information.        |
+----------------+---------+--------------------------------------------------------------+
| secrets        | integer | Contains the configured quota value of the listed project    |
|                |         | for the secret resource.                                     |
+----------------+---------+--------------------------------------------------------------+
| orders         | integer | Contains the configured quota value of the listed project    |
|                |         | for the orders resource.                                     |
+----------------+---------+--------------------------------------------------------------+
| containers     | integer | Contains the configured quota value of the listed project    |
|                |         | for the containers resource.                                 |
+----------------+---------+--------------------------------------------------------------+
| consumers      | integer | Contains the configured quota value of the listed project    |
|                |         | for the consumers resource.                                  |
+----------------+---------+--------------------------------------------------------------+
| cas            | integer | Contains the configured quota value of the listed project    |
|                |         | for the CAs resource.                                        |
+----------------+---------+--------------------------------------------------------------+
| total          | integer | The total number of configured project quota records.        |
+----------------+---------+--------------------------------------------------------------+
| next           | string  | A HATEOAS URL to retrieve the next set of quotas based on    |
|                |         | the offset and limit parameters. This attribute is only      |
|                |         | available when the total number of project quota records is  |
|                |         | greater than the offset and limit parameters combined.       |
+----------------+---------+--------------------------------------------------------------+
| previous       | string  | A HATEOAS URL to retrieve the previous set of quotas based   |
|                |         | on the offset and limit parameters. This attribute is only   |
|                |         | available when the request offset is greater than 0.         |
+----------------+---------+--------------------------------------------------------------+

Configured project quota values are interpreted as follows:

+-------+-----------------------------------------------------------------------------+
| Value | Description                                                                 |
+=======+=============================================================================+
| -1    | A negative value indicates the resource is unconstrained by a quota.        |
+-------+-----------------------------------------------------------------------------+
| 0     | A zero value indicates that the resource is disabled.                       |
+-------+-----------------------------------------------------------------------------+
| int   | A positive value indicates the maximum number of that resource that can be  |
|       | created for the current project.                                            |
+-------+-----------------------------------------------------------------------------+
| null  | A null value indicates that the default quota value for the resource        |
|       | will be used as the quota for this resource in the current project.         |
+-------+-----------------------------------------------------------------------------+
.. _get_project_quotas_status_codes:

HTTP Status Codes
*****************

+------+-----------------------------------------------------------------------------+
| Code | Description                                                                 |
+======+=============================================================================+
| 200  | Successful Request                                                          |
+------+-----------------------------------------------------------------------------+
| 401  | Invalid X-Auth-Token or the token doesn't have permissions to this resource |
+------+-----------------------------------------------------------------------------+

.. _get_project_quotas_uuid:

GET /v1/project-quotas/{uuid}
#############################

Retrieves a project's configured project quota information.

.. _get_project_quotas_uuid_request:

Request/Response:
*****************

.. code-block:: javascript

    Request:

    GET /v1/project-quotas/{uuid}
    Headers:
        X-Auth-Token:
        Accept: application/json

    Response:

    200 OK
    Content-Type: application/json

    {
        "project_quotas": {
            "secrets": 10,
            "orders": 20,
            "containers": -1,
            "consumers": 10,
            "cas": 5
        }
    }

.. _get_project_quotas_uuid_response_attributes:

Response Attributes
*******************

+----------------+---------+--------------------------------------------------------------+
| Name           | Type    | Description                                                  |
+================+=========+==============================================================+
| project_quotas | dict    | Contains a dictionary with project quota information.        |
+----------------+---------+--------------------------------------------------------------+
| secrets        | integer | Contains the configured quota value of the requested project |
|                |         | for the secret resource.                                     |
+----------------+---------+--------------------------------------------------------------+
| orders         | integer | Contains the configured quota value of the requested project |
|                |         | for the orders resource.                                     |
+----------------+---------+--------------------------------------------------------------+
| containers     | integer | Contains the configured quota value of the requested project |
|                |         | for the containers resource.                                 |
+----------------+---------+--------------------------------------------------------------+
| consumers      | integer | Contains the configured quota value of the requested project |
|                |         | for the consumers resource.                                  |
+----------------+---------+--------------------------------------------------------------+
| cas            | integer | Contains the configured quota value of the requested project |
|                |         | for the CAs resource.                                        |
+----------------+---------+--------------------------------------------------------------+

.. _get_project_quotas_uuid_status_codes:

HTTP Status Codes
*****************

+------+-----------------------------------------------------------------------------+
| Code | Description                                                                 |
+======+=============================================================================+
| 200  | Successful request                                                          |
+------+-----------------------------------------------------------------------------+
| 401  | Invalid X-Auth-Token or the token doesn't have permissions to this resource |
+------+-----------------------------------------------------------------------------+
| 404  | Not Found.  The requested project does not have any configured quotas.      |
+------+-----------------------------------------------------------------------------+

.. _put_project_quotas:

PUT /v1/project-quotas/{uuid}
#############################

Create or update the configured project quotas for the project with the
specified UUID.

.. _put_project_quotas_request:

Request/Response:
*****************
.. code-block:: javascript

    Request:

    PUT /v1/project-quotas/{uuid}
    Headers:
        X-Auth-Token:
        Content-Type: application/json

    Body:
    {
        "project_quotas": {
            "secrets": 50,
            "orders": 10,
            "containers": 20
        }
    }

    Response:

    204 No Content

.. _put_project_quotas_request_attributes:

Request Attributes
******************

+----------------+---------+----------------------------------------------+
| Attribute Name | Type    | Description                                  |
+================+=========+==============================================+
| project_quotas | dict    | A dictionary with project quota information. |
+----------------+---------+----------------------------------------------+
| secrets        | integer | The value to set for this project's secret   |
|                |         | quota.                                       |
+----------------+---------+----------------------------------------------+
| orders         | integer | The value to set for this project's order    |
|                |         | quota.                                       |
+----------------+---------+----------------------------------------------+
| containers     | integer | The value to set for this project's          |
|                |         | container quota.                             |
+----------------+---------+----------------------------------------------+
| consumers      | integer | The value to set for this project's          |
|                |         | consumer quota.                              |
+----------------+---------+----------------------------------------------+
| cas            | integer | The value to set for this project's          |
|                |         | CA quota.                                    |
+----------------+---------+----------------------------------------------+

Configured project quota values are specified as follows:

+-------+-----------------------------------------------------------------------------+
| Value | Description                                                                 |
+=======+=============================================================================+
| -1    | A negative value indicates the resource is unconstrained by a quota.        |
+-------+-----------------------------------------------------------------------------+
| 0     | A zero value indicates that the resource is disabled.                       |
+-------+-----------------------------------------------------------------------------+
| int   | A positive value indicates the maximum number of that resource that can be  |
|       | created for the specified project.                                          |
+-------+-----------------------------------------------------------------------------+
|       | If a value is not given for a resource, this indicates that the default     |
|       | quota should be used for that resource for the specified project.           |
+-------+-----------------------------------------------------------------------------+

.. _put_project_quotas_status_codes:

HTTP Status Codes
*****************

+------+-----------------------------------------------------------------------------+
| Code | Description                                                                 |
+======+=============================================================================+
| 204  | Successful request                                                          |
+------+-----------------------------------------------------------------------------+
| 400  | Bad Request                                                                 |
+------+-----------------------------------------------------------------------------+
| 401  | Invalid X-Auth-Token or the token doesn't have permissions to this resource |
+------+-----------------------------------------------------------------------------+

.. _delete_project_quotas:

DELETE /v1/project-quotas/{uuid}
################################

Delete the project quotas configuration for the project with the requested
UUID.  When the project quota configuration is deleted, the default quotas
will be used for the specified project.

.. _delete_project_request:

Request/Response:
*****************

..
code-block:: javascript Request: DELETE v1/project-quotas/{uuid} Headers: X-Auth-Token: Response: 204 No Content .. _delete_project_quotas_status_codes: HTTP Status Codes ***************** +------+-----------------------------------------------------------------------------+ | Code | Description | +======+=============================================================================+ | 204 | Successful request | +------+-----------------------------------------------------------------------------+ | 401 | Invalid X-Auth-Token or the token doesn't have permissions to this resource | +------+-----------------------------------------------------------------------------+ | 404 | Not Found | +------+-----------------------------------------------------------------------------+ barbican-6.0.0/doc/source/api/reference/secret_types.rst0000666000175100017510000002070713245511001023321 0ustar zuulzuul00000000000000************************ Secret Types - Reference ************************ Every secret in Barbican has a type. Secret types are used to describe different kinds of secret data that are stored in Barbican. The type for a particular secret is listed in the secret's metadata as the ``secret_type`` attribute. The possible secret types are: * ``symmetric`` - Used for storing byte arrays such as keys suitable for symmetric encryption. * ``public`` - Used for storing the public key of an asymmetric keypair. * ``private`` - Used for storing the private key of an asymmetric keypair. * ``passphrase`` - Used for storing plain text passphrases. * ``certificate`` - Used for storing cryptographic certificates such as X.509 certificates. * ``opaque`` - Used for backwards compatibility with previous versions of the API without typed secrets. New applications are encouraged to specify one of the other secret types. Symmetric ######### The ``symmetric`` secret type is used to store byte arrays of sensitive data, such as keys that are used for symmetric encryption. The content-type used with symmetric secrets is ``application/octet-stream``. When storing a symmetric secret with a single POST request, the data must be encoded so that it may be included inside the JSON body of the request. In this case, the content encoding of ``base64`` can be used. Example 1.1 *********** Create an encryption key for use in AES-256-CBC encryption and store it in Barbican. First, we'll see how this can be done in a single POST request from the command line using curl. .. code-block:: bash # Create an encryption_key file with 256 bits of random data dd bs=32 count=1 if=/dev/urandom of=encryption_key # Encode the contents of the encryption key using base64 encoding KEY_BASE64=$(base64 < encryption_key) # Send a request to store the key in Barbican curl -vv -H "X-Auth-Token: $TOKEN" -H 'Accept: application/json' \ -H 'Content-Type: application/json' \ -d '{"name": "AES encryption key", "secret_type": "symmetric", "payload": "'"$KEY_BASE64"'", "payload_content_type": "application/octet-stream", "payload_content_encoding": "base64", "algorithm": "AES", "bit_length": 256, "mode": "CBC"}' \ http://localhost:9311/v1/secrets | python -m json.tool This should return a reference (URI) for the secret that was created: .. code-block:: json { "secret_ref": "http://localhost:9311/v1/secrets/48d24158-b4b4-45b8-9669-d9f0ef793c23" } We can use this reference to retrieve the secret metadata: .. 
code-block:: bash

    curl -vv -H "X-Auth-Token: $TOKEN" -H 'Accept: application/json' \
    http://localhost:9311/v1/secrets/48d24158-b4b4-45b8-9669-d9f0ef793c23 |
    python -m json.tool

The metadata will list the available content-types for the symmetric
secret:

.. code-block:: json

    {
        "algorithm": "AES",
        "bit_length": 256,
        "content_types": {
            "default": "application/octet-stream"
        },
        "created": "2015-04-08T06:24:16.600393",
        "creator_id": "3a7e3d2421384f56a8fb6cf082a8efab",
        "expiration": null,
        "mode": "CBC",
        "name": "AES encryption key",
        "secret_ref": "http://localhost:9311/v1/secrets/48d24158-b4b4-45b8-9669-d9f0ef793c23",
        "secret_type": "symmetric",
        "status": "ACTIVE",
        "updated": "2015-04-08T06:24:16.614204"
    }

The ``content_types`` attribute describes the content-types that can be
used to retrieve the payload.  In this example, there is only the default
content type of ``application/octet-stream``.  We can use it to retrieve
the payload:

.. code-block:: bash

    # Retrieve the payload and save it to a file
    curl -vv -H "X-Auth-Token: $TOKEN" \
    -H 'Accept: application/octet-stream' \
    -o retrieved_key \
    http://localhost:9311/v1/secrets/48d24158-b4b4-45b8-9669-d9f0ef793c23/payload

The ``retrieved_key`` file now contains the byte array we started with.
Note that barbican returned the byte array in binary format, not base64.
This is because the ``payload_content_encoding`` is only used when
submitting the secret to barbican.

Public
######

The ``public`` secret type is used to store the public key of an asymmetric
keypair.  For example, a public secret can be used to store the public key
of an RSA keypair.

Currently, there is only one file format accepted for public secrets: a
DER-encoded ``SubjectPublicKeyInfo`` structure as defined by X.509 RFC 5280
that has been Base64 encoded with a PEM header and footer.  This is the
type of public key that is generated by the ``openssl`` tool by default.

The content-type used with public secrets is ``application/octet-stream``.
When storing a public secret with a single POST request, the contents of
the file must be encoded since JSON does not accept newline characters.
In this case, the contents of the file must be Base64 encoded and the
content encoding of ``base64`` can be used.

Example 2.1
***********

Create an RSA keypair and store the public key in Barbican.  For this
example, we will be using a metadata-only POST followed by a PUT.

.. code-block:: bash

    # Create the RSA keypair
    openssl genrsa -out private.pem 2048

    # Extract the public key
    openssl rsa -in private.pem -out public.pem -pubout

    # Submit a metadata-only POST
    curl -vv -H "X-Auth-Token: $TOKEN" \
    -H 'Accept: application/json' \
    -H 'Content-Type: application/json' \
    -d '{"name": "RSA Public Key",
         "secret_type": "public",
         "algorithm": "RSA"}' \
    http://localhost:9311/v1/secrets | python -m json.tool

This should return a reference (URI) for the secret that was created:

.. code-block:: json

    201 Created

    {
        "secret_ref": "http://localhost:9311/v1/secrets/cd20d134-c229-417a-a753-86432ad13bad"
    }

We can use this reference to add the payload with a PUT request:

.. code-block:: bash

    curl -vv -X PUT -H "X-Auth-Token: $TOKEN" \
    -H 'Accept: application/json' \
    -H 'Content-Type: application/octet-stream' \
    --data-binary @public.pem \
    http://localhost:9311/v1/secrets/cd20d134-c229-417a-a753-86432ad13bad

The server should respond with a 2xx response to indicate that the PUT
request was processed successfully:
.. code-block:: json

    204 No Content

Now we should be able to request the metadata and see the new content-type
listed there:

.. code-block:: bash

    curl -vv -H "X-Auth-Token: $TOKEN" \
    -H 'Accept: application/json' \
    http://localhost:9311/v1/secrets/cd20d134-c229-417a-a753-86432ad13bad |
    python -m json.tool

.. code-block:: json

    {
        "algorithm": "RSA",
        "bit_length": null,
        "content_types": {
            "default": "application/octet-stream"
        },
        "created": "2015-04-08T21:45:59.239976",
        "creator_id": "3a7e3d2421384f56a8fb6cf082a8efab",
        "expiration": null,
        "mode": null,
        "name": "RSA Public Key",
        "secret_ref": "http://localhost:9311/v1/secrets/cd20d134-c229-417a-a753-86432ad13bad",
        "secret_type": "public",
        "status": "ACTIVE",
        "updated": "2015-04-08T21:52:57.523969"
    }

Finally, we can use the default content-type listed in ``content_types`` to
retrieve the public key:

.. code-block:: bash

    curl -vv -H "X-Auth-Token: $TOKEN" \
    -H 'Accept: application/octet-stream' \
    -o retrieved_public.pem \
    http://localhost:9311/v1/secrets/cd20d134-c229-417a-a753-86432ad13bad/payload

The ``retrieved_public.pem`` file now has the same contents as the
``public.pem`` file we started with.

Example 2.2
***********

Create an RSA keypair and store the public key in Barbican.  For this
example we will be using a single POST request.

.. code-block:: bash

    # Create the RSA keypair
    openssl genrsa -out private.pem 2048

    # Extract the public key
    openssl rsa -in private.pem -out public.pem -pubout

    # Base64 encode the contents of the public key
    PUB_BASE64=$(base64 < public.pem)

    curl -vv -H "X-Auth-Token: $TOKEN" \
    -H 'Accept: application/json' \
    -H 'Content-Type: application/json' \
    -d '{"name": "RSA Public Key",
         "secret_type": "public",
         "payload": "'"$PUB_BASE64"'",
         "payload_content_type": "application/octet-stream",
         "payload_content_encoding": "base64",
         "algorithm": "RSA"}' \
    http://localhost:9311/v1/secrets | python -m json.tool

This should return a reference (URI) for the secret that was created.

.. code-block:: json

    201 Created

    {
        "secret_ref": "http://localhost:9311/v1/secrets/d553f0ac-c79d-43b4-b165-32594b612ad4"
    }
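
The remaining secret types follow the same two request patterns shown
above.  As one more illustration, a minimal sketch rather than part of the
reference, a ``passphrase`` secret can be stored with a single POST
request; plain text payloads use the ``text/plain`` content type and need
no ``payload_content_encoding``:

.. code-block:: bash

    # Store a plain text passphrase with a single POST request.
    # Plain text payloads need no base64 encoding, so no
    # payload_content_encoding attribute is supplied.
    curl -vv -H "X-Auth-Token: $TOKEN" \
    -H 'Accept: application/json' \
    -H 'Content-Type: application/json' \
    -d '{"name": "A passphrase",
         "secret_type": "passphrase",
         "payload": "my-secret-passphrase",
         "payload_content_type": "text/plain"}' \
    http://localhost:9311/v1/secrets | python -m json.tool

The payload can then be retrieved from the ``/v1/secrets/{uuid}/payload``
endpoint with an ``Accept: text/plain`` header.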
barbican-6.0.0/doc/source/api/index.rst0000666000175100017510000000101213245511001017750 0ustar zuulzuul00000000000000**************************
Barbican API Documentation
**************************

User Guide
##########

API guide docs are built to:
https://developer.openstack.org/api-guide/key-manager/

API Reference
#############

.. toctree::
   :maxdepth: 1

   ./reference/secrets.rst
   ./reference/secret_types.rst
   ./reference/secret_metadata.rst
   ./reference/store_backends.rst
   ./reference/containers.rst
   ./reference/acls.rst
   ./reference/quotas.rst
   ./reference/consumers.rst
   ./reference/orders.rst
barbican-6.0.0/doc/source/configuration/0000775000175100017510000000000013245511177020224 5ustar zuulzuul00000000000000barbican-6.0.0/doc/source/configuration/plugin_backends.rst0000666000175100017510000001006213245511001024073 0ustar zuulzuul00000000000000Using Secret Store Plugins in Barbican
======================================

Summary
-------

By default, Barbican is configured to use a single active secret store
plugin in a deployment.  This means that all new secrets are stored via the
same plugin mechanism (i.e. the same storage backend).

The **Newton** OpenStack release added support for configuring multiple
secret store plugin backends (`Spec Link`_).  As part of this change, a
client can select a preferred plugin backend for storing their secrets at
a project level.

.. _Spec Link: https://review.openstack.org/#/c/263972

Enabling Multiple Barbican Backends
-----------------------------------

Support for multiple backends may be needed in specific deployment or
use-case scenarios, and can be enabled via configuration.  For this, a
Barbican deployment may have more than one secret storage backend defined
in the service configuration.  Project administrators can pre-select one
backend as the preferred choice for secrets created under that project.
Any **new** secret created under that project will use the preferred
backend to store its key material.  When no project-level storage backend
has been selected, new secrets use the global secret storage backend.

A multiple-plugin configuration can be defined as follows.

.. code-block:: ini

    [secretstore]
    # Set to True when multiple plugin backends support is needed
    enable_multiple_secret_stores = True
    stores_lookup_suffix = software, kmip, pkcs11, dogtag

    [secretstore:software]
    secret_store_plugin = store_crypto
    crypto_plugin = simple_crypto

    [secretstore:kmip]
    secret_store_plugin = kmip_plugin
    global_default = True

    [secretstore:dogtag]
    secret_store_plugin = dogtag_plugin

    [secretstore:pkcs11]
    secret_store_plugin = store_crypto
    crypto_plugin = p11_crypto

When `enable_multiple_secret_stores` is enabled (True), the list property
`stores_lookup_suffix` is used to look up the supported plugin names in
their configuration sections.  Each section name is constructed using the
pattern 'secretstore:{one_of_suffix}'.  One of the plugins **must** be
explicitly identified as the global default, i.e. `global_default = True`.
The ordering of the suffixes and the labels used does not matter, as long
as a matching section is defined in the service configuration.

.. note::

    For an existing Barbican deployment, it is recommended to keep the
    existing secretstore and crypto plugin (if applicable) name combination
    as the global default secret store.  This is needed to stay consistent
    with existing behavior.

.. warning::

    When multiple plugin support is enabled, the
    `enabled_secretstore_plugins` and `enabled_crypto_plugins` values are
    **not** used to instantiate the relevant plugins.  Only the mechanism
    described above is used to identify and instantiate store and crypto
    plugins.

Multiple backends can be useful in the following usage scenarios (an API
sketch for managing the project-level preference follows this list).

* In a deployment, a deployer may be comfortable storing their dev/test
  resources using a low-security secret store, such as a backend using
  software-only crypto, but may want to use an HSM-backed secret store for
  production resources.
* In a deployment, for certain use cases where a client requires highly
  concurrent access to stored keys, an HSM might not be a good storage
  backend.  Scaling HSMs horizontally to provide higher throughput is also
  costly compared to a database-backed store.
* HSM devices generally have limited storage capacity, so a deployment will
  have to proactively monitor the number of stored keys to remain under the
  limit.  This is more applicable to the KMIP backend than to the PKCS#11
  backend because of the plugins' different storage approaches.  It can
  also result from the first scenario above, where a deployment stores
  non-sensitive (dev/test environment) encryption keys in an HSM.
* Barbican running as an IaaS service or platform component, where some
  classes of client services have strict compliance requirements (e.g.
  FIPS) and so will use HSM-backed plugins, whereas others may be
  comfortable storing keys with a software-only crypto plugin.
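
Once the feature flag is enabled, the project-level preference itself is
managed through barbican's secret-stores resource.  The following is a
minimal sketch, assuming a local barbican instance; ``{ss_uuid}`` is a
placeholder for the UUID of one of the listed backends, and the exact
response fields should be checked against the API reference:

.. code-block:: bash

    # List the secret store backends configured in this deployment
    curl -H "X-Auth-Token: $TOKEN" \
         http://localhost:9311/v1/secret-stores | python -m json.tool

    # As a project administrator, mark one backend as the preferred
    # store for new secrets created in this project
    curl -X POST -H "X-Auth-Token: $TOKEN" \
         http://localhost:9311/v1/secret-stores/{ss_uuid}/preference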
barbican-6.0.0/doc/source/configuration/config.rst0000666000175100017510000000021513245511001022205 0ustar zuulzuul00000000000000.. _barbican.conf:

-------------
barbican.conf
-------------

.. show-options::
   :config-file: etc/oslo-config-generator/barbican.conf
barbican-6.0.0/doc/source/configuration/troubleshooting.rst0000666000175100017510000003177713245511001024200 0ustar zuulzuul00000000000000=====================================
  Troubleshooting your Barbican Setup
=====================================

If you cannot find the answers you're looking for within this document,
you can ask questions on the Freenode IRC channel ``#openstack-barbican``

Getting a Barbican HTTP 401 error after a successful authentication to Keystone
-------------------------------------------------------------------------------

What you might see
^^^^^^^^^^^^^^^^^^

You get an HTTP 401 Unauthorized response even with a valid token

.. code-block:: bash

    curl -X POST -H "X-Auth-Token: $TOKEN" -H "Content-type: application/json" \
    -d '{"payload": "my-secret-here", "payload_content_type": "text/plain"}' \
    http://localhost:9311/v1/secrets

Caused by
^^^^^^^^^

Expired signing cert on the Barbican server.

How to avoid
^^^^^^^^^^^^

Check for an expired Keystone signing certificate on your Barbican server.
Look at the expiration date in ``/tmp/barbican/cache/signing_cert.pem``.
If it is expired, then follow these steps.

#. On your Keystone server, verify that signing_cert.pem has the same
   expiration date as the one on your Barbican machine.  You can normally
   find ``signing_cert.pem`` on your Keystone server in
   ``/etc/keystone/ssl/certs``.

#. If the cert matches, then follow these steps to create a new one:

   #. Delete it from both your Barbican and Keystone servers.
   #. Edit ``/etc/keystone/ssl/certs/index.txt.attr`` and set
      unique_subject to no.
   #. Run ``keystone-manage pki_setup`` to create a new
      ``signing_cert.pem``.
   #. The updated cert will be downloaded to your Barbican server the next
      time you hit the Barbican API.

#. If the cert **doesn't match** then delete the ``signing_cert.pem`` from
   your Barbican server.  Do not delete it from Keystone.  The cert from
   Keystone will be downloaded to your machine the next time you hit the
   Barbican API.

Returned refs use localhost instead of the correct hostname
------------------------------------------------------------

What you might see
^^^^^^^^^^^^^^^^^^

.. code-block:: bash

    curl -X POST \
    -H "Content-type: application/json" -H "X-Auth-Token: $TOKEN" -d \
    '{"payload": "my-secret-here", "payload_content_type": "text/plain"}' \
    http://myhostname.com/v1/secrets

    # Response:
    {
        "secret_ref": "http://localhost:9311/v1/secrets/UUID_HERE"
    }

Caused by
^^^^^^^^^

The default response host name configuration has not been updated to the
endpoint's host name (typically the load balancer's DNS name and port).

How to avoid
^^^^^^^^^^^^

Change your ``barbican.conf`` file's ``host_href`` setting from
``localhost:9311`` to the correct host name (myhostname.com in the example
above).

Barbican's tox tests fail to run on my Mac
--------------------------------------------

What you might see
^^^^^^^^^^^^^^^^^^

``clang: error: unknown argument: '-mno-fused-madd'``

How to avoid
^^^^^^^^^^^^

There is a `great blog article`__ that provides more details on the error
and how to work around it.
__ https://langui.sh/2014/03/10/wunused-command-line-argument-hard-error-in-future-is-a-harsh-mistress/

Barbican's tox tests fail to find ffi.h on my Mac
-------------------------------------------------

What you might see
^^^^^^^^^^^^^^^^^^

.. code-block:: text

    c/_cffi_backend.c:13:10: fatal error: 'ffi.h' file not found
    ...
    ERROR: could not install deps [...]; v = InvocationError('...', 1)

How to avoid
^^^^^^^^^^^^

Be sure that xcode and the command line tools are up to date.  The easiest
way is to run ``xcode-select --install`` from an OS X command line.  Be
sure to say yes when asked if you want to install the command line tools.
Now ``ls /usr/include/ffi/ffi.h`` should show that the missing file exists,
and the tox tests should run.

Barbican's tox tests fail with "ImportError: No module named _bsddb"
--------------------------------------------------------------------

What you might see
^^^^^^^^^^^^^^^^^^

.. code-block:: text

    ImportError: No module named _bsddb

How to avoid
^^^^^^^^^^^^

Running tests via tox (which uses testr) will create a .testrepository
directory containing, among other things, data files.  Those data files
may be created with bsddb, if it is available in the environment.  This
can cause problems if you run in an environment that does not have bsddb.
To resolve this, delete your .testrepository directory and run tox again.

uWSGI logs 'OOPS ! failed loading app'
--------------------------------------

What you might see
^^^^^^^^^^^^^^^^^^

.. code-block:: text

    ...
    spawned uWSGI master process (pid: 59190)
    spawned uWSGI worker 1 (pid: 59191, cores: 1)
    spawned uWSGI worker 1 (pid: 59192, cores: 1)
    Loading paste environment: config:/etc/barbican/barbican-api-paste.ini
    WSGI app 0 (mountpoint='') ready in 0 seconds on interpreter \
    0x7fd098c08520 pid: 59191 (default app)
    OOPS ! failed loading app in worker 1 (pid 59192) :( trying again...
    Respawned uWSGI worker 1 (new pid: 59193)
    Loading paste environment: config:/etc/barbican/barbican-api-paste.ini
    OOPS ! failed loading app in worker 1 (pid 59193) :( trying again...
    worker respawning too fast !!! i have to sleep a bit (2 seconds)...
    ...

.. note:: You will not see any useful logs or stack traces with this error!

Caused by
^^^^^^^^^

The vassal (worker) processes are not able to access the datastore.

How to avoid
^^^^^^^^^^^^

Check the ``sql_connection`` in your ``barbican.conf`` file to make sure
that it references a valid reachable database.

"Cannot register CLI option" error when importing logging
-----------------------------------------------------------

What you might see
^^^^^^^^^^^^^^^^^^

.. code-block:: text

    ...
    File ".../oslo_config/cfg.py", line 1275, in register_cli_opt
        raise ArgsAlreadyParsedError("cannot register CLI option")
    ArgsAlreadyParsedError: arguments already parsed: cannot register CLI option

Caused by
^^^^^^^^^

An attempt to call oslo.config's ``register_cli_opt()`` function after the
configuration arguments were 'parsed' (see the comments and method in `the
oslo.config project's cfg.py file`__ for details).

__ https://github.com/openstack/oslo.config/blob/master/oslo_config/cfg.py

How to avoid
^^^^^^^^^^^^

Instead of calling ``import barbican.openstack.common.log as logging`` to
get a logger, call ``from barbican.common import config`` with this to get
a logger to use in your source file: ``LOG = config.getLogger(__name__)``.

Responder raised TypeError: 'NoneType' object has no attribute '__getitem__'
----------------------------------------------------------------------------

What you might see
^^^^^^^^^^^^^^^^^^

..
code-block:: text ... 2013-04-14 14:17:56 [FALCON] [ERROR] POST \ /da71dfbc-a959-4ad3-bdab-5ee190ce7515/csrs? => Responder raised \ TypeError: 'NoneType' object has no attribute '__getitem__' Caused by ^^^^^^^^^ Forgetting to set your non-nullable FKs in entities you create via ``XxxxResource`` classes. How to avoid ^^^^^^^^^^^^ Don't forget to set any FKs defined on an entity prior to using the repository to create it. uWSGI config issue: ``ImportError: No module named site`` --------------------------------------------------------- What you might see ^^^^^^^^^^^^^^^^^^ .. code-block:: text ... uwsgi socket 0 bound to TCP address :9311 fd 3 Python version: 2.7.3 (...) [...] Set PythonHome to ./.venv ImportError: No module named site Caused by ^^^^^^^^^ * Can't locate the Python virtualenv for the Barbican project. * Either the 'broker' setting above is incorrect, or else you haven't started a queue process yet (such as RabbitMQ) How to avoid ^^^^^^^^^^^^ Make sure the uWSGI config file at ``etc/barbican/barbican-api-paste.ini`` is configured correctly (see installation steps above), esp. if the virtualenv folder is named differently than the ``.ini`` file has. REST Request Fails with JSON error ---------------------------------- What you might see ^^^^^^^^^^^^^^^^^^ .. code-block:: json { title: "Malformed JSON" } Caused by ^^^^^^^^^ Barbican REST server cannot parse the incoming JSON message from your REST client. How to avoid ^^^^^^^^^^^^ Make sure you are submitting properly formed JSON. For example, are there commas after all but the last name/value pair in a list? Are there quotes around all name/values that are text-based? Are the types of values matching what is expected (i.e. integer and boolean types instead of quoted text)? If you are using the Advanced REST Client with Chrome, and you tried to upload a file to the secrets PUT call, not only will this fail due to the multi-part format it uses, but it will also try to submit this file for every REST request you make thereafter, causing this error. Close the tab/window with the client, and restart it again. Crypto Mime Type Not Supported when I try to run tests or hit the API --------------------------------------------------------------------- What you might see ^^^^^^^^^^^^^^^^^^ A stack trace that has this in it (for example): .. code-block:: text CryptoMimeTypeNotSupportedException: Crypto Mime Type of 'text/plain' not \ supported Caused by ^^^^^^^^^ The Barbican plugins are not installed into a place where the Python plugin manager can find them. How to avoid ^^^^^^^^^^^^ Make sure you run the ``pip install -e .``. Python "can't find module errors" with the uWSGI scripts -------------------------------------------------------- What you might see ^^^^^^^^^^^^^^^^^^ .. code-block:: text *** has_emperor mode detected (fd: 6) *** ... !!! UNABLE to load uWSGI plugin: dlopen(./python_plugin.so, 10): image not \ found !!! ... File "./site-packages/paste/deploy/loadwsgi.py", line 22, in import_string return pkg_resources.EntryPoint.parse("x=" + s).load(False) File "./site-packages/distribute-0.6.35-py2.7.egg/pkg_resources.py", line \ 2015, in load entry = __import__(self.module_name, globals(),globals(), ['__name__']) ImportError: No module named barbican.api.app ... *** Starting uWSGI 1.9.13 (64bit) on [Fri Jul 5 09:59:29 2013] *** Caused by ^^^^^^^^^ The Barbican source modules are not found in the Python path of applications such as uwsgi. 
How to avoid
^^^^^^^^^^^^

Make sure you are running from your virtual env, and that pip was executed
**after** you activated your virtual environment.  This especially includes
the ``pip install -e`` command.  Also, it is possible that your virtual env
gets corrupted, so you might need to rebuild it.

'unable to open database file None None' errors running scripts
---------------------------------------------------------------

What you might see
^^^^^^^^^^^^^^^^^^

.. code-block:: text

    ...
    File "./site-packages/sqlalchemy/engine/strategies.py", line 80, in connect
        return dialect.connect(*cargs, **cparams)
    File "./site-packages/sqlalchemy/engine/default.py", line 283, in connect
        return self.dbapi.connect(*cargs, **cparams)
    OperationalError: (OperationalError) unable to open database file None None
    [emperor] removed uwsgi instance barbican-api.ini
    ...

Caused by
^^^^^^^^^

Destination folder for the sqlite database is not found, or is not
writable.

How to avoid
^^^^^^^^^^^^

Make sure the ``/var/lib/barbican/`` folder exists and is writable by the
user that is running the Barbican API process.

'ValueError: No JSON object could be decoded' with Keystoneclient middleware
----------------------------------------------------------------------------

What you might see
^^^^^^^^^^^^^^^^^^

.. code-block:: text

    ...
    2013-08-15 16:55:15.759 2445 DEBUG keystoneclient.middleware.auth_token \
    [-] Token validation failure. _validate_user_token \
    ./site-packages/keystoneclient/middleware/auth_token.py:711
    ...
    2013-08-15 16:55:15.759 2445 TRACE keystoneclient.middleware.auth_token \
    raise ValueError("No JSON object could be decoded")
    2013-08-15 16:55:15.759 24458 TRACE keystoneclient.middleware.auth_token \
    ValueError: No JSON object could be decoded
    ...
    2013-08-15 16:55:15.766 2445 WARNING keystoneclient.middleware.auth_token \
    [-] Authorization failed for token
    ...
    2013-08-15 16:55:15.766 2445 INFO keystoneclient.middleware.auth_token \
    [-] Invalid user token - rejecting request...

Caused by
^^^^^^^^^

The ``keystoneclient`` middleware component is looking for a ``cms``
command in ``openssl`` that wasn't available before version ``1.0.1``.

How to avoid
^^^^^^^^^^^^

Update openssl.

"accept-encoding of 'gzip,deflate,sdch' not supported"
------------------------------------------------------

What you might see
^^^^^^^^^^^^^^^^^^

.. code-block:: text

    Secret retrieval issue seen - accept-encoding of 'gzip,deflate,sdch' not \
    supported

Caused by
^^^^^^^^^

This might be an issue with the browser you are using, as performing the
request via curl doesn't seem to be affected.

How to avoid
^^^^^^^^^^^^

Other than using a command such as curl to make the REST request, you may
not have many other options.
Edit ``/etc/barbican/barbican-api-paste.ini`` Change the pipeline ``/v1`` value from authenticated ``barbican-api-keystone`` to the unauthenticated ``barbican_api`` .. code-block:: ini [composite:main] use = egg:Paste#urlmap /: barbican_version /v1: barbican_api With every OpenStack service integrated with keystone, its API requires access token to retireve certain information and validate user's information and privileges. If you are running barbican in no auth mode, you have to specify project_id instead of an access token which was retrieved from the token instead. In case of API, replace ``'X-Auth-Token: $TOKEN'`` with ``'X-Project-Id: {project_id}'`` for every API request in :doc:`../api/index`. You can also find detailed explanation to run barbican client with an unauthenticated context `here `__ and run barbican CLI in no auth mode `here `__. barbican-6.0.0/doc/source/configuration/index.rst0000666000175100017510000000027413245511001022054 0ustar zuulzuul00000000000000Setting up Barbican =================== .. toctree:: :maxdepth: 1 keystone.rst troubleshooting.rst noauth.rst audit.rst plugin_backends.rst config.rst policy.rst barbican-6.0.0/doc/source/configuration/policy.rst0000666000175100017510000000047113245511001022243 0ustar zuulzuul00000000000000.. _barbican-policy-generator.conf: ==================== Policy configuration ==================== Configuration ~~~~~~~~~~~~~ The following is an overview of all available policies in Barbican. For a sample configuration file. .. show-policy:: :config-file: ../../etc/oslo-config-generator/policy.conf barbican-6.0.0/doc/source/configuration/keystone.rst0000666000175100017510000000576413245511001022617 0ustar zuulzuul00000000000000Using Keystone Middleware with Barbican ======================================== Prerequisites -------------- To enable Keystone integration with Barbican you'll need a relatively current version of Keystone. It is sufficient if you are installing an OpenStack cloud where all services including Keystone and Barbican are from the same release. If you don't have an instance of Keystone available, you can use one of the following ways to setup your own. #. `Simple Dockerized Keystone`_ #. `Installing Keystone`_ #. An OpenStack cloud with Keystone (Devstack in the simplest case) .. _Simple Dockerized Keystone: https://registry.hub.docker.com/u/ jmvrbanac/simple-keystone/ .. _Installing Keystone: https://docs.openstack.org/keystone/latest/ install/index.html Hooking up Barbican to Keystone -------------------------------- Assuming that you've already setup your Keystone instance, connecting Barbican to Keystone is quite simple. When completed, Barbican should require a valid X-Auth-Token to be provided with all API calls except the get version call. 1. Turn off any active instances of Barbican 2. Edit ``/etc/barbican/barbican-api-paste.ini`` 1. Change the pipeline ``/v1`` value from unauthenticated ``barbican_api`` to the authenticated ``barbican-api-keystone``. This step will not be necessary on barbican from OpenStack Newton or higher, since barbican will default to using Keystone authentication as of OpenStack Newton. .. code-block:: ini [composite:main] use = egg:Paste#urlmap /: barbican_version /v1: barbican-api-keystone 2. Replace ``authtoken`` filter values to match your Keystone setup .. 
      .. code-block:: ini

          [filter:authtoken]
          paste.filter_factory = keystonemiddleware.auth_token:filter_factory
          auth_plugin = password
          username = {YOUR_KEYSTONE_USERNAME}
          password = {YOUR_KEYSTONE_PASSWORD}
          user_domain_id = {YOUR_KEYSTONE_USER_DOMAIN}
          project_name = {YOUR_KEYSTONE_PROJECT}
          project_domain_id = {YOUR_KEYSTONE_PROJECT_DOMAIN}
          auth_uri = http://{YOUR_KEYSTONE_ENDPOINT}:5000/v3
          auth_url = http://{YOUR_KEYSTONE_ENDPOINT}:35357/v3

      Alternatively, you can shorten this to

      .. code-block:: ini

          [filter:authtoken]
          paste.filter_factory = keystonemiddleware.auth_token:filter_factory

      and store Barbican's Keystone credentials in the ``[keystone_authtoken]``
      section of ``/etc/barbican/barbican.conf``

      .. code-block:: ini

          [keystone_authtoken]
          auth_plugin = password
          username = {YOUR_KEYSTONE_USERNAME}
          password = {YOUR_KEYSTONE_PASSWORD}
          user_domain_id = {YOUR_KEYSTONE_USER_DOMAIN}
          project_name = {YOUR_KEYSTONE_PROJECT}
          project_domain_id = {YOUR_KEYSTONE_PROJECT_DOMAIN}
          auth_uri = http://{YOUR_KEYSTONE_ENDPOINT}:5000/v3
          auth_url = http://{YOUR_KEYSTONE_ENDPOINT}:35357/v3

3. Start Barbican ``{barbican_home}/bin/barbican.sh start``

barbican-6.0.0/doc/source/configuration/audit.rst

Using Audit Middleware with Barbican
====================================

Background
----------

`Audit middleware`_ is Python middleware that is added to a service's request
processing pipeline via paste deploy filters. Audit middleware constructs
audit event data in `CADF format`_.

Audit middleware supports delivery of CADF audit events via the Oslo messaging
notifier capability. Based on the `notification_driver` configuration, audit
events can be routed to messaging infrastructure (notification_driver =
messagingv2) or to a log file (notification_driver = log).

Audit middleware creates two events per REST API interaction. The first event
contains information extracted from the request data, and the second contains
the request outcome (response).

.. _Audit middleware: https://docs.openstack.org/keystonemiddleware/latest/audit.html
.. _CADF format: http://www.dmtf.org/sites/default/files/standards/documents/DSP2038_1.0.0.pdf

Enabling Audit for API Requests
-------------------------------

Audit middleware is available as part of the `keystonemiddleware`_ (>= 1.6)
library. Assuming a barbican deployment is already using keystone for token
validation, auditing support requires only configuration changes. The
middleware depends on the Oslo messaging library, which it uses for audit
event delivery. The pyCADF library is used to create events in CADF format.

* Enable Middleware : `Enabling Middleware Link`_ . The change is primarily in
  the service paste deploy configuration.
* Configure Middleware : `Configuring Middleware Link`_ . You can use the
  provided audit mapping file. If there is no custom mapping for actions or
  paths, then the related mapping values are derived from the taxonomy defined
  in the pyCADF library.

.. _keystonemiddleware: https://github.com/openstack/keystonemiddleware/blob/master/keystonemiddleware/audit
.. _Enabling Middleware Link: https://docs.openstack.org/keystonemiddleware/latest/audit.html#enabling-audit-middleware
.. _Configuring Middleware Link: https://docs.openstack.org/keystonemiddleware/latest/audit.html#configure-audit-middleware

.. note:: The audit middleware filter should be included after Keystone
   middleware’s keystone_authtoken middleware in the request pipeline. This is
   needed so that audit middleware can utilize environment variables set by
   keystone_authtoken middleware.
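For reference, a minimal paste deploy wiring for the audit middleware might
look like the following sketch (the pipeline name matches the steps below; the
``audit_map_file`` path assumes the map file has been copied to
``/etc/barbican`` as described there):

.. code-block:: ini

    # The audit filter must come after the authtoken filter so that it can
    # read the environment variables set by keystone_authtoken.
    [pipeline:barbican-api-keystone-audit]
    pipeline = authtoken context audit apiapp

    [filter:audit]
    paste.filter_factory = keystonemiddleware.audit:filter_factory
    audit_map_file = /etc/barbican/api_audit_map.conf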
Steps
#####

1. Turn off any active instances of Barbican.

#. Copy *api_audit_map.conf* to the ``/etc/barbican`` directory.

#. Edit ``/etc/barbican/barbican-api-paste.ini``

   Replace the /v1 app pipeline from ``barbican_api`` to the
   ``barbican-api-keystone-audit`` pipeline

   .. code-block:: ini

       [pipeline:barbican-api-keystone-audit]
       pipeline = authtoken context audit apiapp

#. Edit ``barbican.conf`` to set the *notification_driver* value.

#. Start Barbican ``{barbican_home}/bin/barbican.sh start``

Sample Audit Event
------------------

The following is a sample audit event for a symmetric key creation request:

.. code-block:: json

    {
       "priority":"INFO",
       "event_type":"audit.http.request",
       "timestamp":"2015-12-11 00:44:26.412076",
       "publisher_id":"uwsgi",
       "payload":{
          "typeURI":"http://schemas.dmtf.org/cloud/audit/1.0/event",
          "eventTime":"2015-12-11T00:44:26.410768+0000",
          "target":{
             "typeURI":"service/security/keymanager/secrets",
             "addresses":[
                {
                   "url":"http://{barbican_admin_host}:9311",
                   "name":"admin"
                },
                {
                   "url":"http://{barbican_internal_host}:9311",
                   "name":"private"
                },
                {
                   "url":"https://{barbican_public_host}:9311",
                   "name":"public"
                }
             ],
             "name":"barbican_service_user",
             "id":"barbican"
          },
          "observer":{
             "id":"target"
          },
          "tags":[
             "correlation_id?value=openstack:7e0fe4a6-e258-477e-a1c9-0fd0921a8435"
          ],
          "eventType":"activity",
          "initiator":{
             "typeURI":"service/security/account/user",
             "name":"cinder_user",
             "credential":{
                "token":"***",
                "identity_status":"Confirmed"
             },
             "host":{
                "agent":"curl/7.38.0",
                "address":"192.168.245.2"
             },
             "project_id":"8eabee0a4c4e40f882df8efbce695526",
             "id":"513e8682f23446ceb598b6b0f5c4482b"
          },
          "action":"create",
          "outcome":"pending",
          "id":"openstack:3a6a961c-9ada-4b81-9095-90968d896c41",
          "requestPath":"/v1/secrets"
       },
       "message_id":"afc3fd93-51e9-4c80-b330-983e66962265"
    }

Refer to the `Ceilometer audit wiki`_ to identify the meaning of the different
fields in an audit event, mapped to the **7 "W"s of Audit and Compliance**.

.. _Ceilometer audit wiki: https://wiki.openstack.org/wiki/Ceilometer/blueprints/support-standard-audit-formats#CADF_Model_is_designed_to_answer_all_Audit_and_Compliance_Questions

barbican-6.0.0/doc/source/sample_config.rst

==================================
Barbican Sample Configuration File
==================================

Use the ``barbican.conf`` file to configure most Key Manager service options:

.. literalinclude:: _static/barbican.conf.sample

barbican-6.0.0/doc/source/conf.py

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import sys

sys.path.insert(0, os.path.abspath('../..'))

# -- General configuration ----------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [ 'sphinx.ext.autodoc', # 'sphinx.ext.intersphinx', 'openstackdocstheme', 'oslo_config.sphinxext', 'oslo_config.sphinxconfiggen', 'oslo_policy.sphinxext', 'oslo_policy.sphinxpolicygen', ] config_generator_config_file = [ ('../../etc/oslo-config-generator/barbican.conf', '_static/barbican'), ] policy_generator_config_file = '../../etc/oslo-config-generator/policy.conf' sample_policy_basename = '_static/barbican' # autodoc generation is a bit aggressive and a nuisance when doing heavy # text edit cycles. # execute "export SPHINX_DEBUG=1" in your terminal to disable # The suffix of source filenames. source_suffix = '.rst' # The master toctree document. master_doc = 'index' # General information about the project. project = u'Barbican' copyright = u'2014, OpenStack Foundation' repository_name = 'openstack/barbican' bug_project = 'barbican' bug_tag = '' # Add any paths that contain "extra" files, such as .htaccess or # robots.txt. html_extra_path = ['_extra'] # Must set this variable to include year, month, day, hours, and minutes. html_last_updated_fmt = '%Y-%m-%d %H:%M' # If true, '()' will be appended to :func: etc. cross-reference text. add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). add_module_names = True # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # -- Options for HTML output -------------------------------------------------- # The theme to use for HTML and HTML Help pages. Major themes that come with # Sphinx are currently 'default' and 'sphinxdoc'. # html_theme_path = ["."] # html_theme = '_theme' html_theme = 'openstackdocs' # html_static_path = ['static'] html_theme_options = {} # Output file base name for HTML help builder. htmlhelp_basename = '%sdoc' % project # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass # [howto/manual]). latex_documents = [ ('index', '%s.tex' % project, u'%s Documentation' % project, u'OpenStack Foundation', 'manual'), ] # Example configuration for intersphinx: refer to the Python standard library. # intersphinx_mapping = {'http://docs.python.org/': None} barbican-6.0.0/doc/source/sample_policy.rst0000666000175100017510000000140013245511001020726 0ustar zuulzuul00000000000000====================== Barbican Sample Policy ====================== The following is a sample Barbican policy file that has been auto-generated from default policy values in code. If you're using the default policies, then the maintenance of this file is not necessary, and it should not be copied into a deployment. Doing so will result in duplicate policy definitions. It is here to help explain which policy operations protect specific Barbican APIs, but it is not suggested to copy and paste into a deployment unless you're planning on providing a different policy for an operation that is not the default. The sample policy file can also be viewed in `file form <_static/barbican.policy.yaml.sample>`_. .. 
literalinclude:: _static/barbican.policy.yaml.sample

barbican-6.0.0/doc/source/contributor/
barbican-6.0.0/doc/source/contributor/testing.rst

Writing and Running Barbican Tests
==================================

As a part of every code review that is submitted to the Barbican project there
are a number of gating jobs which aid in the prevention of regression issues
within Barbican. As a result, a Barbican developer should be familiar with
running Barbican tests locally.

For your convenience we provide the ability to run all tests through the
``tox`` utility. If you are unfamiliar with tox, please refer to the
`tox documentation`_ for assistance.

.. _`tox documentation`: https://tox.readthedocs.org/en/latest/

Unit Tests
----------

Currently, we provide tox environments for Python 2.7 and 3.5. By default, all
available test environments within the tox configuration will execute when
calling ``tox``. If you want to run them independently, you can do so with the
following command:

.. code-block:: bash

    # Executes tests on Python 2.7
    tox -e py27

.. note::

    If you do not have the appropriate Python versions available, consider
    setting up PyEnv to install multiple versions of Python. See the
    documentation regarding :doc:`/contributor/dev` for more information.

.. note::

    Individual unit tests can also be run, using the following commands:

    .. code-block:: bash

        # runs a single test with the function named
        # test_can_create_new_secret_one_step
        tox -e py27 -- test_can_create_new_secret_one_step

        # runs only tests in the WhenTestingSecretsResource class and
        # the WhenTestingCAsResource class
        tox -e py27 -- '(WhenTestingSecretsResource|WhenTestingCAsResource)'

    The function name or class specified must be one located in the
    `barbican/tests` directory.

    Groups of tests can also be run with a regex match after the ``--``. For
    more information on what can be done with ``testr``, please see:
    http://testrepository.readthedocs.org/en/latest/MANUAL.html

You can also set up breakpoints in the unit tests. This can be done by adding
``import pdb; pdb.set_trace()`` to the line of the unit test you want to
examine, then running the following command:

.. code-block:: bash

    # Executes tests on Python 2.7
    tox -e debug

.. note::

    For a list of pdb commands, please see:
    https://docs.python.org/2/library/pdb.html

**Python 3.5**

In order to run the unit tests within the Python 3.5 unit testing environment
you need to make sure you have all necessary packages installed.

- On Ubuntu/Debian::

    sudo apt-get install python3-dev

- On Fedora 21/RHEL7/CentOS7::

    sudo yum install python3-devel

- On Fedora 22 and higher::

    sudo dnf install python3-devel

You can then run the unit tests within the Python 3.5 environment by invoking
tox as follows:

.. code-block:: bash

    # Executes tests on Python 3.5
    tox -e py35

Functional Tests
----------------

Unlike running unit tests, the functional tests require Barbican and Keystone
services to be running in order to execute. For more information on
:doc:`setting up a Barbican development environment ` and using
:doc:`Keystone with Barbican `, see our accompanying project
documentation.

Once you have the appropriate services running and configured you can execute
the functional tests through tox.

..
code-block:: bash # Execute Barbican Functional Tests tox -e functional By default, the functional tox job will use ``testr`` to execute the functional tests as used in the gating job. .. note:: In order to run an individual functional test function, you must use the following command: .. code-block:: bash # runs a single test with the function named # test_secret_create_then_check_content_types tox -e functional -- test_secret_create_then_check_content_types # runs only tests in the SecretsTestCase class and # the OrdersTestCase class tox -e functional -- '(SecretsTestCase|OrdersTestCase)' The function name or class specified must be one located in the `functionaltests` directory. Groups of tests can also be run with a regex match after the ``--``. For more information on what can be done with ``testr``, please see: http://testrepository.readthedocs.org/en/latest/MANUAL.html Remote Debugging ---------------- In order to be able to hit break-points on API calls, you must use remote debugging. This can be done by adding ``import rpdb; rpdb.set_trace()`` to the line of the API call you wish to test. For example, adding the breakpoint in ``def on_post`` in ``barbican.api.controllers.secrets.py`` will allow you to hit the breakpoint when a ``POST`` is done on the secrets URL. .. note:: After performing the ``POST`` the application will freeze. In order to use ``rpdb``, you must open up another terminal and run the following: .. code-block:: bash # enter rpdb using telnet telnet localhost 4444 Once in rpdb, you can use the same commands as pdb, as seen here: https://docs.python.org/2/library/pdb.html barbican-6.0.0/doc/source/contributor/getting_involved.rst0000666000175100017510000000455713245511001024027 0ustar zuulzuul00000000000000Getting Involved =================== The best way to join the community and get involved is to talk with others online or at a meetup and offer contributions. Here are some of the many ways you can contribute to the Barbican project\: * Development and Code Reviews * Bug reporting/Bug fixes * Wiki and Documentation * Blueprints/Specifications * Testing * Deployment scripts Freenode IRC (Chat) -------------------- You can find Barbicaneers in our publicly accessible channel on `freenode`_ ``#openstack-barbican``. All conversations are logged and stored for your convenience at `eavesdrop.openstack.org`_. For more information regarding OpenStack IRC channels please visit the `OpenStack IRC Wiki`_. .. _`freenode`: https://freenode.net .. _`OpenStack IRC Wiki`: https://wiki.openstack.org/wiki/IRC .. _`eavesdrop.openstack.org`: http://eavesdrop.openstack.org/irclogs/ %23openstack-barbican/ Mailing List -------------- The mailing list email is openstack@lists.openstack.org. This is a common mailing list across the OpenStack projects. If you wish to ask questions or have a discussion related to Barbican include ``[barbican]`` in your email subject line. To participate on the mailing list\: * `Subscribe`_ to the mailing list * Browse the `mailing list archives`_ .. _`Subscribe`: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack .. _`mailing list archives`: http://lists.openstack.org/pipermail/openstack Launchpad ----------- Like other OpenStack related projects, we utilize Launchpad for our bug and release tracking. * `Barbican Launchpad Project`_ .. 
_`Barbican Launchpad Project`: https://launchpad.net/barbican Source Repository ------------------- Like other OpenStack related projects, the official Git repository is available on `git.openstack.org`_; however, the repository is also mirrored to GitHub for easier browsing. * `Barbican on GitHub`_ .. _`git.openstack.org`: http://git.openstack.org/cgit/openstack/barbican .. _`Barbican on GitHub`: https://github.com/openstack/barbican Gerrit -------- Like other OpenStack related projects, we utilize the OpenStack Gerrit review system for all code reviews. If you're unfamiliar with using the OpenStack Gerrit review system, please review the `Gerrit Workflow`_ wiki documentation. .. _`Gerrit Workflow`: http://docs.openstack.org/infra/manual/developers.html#development-workflow barbican-6.0.0/doc/source/contributor/plugin/0000775000175100017510000000000013245511177021225 5ustar zuulzuul00000000000000barbican-6.0.0/doc/source/contributor/plugin/secret_store.rst0000666000175100017510000001361313245511001024450 0ustar zuulzuul00000000000000.. module:: barbican.plugin.interface.secret_store =============================== Secret Store Plugin Development =============================== This guide describes how to develop a custom secret store plugin for use by Barbican. Barbican supports two storage modes for secrets: a secret store mode (detailed on this page), and a :doc:`cryptographic mode `. The secret store mode offloads both encryption/decryption and encrypted secret storage to the plugin implementation. Barbican includes plugin interfaces to a Red Hat Dogtag service and to a Key Management Interoperability Protocol (KMIP) compliant security appliance. Since the secret store mode defers the storage of encrypted secrets to plugins, Barbican core does not need to store encrypted secrets into its data store, unlike the :doc:`cryptographic mode `. To accommodate the discrepancy between the two secret storage modes, a secret store to cryptographic plugin adapter has been included in Barbican core, as detailed in :ref:`plugin-secret-store-crypto-adapter-label` section below. ``secret_store`` Module ======================= The ``barbican.plugin.interface.secret_store`` module contains the classes needed to implement a custom plugin. These classes include the ``SecretStoreBase`` abstract base class which custom plugins should inherit from, as well as several Data Transfer Object (DTO) classes used to transfer data between Barbican and the plugin. Data Transfer Objects ===================== The DTO classes are used to wrap data that is passed from Barbican to the plugin as well as data that is returned from the plugin back to Barbican. They provide a level of isolation between the plugins and Barbican's internal data models. .. autoclass:: SecretDTO .. autoclass:: AsymmetricKeyMetadataDTO Secret Parameter Objects ======================== The secret parameter classes encapsulate information about secrets to be stored within Barbican and/or its plugins. .. autoclass:: SecretType .. autoclass:: KeyAlgorithm .. autoclass:: KeySpec Plugin Base Class ================= Barbican secret store plugins should implement the abstract base class ``SecretStoreBase``. Concrete implementations of this class should be exposed to Barbican using ``stevedore`` mechanisms explained in the configuration portion of this guide. .. 
autoclass:: SecretStoreBase :members: Barbican Core Plugin Sequence ============================= The sequence that Barbican invokes methods on ``SecretStoreBase`` depends on the requested action as detailed next. Note that these actions are invoked via the ``barbican.plugin.resources`` module, which in turn is invoked via Barbican's API and Worker processes. **For secret storage actions**, Barbican core calls the following methods: 1. ``get_transport_key()`` - If a transport key is requested to upload secrets for storage, this method asks the plugin to provide the transport key. 2. ``store_secret_supports()`` - Asks the plugin if it can support storing a secret based on the ``KeySpec`` parameter information as described above. 3. ``store_secret()`` - Asks the plugin to perform encryption of an unencrypted secret payload as provided in the ``SecretDTO`` above, and then to store that secret. The plugin then returns a dictionary of information about that secret (typically a unique reference to that stored secret that only makes sense to the plugin). Barbican core will then persist this dictionary as a JSON attribute within its data store, and also hand it back to the plugin for secret retrievals later. The name of the plugin used to perform this storage is also persisted by Barbican core, to ensure we retrieve this secret only with this plugin. **For secret retrievals**, Barbican core will select the same plugin as was used to store the secret, and then invoke its ``get_secret()`` method to return the unencrypted secret. **For symmetric key generation**, Barbican core calls the following methods: 1. ``generate_supports()`` - Asks the plugin if it can support generating a symmetric key based on the ``KeySpec`` parameter information as described above. 2. ``generate_symmetric_key()`` - Asks the plugin to both generate and store a symmetric key based on the ``KeySpec`` parameter information. The plugin can then return a dictionary of information for the stored secret similar to the storage process above, which Barbican core will persist for later retrieval of this generated secret. **For asymmetric key generation**, Barbican core calls the following methods: 1. ``generate_supports()`` - Asks the plugin if it can support generating an asymmetric key based on the ``KeySpec`` parameter information as described above. 2. ``generate_asymmetric_key()`` - Asks the plugin to both generate and store an asymmetric key based on the ``KeySpec`` parameter information. The plugin can then return an ``AsymmetricKeyMetadataDTO`` object as described above, which contains secret metadata for each of the three secrets generated and stored by this plugin: private key, public key and an optional passphrase. Barbican core will then persist information for these secrets, and also create a container to group them. .. _plugin-secret-store-crypto-adapter-label: The Cryptographic Plugin Adapter ================================ Barbican core includes a specialized secret store plugin used to adapt to cryptographic plugins, called ``StoreCryptoAdapterPlugin``. This plugin functions as a secret store plugin, but it directs secret related operations to :doc:`cryptographic plugins ` for encryption/decryption/generation operations. Because cryptographic plugins do not store encrypted secrets, this adapter plugin provides this storage capability via Barbican's data store. This adapter plugin also uses ``stevedore`` to access and utilize cryptographic plugins that can support secret operations. 
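To tie the preceding sections together, the following is a minimal sketch of a
custom secret store plugin: an insecure in-memory store, for illustration
only. The method names follow the storage and retrieval sequence described
above, but the exact signatures shown here are the author's assumptions and
should be checked against ``SecretStoreBase``; a real plugin must implement
the full set of abstract methods on that base class.

.. code-block:: python

    import uuid

    from barbican.plugin.interface import secret_store


    class InMemorySecretStore(secret_store.SecretStoreBase):
        """Illustrative in-memory plugin -- never use this for real secrets."""

        def __init__(self):
            self._store = {}  # plugin-assigned ID -> stored SecretDTO fields

        def get_plugin_name(self):
            return 'InMemorySecretStore'

        def store_secret_supports(self, key_spec):
            # Claim support for storing any secret; a real plugin would
            # inspect key_spec (algorithm, bit length) before agreeing.
            return True

        def store_secret(self, secret_dto):
            # Persist the payload and return the metadata dictionary that
            # Barbican core stores and hands back for later retrievals.
            secret_id = str(uuid.uuid4())
            self._store[secret_id] = (secret_dto.type, secret_dto.secret,
                                      secret_dto.key_spec,
                                      secret_dto.content_type)
            return {'secret_id': secret_id}

        def get_secret(self, secret_type, secret_metadata):
            stype, payload, key_spec, content_type = (
                self._store[secret_metadata['secret_id']])
            return secret_store.SecretDTO(stype, payload, key_spec,
                                          content_type)

        def generate_supports(self, key_spec):
            # This sketch does not support key generation.
            return False

        def generate_symmetric_key(self, key_spec):
            raise NotImplementedError('Key generation is not supported.')

        def generate_asymmetric_key(self, key_spec):
            raise NotImplementedError('Key generation is not supported.')

        def delete_secret(self, secret_metadata):
            self._store.pop(secret_metadata['secret_id'], None)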
barbican-6.0.0/doc/source/contributor/plugin/crypto.rst

.. module:: barbican.plugin.crypto.base

================================
Cryptographic Plugin Development
================================

This guide describes how to develop a custom cryptographic plugin for use by
Barbican.

Barbican supports two storage modes for secrets: a cryptographic mode
(detailed on this page), and a :doc:`secret store mode `. The
cryptographic mode stores encrypted secrets in Barbican's data store,
utilizing a cryptographic process or appliance (such as a hardware security
module (HSM)) to perform the encryption/decryption. Barbican includes a
PKCS11-based interface to SafeNet HSMs.

Note that cryptographic plugins are not invoked directly from Barbican core,
but rather via a :doc:`secret store mode ` plugin adapter class,
further described in :ref:`plugin-secret-store-crypto-adapter-label`.

``crypto`` Module
=================

The ``barbican.plugin.crypto`` module contains the classes needed to implement
a custom plugin. These classes include the ``CryptoPluginBase`` abstract base
class which custom plugins should inherit from, as well as several Data
Transfer Object (DTO) classes used to transfer data between Barbican and the
plugin.

Data Transfer Objects
=====================

The DTO classes are used to wrap data that is passed from Barbican to the
plugin as well as data that is returned from the plugin back to Barbican. They
provide a level of isolation between the plugins and Barbican's internal data
models.

.. autoclass:: KEKMetaDTO

.. autoclass:: EncryptDTO

.. autoclass:: DecryptDTO

.. autoclass:: GenerateDTO

Plugin Base Class
=================

Barbican cryptographic plugins should implement the abstract base class
``CryptoPluginBase``. Concrete implementations of this class should be exposed
to barbican using ``stevedore`` mechanisms explained in the configuration
portion of this guide.

.. autoclass:: CryptoPluginBase
   :members:

Barbican Core Plugin Sequence
=============================

Barbican invokes a different sequence of methods on the ``CryptoPluginBase``
plugin depending on the requested action. Note that these actions are invoked
via the secret store adapter class ``StoreCryptoAdapterPlugin`` which is
further described in :ref:`plugin-secret-store-crypto-adapter-label`.

**For secret storage actions**, Barbican core calls the following methods:

1. ``supports()`` - Asks the plugin if it can support the
   ``barbican.plugin.crypto.base.PluginSupportTypes.ENCRYPT_DECRYPT``
   operation type.

2. ``bind_kek_metadata()`` - Allows a plugin to bind an internal key
   encryption key (KEK) to a project-ID, typically as a 'label' or reference
   to the actual KEK stored within the cryptographic appliance. This KEK
   information is stored into Barbican's data store on behalf of the plugin,
   and then provided back to the plugin for subsequent calls.

3. ``encrypt()`` - Asks the plugin to perform encryption of an unencrypted
   secret payload, utilizing the KEK bound to the project-ID above. Barbican
   core will then persist the encrypted data returned from this method for
   later retrieval. The name of the plugin used to perform this encryption is
   also persisted into Barbican core, to ensure we decrypt this secret only
   with this plugin.
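To illustrate this storage sequence, here is a skeletal sketch of the three
methods on a ``CryptoPluginBase`` implementor. The signatures and the
``ResponseDTO`` helper are assumptions based on this module's description and
should be verified against the base class; a complete plugin must also
implement the decryption and generation methods.

.. code-block:: python

    from barbican.plugin.crypto import base


    class TrivialCryptoPlugin(base.CryptoPluginBase):
        """Illustrative only; performs no real cryptography."""

        def get_plugin_name(self):
            return 'TrivialCryptoPlugin'

        def supports(self, type_enum, algorithm=None, bit_length=None,
                     mode=None):
            # Step 1: advertise support for encrypt/decrypt operations only.
            return type_enum == base.PluginSupportTypes.ENCRYPT_DECRYPT

        def bind_kek_metadata(self, kek_meta_dto):
            # Step 2: bind a KEK for this project; a real plugin would create
            # or look up a KEK in its backend (e.g. an HSM) and record a
            # reference to it in plugin_meta.
            kek_meta_dto.algorithm = 'aes'
            kek_meta_dto.bit_length = 256
            kek_meta_dto.mode = 'cbc'
            kek_meta_dto.plugin_meta = None
            return kek_meta_dto

        def encrypt(self, encrypt_dto, kek_meta_dto, project_id):
            # Step 3: 'encrypt' the payload with the bound KEK. This sketch
            # just passes the bytes through unchanged.
            cyphertext = encrypt_dto.unencrypted
            return base.ResponseDTO(cyphertext, None)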
**For secret decryptions and retrievals**, Barbican core will select the same plugin as was used to store the secret, and then invoke its ``decrypt()`` method, providing it both the previously-persisted encrypted secret data as well as the project-ID KEK used to encrypt the secret. **For symmetric key generation**, Barbican core calls the following methods: 1. ``supports()`` - Asks the plugin if it can support the ``barbican.plugin.crypto.base.PluginSupportTypes.SYMMETRIC_KEY_GENERATION`` operation type. 2. ``bind_kek_metadata()`` - Same comments as for secret storage above. 3. ``generate_symmetric()`` - Asks the plugin to both generate a symmetric key, and then encrypted it with the project-ID KEK. Barbican core persists this newly generated and encrypted secret similar to secret storage above. **For asymmetric key generation**, Barbican core calls the following methods: 1. ``supports()`` - Asks the plugin if it can support the ``barbican.plugin.crypto.base.PluginSupportTypes.ASYMMETRIC_KEY_GENERATION`` operation type. 2. ``bind_kek_metadata()`` - Same comments as for secret storage above. 3. ``generate_asymmetric()`` - Asks the plugin to generate and encrypt asymmetric public and private key (and optional passphrase) information, which Barbican core will persist as a container of separate encrypted secrets. barbican-6.0.0/doc/source/contributor/plugin/index.rst0000666000175100017510000000567213245511001023064 0ustar zuulzuul00000000000000========================= Plugin Developers Guide ========================= This guide describes how to develop custom plugins for use by Barbican. While Barbican provides useful plugin implementations, some OpenStack operators may require customized implementations, perhaps to interact with an existing corporate database or service. This approach also gives flexibility to operators of OpenStack clouds by allowing them to choose the right implementation for their cloud. Plugin Status ============= A Barbican plugin may be considered ``stable``, ``experimental`` or ``out-of-tree``. * A *stable* status indicates that the plugin is fully supported by the OpenStack Barbican Team * An *experimental* status indicates that we intend to support the plugin, but it may be missing features or may not be fully tested at the gate. Plugins in this status may occasionally break. * An *out-of-tree* status indicates that no formal support will be provided, and the plugin may be removed in a future release. Graduation Process ------------------ By default, new plugins proposed to be in-tree will be in the *experimental* status. To be considered *stable* a plugin must meet the following requirements: * 100% unit test coverage, including branch coverage. * Gate job that executes the functional test suite against an instance of Barbican configured to use the plugin. The gate may be a devstack gate, or a third-party gate. * Implement new features within one cycle after the new blueprint feature is approved. Demotion Process ---------------- Plugins should not stay in the *experimental* status for a long time. Plugins that stay in *experimental* for more than **two** releases are expected to move into *stable*, as described by the Graduation Process, or move into *out-of-tree*. Plugins in the *stable* status may be deprecated by the team, and moved to *out-of-tree*. Plugins that stay in the *out-of-tree* status for more than **two** releases may be removed from the tree. 
Architecture ============ Barbican's plugin architecture enables developers to create their own implementations of features such as secret storage and generation and event handling. The plugin pattern used defines an abstract class, whose methods are invoked by Barbican logic (referred to as Barbican 'core' in this guide) in a particular sequence. Typically plugins do not interact with Barbican's data model directly, so Barbican core also handles persisting any required information on the plugin's behalf. In general, Barbican core will invoke a variation of the plugin's ``supports()`` method to determine if a requested action can be implemented by the plugin. Once a supporting plugin is selected, Barbican core will invoke one or more methods on the plugin to complete the action. The links below provide further guidance on the various plugin types used by Barbican, as well as configuration and deployment options. .. toctree:: :maxdepth: 1 secret_store.rst crypto.rst barbican-6.0.0/doc/source/contributor/structure.rst0000666000175100017510000000301713245511001022506 0ustar zuulzuul00000000000000Project Structure ================= #. ``barbican/`` (Barbican-specific Python source files) #. ``api/`` (REST API related source files) #. ``controllers/`` (Pecan-based controllers handling REST-based requests) #. ``middleware/`` (Middleware business logic to process REST requests) #. ``cmd/`` (Barbican admin command source files) #. ``common/`` (Modules shared across other Barbican folders) #. ``locale/`` (Translation templates) #. ``model/`` (SQLAlchemy-based model classes) #. ``plugin/`` (Plugin related logic, interfaces and look-up management) #. ``resources.py`` (Supports interactions with plugins) #. ``crypto/`` (Hardware security module (HSM) logic and plugins) #. ``interface/`` (Certificate manager and secret store interface classes) #. (The remaining modules here are implementations of above interfaces) #. ``queue/`` (Client and server interfaces to the queue) #. ``client.py`` (Allows clients to publish tasks to queue) #. ``server.py`` (Runs the worker service, responds to enqueued tasks) #. ``tasks/`` (Worker-related controllers and implementations) #. ``tests/`` (Unit tests) #. ``bin/`` (Start-up scripts for the Barbican nodes) #. ``devstack/`` (Barbican DevStack plugin, DevStack gate configuration and Vagrantfile for installing DevStack VM) #. ``etc/barbican/`` (Configuration files) #. ``functionaltests`` (Functional Barbican tests) #. ``doc/source`` (Sphinx documentation) #. ``releasenotes`` (Barbican Release Notes) barbican-6.0.0/doc/source/contributor/database_migrations.rst0000666000175100017510000003255513245511001024457 0ustar zuulzuul00000000000000Database Migrations ==================== Database migrations are managed using the Alembic_ library. The consensus for `OpenStack and SQLAlchemy`_ is that this library is preferred over sqlalchemy-migrate. Database migrations can be performed two ways: (1) via the API startup process, and (2) via a separate script. Database migrations can be optionally enabled during the API startup process. Corollaries for this are that a new deployment should begin with only one node to avoid migration race conditions. Alternatively, the automatic update startup behavior can be disabled, forcing the use of the migration script. This latter mode is probably safer to use in production environments. Policy ------- A Barbican deployment goal is to update application and schema versions with zero downtime. 
The challenge is that at all times the database schema must be able to support two deployed application versions, so that a single migration does not break existing nodes running the previous deployment. For example, when deleting a column we would first deploy a new version that ignores the column. Once all nodes are ignoring the column, a second deployment would be made to remove the column from the database. To achieve this goal, the following rules will be observed for schema changes: 1. Do not remove columns or tables directly, but rather: a. Create a version of the application not dependent on the removed column/table b. Replace all nodes with this new application version c. Create an Alembic version file to remove the column/table d. Apply this change in production manually, or automatically with a future version of the application 2. Changing column attributes (types, names or widths) should be handled as follows: a. TODO: This Stack Overflow `Need to alter column types in production database`_ page and many others summarize the grief involved in doing these sorts of migrations b. TODO: What about old and new application versions happening simultaneously? i. Maybe have the new code perform migration to new column on each read ...similar to how a no-sql db migration would occur? 3. Transforming column attributes (ex: splitting one ``name`` column into a ``first`` and ``last`` name): a. TODO: An `Alembic example`_, but not robust for large datasets. Overview --------- *Prior to invoking any migration steps below, change to your* ``barbican`` *project's folder and activate your virtual environment per the* `Developer Guide`_. **If you are using PostgreSQL, please ensure you are using SQLAlchemy version 0.9.3 or higher, otherwise the generated version files will not be correct.** **You cannot use these migration tools and techniques with SQLite databases.** Consider taking a look at the `Alembic tutorial`_. As a brief summary: Alembic keeps track of a linked list of version files, each one applying a set of changes to the database schema that a previous version file in the linked list modified. Each version file has a unique Alembic-generated ID associated with it. Alembic generates a table in the project table space called ``alembic_version`` that keeps track of the unique ID of the last version file applied to the schema. During an update, Alembic uses this stored version ID to determine what if any follow on version files to process. Generating Change Versions --------------------------- To make schema changes, new version files need to be added to the ``barbican/model/migration/alembic_migrations/versions/`` folder. This section discusses two ways to add these files. Automatically '''''''''''''' Alembic autogenerates a new script by comparing a clean database (i.e., one without your recent changes) with any modifications you make to the Models.py or other files. This being said, automatic generation may miss changes... it is more of an 'automatic assist with expert review'. See `What does Autogenerate Detect`_ in the Alembic documentation for more details. First, you must start Barbican using a version of the code that does not include your changes, so that it creates a clean database. This example uses Barbican launched with DevStack (see `Barbican DevStack`_ wiki page for instructions). 1. Make changes to the 'barbican/model/models.py' SQLAlchemy models or checkout your branch that includes your changes using git. 2. 
Execute ``barbican-db-manage -d revision -m '' --autogenerate`` a. For example: ``barbican-db-manage -d mysql+pymysql://root:password@127.0.0.1/barbican?charset=utf8 revision -m 'Make unneeded verification columns nullable' --autogenerate`` 3. Examine the generated version file, found in ``barbican/model/migration/alembic_migrations/versions/``: a. **Verify generated update/rollback steps, especially for modifications to existing columns/tables** b. Remove autogenerated comments such as: ``### commands auto generated by Alembic - please adjust! ###`` c. **If you added new columns, follow this guidance**: 1. For non-nullable columns you will need to add default values for the records already in the table, per what you configured in the ``barbican.model.models.py`` module. You can add the ``server_default`` keyword argument for the SQLAlchemy ``Column`` call per `SQLAlchemy's server_default`_. For boolean attributes, use `server_default='0'` for False, or `server_default='1'` for True. For DateTime attributes, use `server_default=str(timeutils.utcnow())` to default to the current time. 2. If you add `any` constraint, please `always` name them in the barbican.model.models.py module, and also in the Alembic version modules when creating/dropping constraints, otherwise MySQL migrations might crash. d. **If you added new tables, follow this guidance**: 1. Make sure you added your new table to the ``MODELS`` element of the ``barbican/model/models.py`` module. 2. Note that when Barbican boots up, it will add the new table to the database. It will also try to apply the database version (that also tries to add this table) via alembic. Therefore, please edit the generated script file to add these lines: a. ``ctx = op.get_context()`` (to get the alembic migration context in current transaction) b. ``con = op.get_bind()`` (get the database connection) c. ``table_exists = ctx.dialect.has_table(con.engine, 'your-new-table-name-here')`` d. ``if not table_exists:`` e. ``...remaining create table logic here...`` *Note: For anything but trivial or brand new columns/tables, database backups and maintenance-window downtimes might be called for.* Manually ''''''''' 1. Execute: ``barbican-db-manage revision -m ""`` 2. This will generate a new file in the ``barbican/model/migration/alembic_migrations/versions/`` folder, with this sort of file format: ``_.py``. Note that only the first 20 characters of the description are used. 3. You can then edit this file per tutorial and the `Alembic Operation Reference`_ page for available operations you may make from the version files. **You must properly fill in the** ``upgrade()`` **methods.** Applying Changes ----------------- Barbican utilizes the Alembic version files as managing delta changes to the database. Therefore the first Alembic version file does **not** contain all time-zero database tables. To create the initial Barbican tables in the database, execute the Barbican application per the 'Via Application' section. Thereafter, it is suggested that only the ``barbican-db-manage`` command above be used to update the database schema per the 'Manually' section. Also, automatic database updates from the Barbican application should be disabled by adding/updating ``db_auto_create = False`` in the ``barbican.conf`` configuration file. **Note** : Before attempting any upgrade, you should make a full database backup of your production data. 
As of Kilo, database downgrades are not supported in OpenStack, and the only
method available to get back to a prior database version will be to restore
from backup.

Via Application
''''''''''''''''

The last section of the `Alembic tutorial`_ describes the process used by the
Barbican application to create and update the database table space
automatically.

By default, when the Barbican API boots up it will try to create the Barbican
database tables (using SQLAlchemy), and then try to apply the latest version
files (using Alembic). In this mode, the latest version of the Barbican
application can create a new database table space updated to the latest schema
version, or else it can update an existing database table space to the latest
schema revision (called ``head`` in the docs).

*To bypass this automatic behavior, add* ``db_auto_create = False`` *to the*
``barbican.conf`` *file*.

Manually
'''''''''

Run ``barbican-db-manage -d <db-url> upgrade -v head``, which will cause
Alembic to apply the changes found in all version files after the version
currently written in the target database, up until the latest version file in
the linked chain of files.

To upgrade to a specific version, run this command:
``barbican-db-manage -d <db-url> upgrade -v <Alembic-ID-of-version>``. The
``Alembic-ID-of-version`` is a unique ID assigned to the change, such as
``1a0c2cdafb38``.

Downgrade
'''''''''

Upgrades involve complex operations and can fail. Before attempting any
upgrade, you should make a full database backup of your production data. As of
Kilo, database downgrades are not supported, and the only method available to
get back to a prior database version will be to restore from backup.

You must complete these steps to successfully roll back your environment:

1. Roll back configuration files.

2. Restore databases from backup.

3. Roll back packages.

Rolling back upgrades is a tricky process because distributions tend to put
much more effort into testing upgrades than downgrades. Broken downgrades
often take significantly more effort to troubleshoot and resolve than broken
upgrades. Only you can weigh the risks of trying to push a failed upgrade
forward versus rolling it back. Generally, consider rolling back as the very
last option.

The backup instructions provided in the `Backup tutorial`_ ensure that you
have proper backups of your databases and configuration files. Read through
this section carefully and verify that you have the requisite backups to
restore.

**Note**: The backup tutorial reference is only updated to Juno; the DB backup
operation will be similar for Kilo. The link will be updated when the
reference has been updated. For more information and examples about the
downgrade operation, please see the `Downgrade tutorial`_.

TODO Items
-----------

1. *[Done - It works!]* Verify alembic works with the current SQLAlchemy model
   configuration in Barbican (which was borrowed from Glance).

2. *[Done - It works, I was able to add/remove columns while app was running]*
   Verify that SQLAlchemy is tolerant of schema mismatches. For example, if a
   column is added to a table schema, will this break existing deployments
   that aren't expecting this column?

3. *[Done - It works]* Add auto-migrate code to the boot up of models (see the
   ``barbican\model\repositories.py`` file).

4. *[Done - It works]* Add a guard in Barbican model logic against running
   migrations with SQLite databases.

5. Add detailed deployment steps for production, covering how new nodes are
   rolled in and old ones rolled out to complete the move to new versions.
6. *[In Progress]* Add a best-practices checklist section to this page.

   a. This would provide guidance on safely migrating schemas, do's and
      don'ts, etc.

   b. This could also provide code guidance, such as ensuring that new schema
      changes (e.g., that new column) aren't required for proper functionality
      of the previous version of the code.

   c. If a server bounce is needed, notification guidelines for the devops
      team would be spelled out here.

.. _Alembic: https://alembic.readthedocs.org/en/latest/
.. _Alembic Example: https://julo.ch/blog/migrating-content-with-alembic/
.. _Alembic Operation Reference: https://alembic.readthedocs.org/en/latest/ops.html
.. _Alembic tutorial: https://alembic.readthedocs.org/en/latest/tutorial.html
.. _Barbican DevStack: https://docs.openstack.org/barbican/latest/contributor/devstack.html
.. _Developer Guide: https://github.com/cloudkeep/barbican/wiki/Developer-Guide
.. _Need to alter column types in production database: http://stackoverflow.com/questions/5329255/need-to-alter-column-types-in-production-database-sql-server-2005
.. _OpenStack and SQLAlchemy: https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy#Migrations
.. _What does Autogenerate Detect: http://alembic.readthedocs.org/en/latest/autogenerate.html#what-does-autogenerate-detect-and-what-does-it-not-detect
.. _SQLAlchemy's server_default: http://docs.sqlalchemy.org/en/latest/core/metadata.html?highlight=column#sqlalchemy.schema.Column.params.server_default
.. _Backup tutorial: http://docs.openstack.org/openstack-ops/content/upgrade-icehouse-juno.html#upgrade-icehouse-juno-backup
.. _Downgrade tutorial: http://docs.openstack.org/openstack-ops/content/ops_upgrades-roll-back.html

barbican-6.0.0/doc/source/contributor/index.rst

=======================
Barbican for Developers
=======================

If you're new to OpenStack development, you should start by reading the
`OpenStack Developer's Guide`_.

.. _`OpenStack Developer's Guide`: https://docs.openstack.org/infra/manual/developers.html

Once you've read the OpenStack guide you'll be ready to set up a local
barbican development environment.

.. toctree::
   :maxdepth: 1

   dev.rst
   devstack.rst

When you're ready to dive deeper into barbican, take a look at:

.. toctree::
   :maxdepth: 1

   getting_involved.rst
   architecture.rst
   structure.rst
   dataflow.rst
   dependencies.rst
   database_migrations.rst
   plugin/index.rst
   testing.rst

barbican-6.0.0/doc/source/contributor/devstack.rst

Running Barbican on DevStack
============================

Barbican is currently available via the plugin interface within DevStack. We
provide two ways of deploying a DevStack environment with a running Barbican.
The easy mode uses vagrant and automatically creates the VM with all necessary
dependencies to run DevStack. It is recommended to use this process if it is
your first time.

If you are familiar with DevStack, you can use the steps in the manual setup
section to install Barbican onto your already running DevStack installation.

.. warning:: This process takes anywhere from 10-30 minutes depending on your
   internet connection.

Easy Mode
---------

To simplify the setup process of running Barbican on DevStack, there is a
Vagrantfile that will automatically set up a VM containing Barbican running on
DevStack.

.. warning:: Upon following these steps, you will not be able to use tox tools
   if you set up a shared folder.
   This is because making hard-links is required, but not permitted, if the
   project is in a shared folder. If you wish to use tox, comment out the
   `Create Synced Folder` section in
   `barbican/devstack/barbican-vagrant/Vagrantfile`.

1. Obtain the Barbican vagrant file. If you don't already have the file, then
   clone the repo below:

   .. code-block:: bash

       git clone https://github.com/openstack/barbican.git

2. Move the ``barbican-vagrant`` directory outside of the Barbican directory
   and into your current directory for vagrant files. If you do not have one,
   then just copy it into your home directory.

   .. code-block:: bash

       cp -r barbican/devstack/barbican-vagrant .

3. Get into the ``barbican-vagrant`` directory

   .. code-block:: bash

       cd barbican-vagrant

4. Create a new VM based on the cloned configuration

   .. code-block:: bash

       vagrant up

5. Once the VM has been successfully started and provisioned, ssh into the VM.

   .. code-block:: bash

       vagrant ssh

6. Once inside the VM, change your directory to the ``devstack`` folder.

   .. code-block:: bash

       cd /opt/stack/devstack/

7. Start DevStack

   .. code-block:: bash

       ./stack.sh

Manual Setup
------------

These steps assume you are running within a clean Ubuntu 14.04 virtual machine
(local or cloud instance). If you are running locally, do not forget to expose
the following ports:

#. Barbican - ``9311``
#. Keystone API - ``5000``
#. Keystone Admin API - ``35357``

Installation
^^^^^^^^^^^^

1. Make sure you are logged in as a non-root user with sudo privileges.

2. Install git

   .. code-block:: bash

       sudo apt-get install git

3. Clone DevStack

   .. code-block:: bash

       git clone https://github.com/openstack-dev/devstack.git

4. Add the Barbican plugin to the local.conf file and verify the minimum
   services required are included. You can pull down a specific branch by
   appending the name to the end of the git URL. If you leave the space empty
   like below, then origin/master will be pulled.

   .. code-block:: ini

       enable_plugin barbican https://git.openstack.org/openstack/barbican
       enable_service rabbit mysql key

   If this is your first time and you do not have a local.conf file, there is
   an example in the `Barbican github `_. Copy the file and place it in the
   devstack/ directory.

5. Start DevStack

   .. code-block:: bash

       cd devstack/
       ./stack.sh

barbican-6.0.0/doc/source/contributor/dependencies.rst

Adding/Updating Dependencies
============================

Adding a new Dependency
-----------------------

If you need to add a new dependency to Barbican, you must edit a few things:

#. Add the package name (and minimum version if applicable) to the
   requirements.txt file in the root directory.

   .. note:: All dependencies and their version specifiers must come from the
      OpenStack `global requirements`_ repository.

#. We support deployment on CentOS 6.4, so you should check CentOS + EPEL 6
   yum repos to figure out the name of the rpm package that provides the
   package you're adding. Add this package name as a dependency in
   ``rpmbuild/SPECS/barbican.spec``.

#. If there is no package available in CentOS or EPEL, or if the latest
   available package's version is lower than the minimum required version, we
   must build an rpm for it ourselves. Add a line to
   ``rpmbuild/package_dependencies.sh`` so that jenkins will build an rpm
   using fpm and upload it to the cloudkeep yum repo.

..
_`global requirements`: https://git.openstack.org/cgit/openstack/requirements/tree/global-requirements.txt barbican-6.0.0/doc/source/contributor/dev.rst0000666000175100017510000000717013245511001021230 0ustar zuulzuul00000000000000Setting up a Barbican Development Environment ============================================== These instructions are designed to help you setup a standalone version of Barbican which uses SQLite as a database backend. This is not suitable for production due to the lack of authentication and an interface to a secure encryption system such as an HSM (Hardware Security Module). In addition, the SQLite backend has known issues with thread-safety. This setup is purely to aid in development workflows. .. warning:: The default key store implementation in Barbican **is not secure** in any way. **Do not use this development standalone mode to store sensitive information!** Installing system dependencies ------------------------------ **Ubuntu 15.10:** .. code-block:: bash # Install development tools sudo apt-get install git python-tox # Install dependency build requirements sudo apt-get install libffi-dev libssl-dev python-dev gcc **Fedora 23:** .. code-block:: bash # Install development tools sudo dnf install git python-tox # Install dependency build requirements sudo dnf install gcc libffi-devel openssl-devel redhat-rpm-config Setting up a virtual environment -------------------------------- We highly recommend using virtual environments for development. You can learn more about `Virtual Environments`_ in the Python Guide. If you installed tox in the previous step you should already have virtualenv installed as well. .. _Virtual Environments: http://docs.python-guide.org/en/latest/dev/virtualenvs/ .. code-block:: bash # Clone barbican source git clone https://git.openstack.org/openstack/barbican cd barbican # Create and activate a virtual environment virtualenv .barbicanenv . .barbicanenv/bin/activate # Install barbican in development mode pip install -e $PWD Configuring Barbican -------------------- Barbican uses oslo.config for configuration. By default the api process will look for the configuration file in ``$HOME/barbican.conf`` or ``/etc/barbican/barbican.conf``. The sample configuration files included in the source code assume that you'll be using ``/etc/barbican/`` for configuration and ``/var/lib/barbican`` for the database file location. .. code-block:: bash # Create the directories and copy the config files sudo mkdir /etc/barbican sudo mkdir /var/lib/barbican sudo chown $(whoami) /etc/barbican sudo chown $(whoami) /var/lib/barbican cp -r etc/barbican /etc All the locations are configurable, so you don't have to use ``/etc`` and ``/var/lib`` in your development machine if you don't want to. Running Barbican ---------------- If you made it this far you should be able to run the barbican development server using this command: .. code-block:: bash bin/barbican-api An instance of barbican will be listening on ``http://localhost:9311``. Note that the default configuration uses the unauthenticated context. This means that requests should include the ``X-Project-Id`` header instead of including a keystone token in the ``X-Auth-Token`` header. For example: .. code-block:: bash curl -v -H 'X-Project-Id: 12345' \ -H 'Accept: application/json' \ http://localhost:9311/v1/secrets For more information on configuring Barbican with Keystone auth see the :doc:`Keystone Configuration ` page. 
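To verify that secret storage also works end to end in this unauthenticated
mode, you can store a small secret with a one-step POST (the project ID and
payload below are just example values):

.. code-block:: bash

    curl -v -X POST -H 'X-Project-Id: 12345' \
         -H 'Content-Type: application/json' \
         -d '{"payload": "my-secret-data", "payload_content_type": "text/plain"}' \
         http://localhost:9311/v1/secrets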
Building the Documentation -------------------------- You can build the html developer documentation using tox: .. code-block:: bash tox -e docs Running the Unit Tests ---------------------- You can run the unit test suite using tox: .. code-block:: bash tox -e py27 barbican-6.0.0/doc/source/contributor/architecture.rst0000666000175100017510000000722313245511001023133 0ustar zuulzuul00000000000000Architecture ============ This document describes the architecture and technology selections for Barbican. In general, a goal is to utilize the OpenStack architecture and technology selections as much as possible. An overall architecture is presented first, followed by technology selection details to implement the system. Overall Architecture -------------------- The next figure presents an overall logical diagram for Barbican. .. image:: ./../images/barbican-overall-architecture.gif The API node(s) handle incoming REST requests to Barbican. These nodes can interact with the database directly if the request can be completed synchronously (such as for GET requests), otherwise the queue supports asynchronous processing by worker nodes. The latter could include interactions with third parties such as certificate authorities. As implied in the diagram, the architecture supports multiple API and worker nodes being added to or removed from the network, to support advanced features such as auto scaling. Eventually, the database could be replicated across data centers supporting region-agnostic storage and retrieval of secured information, albeit with lags possible during data synchronization. Technology Selection -------------------- In general, components from the `Oslo `_ commons project are used within Barbican, such as config, messaging and logging. The next figure examines the components within Barbican. .. image:: ./../images/barbican-components.gif Several potential clients of the Barbican REST interface are noted, including `Castellan `_, which presents a generic key management interface for other OpenStack projects with Barbican as an available plugin. The API node noted in the previous section is a WSGI server. Similar to OpenStack projects such as `Glance `_, it utilizes paste to support configurable middleware, such as middleware to interface with `Keystone `_ for authentication and authorization services. `Pecan `_ (a lean Python web framework inspired by CherryPy, TurboGears, and Pylons) is utilized to map resources to REST routes. These resources contain the controller business logic for Barbican and can interface with encryption/decryption processes (via crypto components), the datastore (via repository components) and asynchronous tasks (via queue components). The crypto components provide a means to encrypt and decrypt information that accommodates a variety of encryption mechanisms and cryptographic backends (such as key management interoperability protocol (KMIP) or hardware security module (HSM)) via a plugin interface. The repository components provide an interface and database session context for the datastore, with model components representing entities such as Secrets (used to store encrypted information such as data encryption keys). `SQLAlchemy `_ is used as the object relational mapping (ORM) layer to the database, including `MySQL `_ and `PostgreSQL `_. For asynchronous processing, `Oslo Messaging `_ is used to interact with the queue, including `RabbitMQ `_. The worker node processes tasks from the queue.
Task components are similar to API resources in that they implement business logic and also interface with the datastore and follow-on asynchronous tasks as needed. These asynchronous tasks can interface with external systems, such as certificate authorities for SSL/TLS certificate processing. barbican-6.0.0/doc/source/contributor/dataflow.rst0000666000175100017510000000733313245511001022254 0ustar zuulzuul00000000000000Dataflow ======== Bootup flow when the Barbican API service begins ------------------------------------------------ This is the sequence of calls for booting up the Barbican API server: #. ``bin/barbican.sh start``: Launches a WSGI service that performs a PasteDeploy process, invoking the middleware components found in ``barbican/api/middleware`` as configured in ``etc/barbican/barbican-api-paste.ini``. The middleware components invoke and then execute the Pecan application created via ``barbican/api/app.py:create_main_app()``, which also defines the controllers (defined in ``barbican/api/controllers/``) used to process requested URI routes. Typical flow when the Barbican API executes ------------------------------------------- For **synchronous** calls, the following sequence is generally followed: #. A client sends an HTTP REST request to the Barbican API server. #. The WSGI server and routing invokes a method on one of the ``XxxxController`` classes in ``barbican/api/controllers/xxxx.py``, keyed to an HTTP verb (so one of POST, GET, DELETE, or PUT). #. Example - GET /secrets: #. In ``barbican/api/controllers/secrets.py``, the ``SecretController``'s ``on_get()`` is invoked. #. A ``SecretRepo`` repository class (found in ``barbican/model/repositories.py``) is then used to retrieve the entity of interest, in this case as a ``Secret`` entity defined in ``barbican/model/models.py``. #. The payload is decrypted as needed, via ``barbican/plugin/resources.py``'s ``get_secret()`` function. #. A response JSON is formed and returned to the client. For **asynchronous** calls, the following sequence is generally followed: #. A client sends an HTTP REST request to the Barbican API server. #. The WSGI server and routing again invokes a method on one of the ``XxxxController`` classes in ``barbican/api/controllers/``. #. A remote procedure call (RPC) task is enqueued for later processing by a worker node. #. Example - POST /orders: #. In ``barbican/api/controllers/orders.py``, the ``OrdersController``'s ``on_post()`` is invoked. #. The ``OrderRepo`` repository class (found in ``barbican/model/repositories.py``) is then used to create the ``barbican/model/models.py``'s ``Order`` entity in a 'PENDING' state. #. The Queue API's ``process_type_order()`` method on the ``TaskClient`` class (found in ``barbican/queue/client.py``) is invoked to send a message to the queue for asynchronous processing. #. A response JSON is formed and returned to the client. #. The Queue service receives the message sent above, invoking a corresponding method on ``barbican/queue/server.py``'s ``Tasks`` class. This method then invokes the ``process_and_suppress_exceptions()`` method on one of the ``barbican/tasks/resources.py``'s ``BaseTask`` implementors. This method can then utilize repository classes as needed to retrieve and update entities. It may also interface with third party systems via plugins. The ``barbican/queue/client.py``'s ``TaskClient`` class above may also be invoked from a worker node for follow-on asynchronous processing steps. #. Example - POST /orders (continued): #.
Continuing the example above, the queue would invoke the ``process_type_order()`` method on ``barbican/queue/server.py``'s ``Tasks`` class. Note the method is named the same as the ``TaskClient`` method above by convention. #. This method then invokes ``process_and_suppress_exceptions()`` on the ``barbican/tasks/resources.py``'s ``BeginTypeOrder`` class. This class is responsible for processing all newly-POST-ed orders. barbican-6.0.0/doc/source/images/0000775000175100017510000000000013245511177016622 5ustar zuulzuul00000000000000barbican-6.0.0/doc/source/images/barbican-components.gif0000666000175100017510000015452113245511001023231 0ustar zuulzuul00000000000000[binary GIF image data omitted: barbican-components.gif, the component diagram referenced by architecture.rst] barbican-6.0.0/doc/source/images/barbican-overall-architecture.gif0000666000175100017510000006132113245511001025183 0ustar zuulzuul00000000000000[binary GIF image data omitted: barbican-overall-architecture.gif, the overall architecture diagram referenced by architecture.rst] barbican-6.0.0/doc/source/index.rst0000666000175100017510000000170413245511001017204 0ustar zuulzuul00000000000000Welcome to Barbican's developer documentation! ============================================== Barbican is the OpenStack Key Manager service. It provides secure storage, provisioning and management of secret data. This includes keying material such as Symmetric Keys, Asymmetric Keys, Certificates and raw binary data. ..
toctree:: :maxdepth: 2 admin/index install/index configuration/index contributor/index API Reference ============= If you're trying to learn how to use barbican, you can start by reading about `Secrets in the Barbican API Guide `__. Once you're comfortable working with secrets you can dig into the rest of the API. .. toctree:: :maxdepth: 1 api/index.rst Sample Files ============ .. toctree:: :maxdepth: 1 sample_config sample_policy Indices and tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search` barbican-6.0.0/doc/source/_extra/0000775000175100017510000000000013245511177016637 5ustar zuulzuul00000000000000barbican-6.0.0/doc/source/_extra/.htaccess0000666000175100017510000000422213245511001020421 0ustar zuulzuul00000000000000redirectmatch 301 ^/barbican/([^/]+)/admin-guide-cloud/access_control.html$ /barbican/$1/admin/access_control.html redirectmatch 301 ^/barbican/([^/]+)/admin-guide-cloud/barbican_manage.html$ /barbican/$1/admin/barbican_manage.html redirectmatch 301 ^/barbican/([^/]+)/admin-guide-cloud/database_cleaning.html$ /barbican/$1/admin/database_cleaning.html redirectmatch 301 ^/barbican/([^/]+)/admin-guide-cloud/index.html$ /barbican/$1/admin/index.html redirectmatch 301 ^/barbican/([^/]+)/upgrade/index.html$ /barbican/$1/admin/upgrade.html redirectmatch 301 ^/barbican/([^/]+)/setup/audit.html$ /barbican/$1/configuration/audit.html redirectmatch 301 ^/barbican/([^/]+)/setup/index.html$ /barbican/$1/configuration/index.html redirectmatch 301 ^/barbican/([^/]+)/setup/keystone.html$ /barbican/$1/configuration/keystone.html redirectmatch 301 ^/barbican/([^/]+)/setup/noauth.html$ /barbican/$1/configuration/noauth.html redirectmatch 301 ^/barbican/([^/]+)/setup/plugin_backends.html$ /barbican/$1/configuration/plugin_backends.html redirectmatch 301 ^/barbican/([^/]+)/setup/troubleshooting.html$ /barbican/$1/configuration/troubleshooting.html redirectmatch 301 ^/barbican/([^/]+)/contribute/architecture.html$ /barbican/$1/contributor/architecture.html redirectmatch 301 ^/barbican/([^/]+)/contribute/database_migration.html$ /barbican/$1/contributor/database_migration.html redirectmatch 301 ^/barbican/([^/]+)/contribute/dataflow.html$ /barbican/$1/contributor/dataflow.html redirectmatch 301 ^/barbican/([^/]+)/contribute/dependencies.html$ /barbican/$1/contributor/dependencies.html redirectmatch 301 ^/barbican/([^/]+)/setup/dev.html$ /barbican/$1/contributor/dev.html redirectmatch 301 ^/barbican/([^/]+)/setup/devstack.html$ /barbican/$1/contributor/devstack.html redirectmatch 301 ^/barbican/([^/]+)/contribute/getting_involved.html$ /barbican/$1/contributor/getting_involved.html redirectmatch 301 ^/barbican/([^/]+)/plugin/crypto.html$ /barbican/$1/contributor/plugin/crypto.html redirectmatch 301 ^/barbican/([^/]+)/plugin/index.html$ /barbican/$1/contributor/plugin/index.html redirectmatch 301 ^/barbican/([^/]+)/plugin/secret_store.html$ /barbican/$1/contributor/plugin/secret_store.html redirectmatch 301 ^/barbican/([^/]+)/contribute/structure.html$ /barbican/$1/contributor/structure.html redirectmatch 301 ^/barbican/([^/]+)/testing.html$ /barbican/$1/contributor/testing.html barbican-6.0.0/doc/source/install/0000775000175100017510000000000013245511177017023 5ustar zuulzuul00000000000000barbican-6.0.0/doc/source/install/install-rdo.rst0000666000175100017510000000272313245511001021775 0ustar zuulzuul00000000000000.. _install-rdo: Install and configure for Red Hat Enterprise Linux and CentOS ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This section describes how to install and configure the Key Manager service for Red Hat Enterprise Linux 7 and CentOS 7. ..
include:: common_prerequisites.rst Install and configure components -------------------------------- #. Install the packages: .. code-block:: console # yum install openstack-barbican-api .. include:: common_configure.rst Finalize installation --------------------- #. Create the ``/etc/httpd/conf.d/wsgi-barbican.conf`` file with the following content: .. code-block:: apache <VirtualHost [::1]:9311> ServerName controller ## Logging ErrorLog "/var/log/httpd/barbican_wsgi_main_error_ssl.log" LogLevel debug ServerSignature Off CustomLog "/var/log/httpd/barbican_wsgi_main_access_ssl.log" combined WSGIApplicationGroup %{GLOBAL} WSGIDaemonProcess barbican-api display-name=barbican-api group=barbican processes=2 threads=8 user=barbican WSGIProcessGroup barbican-api WSGIScriptAlias / "/usr/lib/python2.7/site-packages/barbican/api/app.wsgi" WSGIPassAuthorization On </VirtualHost> #. Start the Apache HTTP service and configure it to start when the system boots: .. code-block:: console # systemctl enable httpd.service # systemctl start httpd.service barbican-6.0.0/doc/source/install/common_configure.rst0000666000175100017510000000445613245511001023103 0ustar zuulzuul000000000000002. Edit the ``/etc/barbican/barbican.conf`` file and complete the following actions: * In the ``[DEFAULT]`` section, configure database access: .. code-block:: none [DEFAULT] ... sql_connection = mysql+pymysql://barbican:BARBICAN_DBPASS@controller/barbican Replace ``BARBICAN_DBPASS`` with the password you chose for the Key Manager service database. * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` message queue access: .. code-block:: ini [DEFAULT] ... transport_url = rabbit://openstack:RABBIT_PASS@controller Replace ``RABBIT_PASS`` with the password you chose for the ``openstack`` account in ``RabbitMQ``. * In the ``[keystone_authtoken]`` section, configure Identity service access: .. code-block:: ini [keystone_authtoken] ... auth_uri = http://controller:5000 auth_url = http://controller:35357 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = barbican password = BARBICAN_PASS Replace ``BARBICAN_PASS`` with the password you chose for the ``barbican`` user in the Identity service. .. note:: Comment out or remove any other options in the ``[keystone_authtoken]`` section. #. Populate the Key Manager service database: The Key Manager service database will be automatically populated when the service is first started. To prevent this and run the database sync manually, edit the ``/etc/barbican/barbican.conf`` file and set ``db_auto_create`` in the ``[DEFAULT]`` section to False. Then populate the database as below: .. code-block:: console $ su -s /bin/sh -c "barbican-manage db upgrade" barbican .. note:: Ignore any deprecation messages in this output. #. Barbican has a plugin architecture which allows the deployer to store secrets in a number of different back-end secret stores. By default, Barbican is configured to store secrets in a basic file-based keystore. This key store is NOT safe for production use. For a list of supported plugins and detailed instructions on how to configure them, see :ref:`barbican_backend` barbican-6.0.0/doc/source/install/install.rst0000666000175100017510000000107413245511001021211 0ustar zuulzuul00000000000000.. _install: Install and configure ~~~~~~~~~~~~~~~~~~~~~ This section describes how to install and configure the Key Manager service, code-named barbican, on the controller node.
This section assumes that you already have a working OpenStack environment with at least the Identity Service (keystone) installed. For simplicity, this configuration stores secrets on the local file system. Note that installation and configuration vary by distribution. .. toctree:: :maxdepth: 2 install-obs.rst install-rdo.rst install-ubuntu.rst barbican-backend.rst barbican-6.0.0/doc/source/install/get_started.rst0000666000175100017510000000063213245511001022047 0ustar zuulzuul00000000000000============================ Key Manager service overview ============================ The Key Manager service provides secure storage, provisioning and management of secrets, such as passwords, encryption keys, etc. The Key Manager service consists of the following components: ``barbican-api`` service Provides an OpenStack-native RESTful API that supports provisioning and managing Barbican secrets. barbican-6.0.0/doc/source/install/install-ubuntu.rst0000666000175100017510000000117613245511001022534 0ustar zuulzuul00000000000000.. _install-ubuntu: Install and configure for Ubuntu ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This section describes how to install and configure the Key Manager service for Ubuntu 14.04 (LTS). .. include:: common_prerequisites.rst Install and configure components -------------------------------- #. Install the packages: .. code-block:: console # apt-get update # apt-get install barbican-api barbican-keystone-listener barbican-worker .. include:: common_configure.rst Finalize installation --------------------- Restart the Key Manager services: .. code-block:: console # service barbican-api restart # service barbican-keystone-listener restart # service barbican-worker restart barbican-6.0.0/doc/source/install/verify.rst0000666000175100017510000001011113245511001021042 0ustar zuulzuul00000000000000.. _verify: Verify operation ~~~~~~~~~~~~~~~~ Verify operation of the Key Manager (barbican) service. .. note:: Perform these commands on the controller node. #. Install python-barbicanclient package: * For openSUSE and SUSE Linux Enterprise: .. code-block:: console $ zypper install python-barbicanclient * For Red Hat Enterprise Linux and CentOS: .. code-block:: console $ yum install python-barbicanclient * For Ubuntu: .. code-block:: console $ apt-get install python-barbicanclient #. Source the ``admin`` credentials to be able to perform Barbican API calls: .. code-block:: console $ . admin-openrc #. Use the OpenStack CLI to store a secret: .. code-block:: console $ openstack secret store --name mysecret --payload j4=]d21 +---------------+-----------------------------------------------------------------------+ | Field | Value | +---------------+-----------------------------------------------------------------------+ | Secret href | http://10.0.2.15:9311/v1/secrets/655d7d30-c11a-49d9-a0f1-34cdf53a36fa | | Name | mysecret | | Created | None | | Status | None | | Content types | None | | Algorithm | aes | | Bit length | 256 | | Secret type | opaque | | Mode | cbc | | Expiration | None | +---------------+-----------------------------------------------------------------------+ #. Confirm that the secret was stored by retrieving it: ..
code-block:: console $ openstack secret get http://10.0.2.15:9311/v1/secrets/655d7d30-c11a-49d9-a0f1-34cdf53a36fa +---------------+-----------------------------------------------------------------------+ | Field | Value | +---------------+-----------------------------------------------------------------------+ | Secret href | http://10.0.2.15:9311/v1/secrets/655d7d30-c11a-49d9-a0f1-34cdf53a36fa | | Name | mysecret | | Created | 2016-08-16 16:04:10+00:00 | | Status | ACTIVE | | Content types | {u'default': u'application/octet-stream'} | | Algorithm | aes | | Bit length | 256 | | Secret type | opaque | | Mode | cbc | | Expiration | None | +---------------+-----------------------------------------------------------------------+ .. note:: Some items are populated after the secret has been created and will only display when retrieving it. #. Confirm that the secret payload was stored by retrieving it: .. code-block:: console $ openstack secret get http://10.0.2.15:9311/v1/secrets/655d7d30-c11a-49d9-a0f1-34cdf53a36fa --payload +---------+---------+ | Field | Value | +---------+---------+ | Payload | j4=]d21 | +---------+---------+ barbican-6.0.0/doc/source/install/barbican-backend.rst0000666000175100017510000001404413245511001022672 0ustar zuulzuul00000000000000.. _barbican_backend: Secret Store Back-ends ~~~~~~~~~~~~~~~~~~~~~~ The Key Manager service has a plugin architecture that allows the deployer to store secrets in one or more secret stores. Secret stores can be software-based such as a software token, or hardware devices such as a hardware security module (HSM). This section describes the plugins that are currently available and how they might be configured. Crypto Plugins -------------- These types of plugins store secrets as encrypted blobs within the Barbican database. The plugin is invoked to encrypt the secret on secret storage, and decrypt the secret on secret retrieval. To enable these plugins, add ``store_crypto`` to the list of enabled secret store plugins in the ``[secretstore]`` section of ``/etc/barbican/barbican.conf``: .. code-block:: ini [secretstore] namespace = barbican.secretstore.plugin enabled_secretstore_plugins = store_crypto There are two flavors of storage plugins currently available: the Simple Crypto plugin and the PKCS#11 crypto plugin. Simple Crypto Plugin ^^^^^^^^^^^^^^^^^^^^ This crypto plugin is configured by default in ``/etc/barbican/barbican.conf``. This plugin is completely insecure and is only suitable for development and testing. This plugin uses a single symmetric key (KEK, or 'key encryption key'), which is stored in plain text in the ``/etc/barbican/barbican.conf`` file, to encrypt and decrypt all secrets. The configuration for this plugin in ``/etc/barbican/barbican.conf`` is as follows: .. code-block:: ini # ================= Secret Store Plugin =================== [secretstore] .. enabled_secretstore_plugins = store_crypto # ================= Crypto plugin =================== [crypto] .. enabled_crypto_plugins = simple_crypto [simple_crypto_plugin] # the kek should be a 32-byte value which is base64 encoded kek = 'YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY=' PKCS#11 Crypto Plugin ^^^^^^^^^^^^^^^^^^^^^ This crypto plugin can be used to interface with a Hardware Security Module (HSM) using the PKCS#11 protocol. Secrets are encrypted (and decrypted on retrieval) by a project-specific Key Encryption Key (KEK), which resides in the HSM.
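Note that the plugin expects a master KEK and HMAC key to already exist in the HSM. As a rough sketch, and assuming the example labels and lengths shown in the configuration below, they could be provisioned with the ``barbican-manage`` utility documented later in this guide (the PKCS#11 library path, slot id and login can be supplied through the utility's optional flags): .. code-block:: console $ barbican-manage hsm gen_mkek --label 'an_mkek' --length 32 $ barbican-manage hsm gen_hmac --label 'my_hmac_label' --length 32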
The configuration for this plugin in ``/etc/barbican/barbican.conf`` with settings shown for use with a SafeNet HSM is as follows: .. code-block:: ini # ================= Secret Store Plugin =================== [secretstore] .. enabled_secretstore_plugins = store_crypto [p11_crypto_plugin] # Path to vendor PKCS11 library library_path = '/usr/lib/libCryptoki2_64.so' # Password to login to PKCS11 session login = 'mypassword' # Label to identify master KEK in the HSM (must not be the same as HMAC label) mkek_label = 'an_mkek' # Length in bytes of master KEK mkek_length = 32 # Label to identify HMAC key in the HSM (must not be the same as MKEK label) hmac_label = 'my_hmac_label' # HSM Slot id (Should correspond to a configured PKCS11 slot). Default: 1 # slot_id = 1 # Enable Read/Write session with the HSM? # rw_session = True # Length of Project KEKs to create # pkek_length = 32 # How long to cache unwrapped Project KEKs # pkek_cache_ttl = 900 # Max number of items in pkek cache # pkek_cache_limit = 100 KMIP Plugin ----------- This secret store plugin is used to communicate with a KMIP device. The secret is securely stored in the KMIP device directly, rather than in the Barbican database. The Barbican database maintains a reference to the secret's location for later retrieval. The plugin can be configured to authenticate to the KMIP device using either a username and password, or using a client certificate. The configuration for this plugin in ``/etc/barbican/barbican.conf`` is as follows: .. code-block:: ini [secretstore] .. enabled_secretstore_plugins = kmip_crypto [kmip_plugin] username = 'admin' password = 'password' host = localhost port = 5696 keyfile = '/path/to/certs/cert.key' certfile = '/path/to/certs/cert.crt' ca_certs = '/path/to/certs/LocalCA.crt' Dogtag Plugin ------------- Dogtag is the upstream project corresponding to the Red Hat Certificate System, a robust, full-featured PKI solution that contains a Certificate Manager (CA) and a Key Recovery Authority (KRA) which is used to securely store secrets. The KRA stores secrets as encrypted blobs in its internal database, with the master encryption keys being stored either in a software-based NSS security database, or in a Hardware Security Module (HSM). Note that the software-based NSS database configuration provides a secure option for those deployments that do not require or cannot afford an HSM. This is the only current plugin to provide this option. The KRA communicates with HSMs using PKCS#11. For a list of certified HSMs, see the latest `release notes `_. Dogtag and the KRA meet all the relevant Common Criteria and FIPS specifications. The KRA is a component of FreeIPA. Therefore, it is possible to configure the plugin with a FreeIPA server. More detailed instructions on how to set up Barbican with FreeIPA are provided `here `_. The plugin communicates with the KRA using a client certificate for a trusted KRA agent. That certificate is stored in an NSS database as well as a PEM file as seen in the configuration below. The configuration for this plugin in ``/etc/barbican/barbican.conf`` is as follows: .. code-block:: ini [secretstore] .. 
enabled_secretstore_plugins = dogtag_crypto [dogtag_plugin] pem_path = '/etc/barbican/kra_admin_cert.pem' dogtag_host = localhost dogtag_port = 8443 nss_db_path = '/etc/barbican/alias' nss_password = 'password123' barbican-6.0.0/doc/source/install/common_prerequisites.rst0000666000175100017510000000430213245511001024014 0ustar zuulzuul00000000000000Prerequisites ------------- Before you install and configure the Key Manager service, you must create a database, service credentials, and API endpoints. #. To create the database, complete these steps: * Use the database access client to connect to the database server as the ``root`` user: .. code-block:: console $ mysql -u root -p * Create the ``barbican`` database: .. code-block:: console CREATE DATABASE barbican; * Grant proper access to the ``barbican`` database: .. code-block:: console GRANT ALL PRIVILEGES ON barbican.* TO 'barbican'@'localhost' \ IDENTIFIED BY 'BARBICAN_DBPASS'; GRANT ALL PRIVILEGES ON barbican.* TO 'barbican'@'%' \ IDENTIFIED BY 'BARBICAN_DBPASS'; Replace ``BARBICAN_DBPASS`` with a suitable password. * Exit the database access client. .. code-block:: console exit; #. Source the ``admin`` credentials to gain access to admin-only CLI commands: .. code-block:: console $ source admin-openrc #. To create the service credentials, complete these steps: * Create the ``barbican`` user: .. code-block:: console $ openstack user create --domain default --password-prompt barbican * Add the ``admin`` role to the ``barbican`` user: .. code-block:: console $ openstack role add --project service --user barbican admin * Create the ``creator`` role: .. code-block:: console $ openstack role create creator * Add the ``creator`` role to the ``barbican`` user: .. code-block:: console $ openstack role add --project service --user barbican creator * Create the barbican service entities: .. code-block:: console $ openstack service create --name barbican --description "Key Manager" key-manager #. Create the Key Manager service API endpoints: .. code-block:: console $ openstack endpoint create --region RegionOne \ key-manager public http://controller:9311 $ openstack endpoint create --region RegionOne \ key-manager internal http://controller:9311 $ openstack endpoint create --region RegionOne \ key-manager admin http://controller:9311 barbican-6.0.0/doc/source/install/next-steps.rst0000666000175100017510000000025313245511001021653 0ustar zuulzuul00000000000000.. _next-steps: Next steps ~~~~~~~~~~ Your OpenStack environment now includes the barbican service. To add additional services, see docs.openstack.org/install-guide. barbican-6.0.0/doc/source/install/install-obs.rst0000666000175100017510000000175113245511001021774 0ustar zuulzuul00000000000000.. _install-obs: Install and configure for openSUSE and SUSE Linux Enterprise ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This section describes how to install and configure the Key Manager service for openSUSE Leap 42.2 and SUSE Linux Enterprise Server 12 SP2. .. include:: common_prerequisites.rst Install and configure components -------------------------------- #. Install the packages: .. code-block:: console # zypper install openstack-barbican-api openstack-barbican-keystone-listener openstack-barbican-worker .. include:: common_configure.rst Finalize installation --------------------- #. Copy the sample Apache vhost file into place: .. code-block:: console # cp /etc/apache2/conf.d/barbican-api.conf.sample /etc/apache2/vhosts.d/barbican-api.conf #.
Start the Apache HTTP service and configure it to start when the system boots: .. code-block:: console # systemctl enable apache2.service # systemctl start apache2.service barbican-6.0.0/doc/source/install/index.rst0000666000175100017510000000100513245511001020654 0ustar zuulzuul00000000000000===================== Key Manager service ===================== .. toctree:: :maxdepth: 2 get_started.rst install.rst verify.rst next-steps.rst The Key Manager service (barbican) provides secure storage, provisioning and management of secret data. This includes keying material such as symmetric keys, asymmetric keys, certificates and raw binary data. This chapter assumes a working setup of OpenStack following the `OpenStack Installation Tutorial `_. barbican-6.0.0/doc/source/admin/0000775000175100017510000000000013245511177016445 5ustar zuulzuul00000000000000barbican-6.0.0/doc/source/admin/upgrade.rst0000666000175100017510000000430113245511001020610 0ustar zuulzuul00000000000000================================= Key Manager Service Upgrade Guide ================================= This document outlines several steps and notes for operators to reference when upgrading their barbican from previous versions of OpenStack. Plan to Upgrade =============== * The `release notes `_ should be read carefully before upgrading the barbican services. Starting with the Mitaka release, specific upgrade steps and considerations are well-documented in the release notes. * Upgrades are only supported between sequential releases. * When upgrading barbican, the following steps should be followed: #. Destroy all barbican services #. Upgrade source code to the next release #. Upgrade barbican database to the next release .. code-block:: bash barbican-db-manage upgrade #. Start barbican services Upgrade from Newton to Ocata ============================ The barbican-api-paste.ini configuration file for the paste pipeline was updated to add the http_proxy_to_wsgi middleware. It can be used to help barbican respond with the correct URL refs when it's put behind a TLS proxy (such as HAProxy). This middleware is disabled by default, but can be enabled via a configuration option in the oslo_middleware group. See `Ocata release notes `_. Upgrade from Mitaka to Newton ============================= There are no extra instructions that should be noted for this upgrade. See `Newton release notes `_. Upgrade from Liberty to Mitaka ============================== The Metadata API requires an update to the Database Schema. Existing deployments that are being upgraded to Mitaka should use the 'barbican-manage' utility to update the schema. If you are upgrading from a previous version of barbican that uses the PKCS#11 Cryptographic Plugin driver, you will need to run the migration script. .. code-block:: bash python barbican/cmd/pkcs11_migrate_kek_signatures.py See `Mitaka release notes `_. barbican-6.0.0/doc/source/admin/database_cleaning.rst0000666000175100017510000000561513245511001022576 0ustar zuulzuul00000000000000Database Cleaning ================= Entries in the Barbican database are soft deleted and can build up over time. These entries can be cleaned up with the clean up command. The command can be used with a cron job to clean the database automatically at intervals. Commands -------- The command ```barbican-manage db clean``` can be used to clean up the database.
By default, it will remove soft deletions that are at least 90 days old since deletion. ```barbican-manage db clean --min-days 180``` (```-m```) will go through the database and remove soft deleted entries that are at least 180 days old since deletion. The default value is 90 days. Passing a value of ```--min-days 0``` will delete all soft-deleted entries up to today. ```barbican-manage db clean --clean-unassociated-projects``` (```-p```) will go through the database and remove projects that have no associated resources. The default value is False. ```barbican-manage db clean --soft-delete-expired-secrets``` (```-e```) will go through the database and soft delete any secrets that are past their expiration date. The default value is False. If ```-e``` is used along with ```--min-days 0``` then all the expired secrets will be hard deleted. ```barbican-manage db clean --verbose``` (```-V```) will print more information out into the terminal. ```barbican-manage db clean --log-file``` (```-L```) will set the log file location. The creation of the log may fail if the user running the command does not have access to the log file location or if the target directory does not exist. The default value for log_file can be found in ```/etc/barbican/barbican.conf```. The log will contain the verbose output from the command. Cron Job -------- A cron job can be created on Linux systems to run at a given interval to clean the barbican database. Crontab ''''''' 1. Start the crontab editor ```crontab -e``` with the user that runs the clean up command. 2. Edit the crontab section to run the command at a given interval, using the standard cron time fields: ```<minute> <hour> <day of month> <month> <day of week> clean up command``` Crontab Examples '''''''''''''''' ```00 00 * * * barbican-manage db clean -p -e``` -Runs a job every day at midnight which will remove soft deleted entries that are at least 90 days old since soft deletion, will clean unassociated projects, and will soft delete secrets that are expired. ```00 03 01 * * barbican-manage db clean -m 30``` -Runs a job every month at 3AM which will remove soft deleted entries that are at least 30 days old since deletion. ```05 01 07 * 6 barbican-manage db clean -m 180 -p -e -L /tmp/barbican-clean-command.log``` -Runs a job every month at 1:05AM on the 7th day of the month and every Saturday. Entries that are 180 days old since soft deletion will be removed from the database. Unassociated projects will be removed. Expired secrets will be soft deleted. The log file will be saved to ```/tmp/barbican-clean-command.log``` barbican-6.0.0/doc/source/admin/barbican_manage.rst0000666000175100017510000000626113245511001022241 0ustar zuulzuul00000000000000=================================== Barbican Service Management Utility =================================== Description =========== ``barbican-manage`` is a utility that is used to control the barbican key manager service database and Hardware Security Module (HSM) plugin device. Use cases include migrating the secret database or generating a Master Key Encryption Key (MKEK) in the HSM. This command set should only be executed by a user with admin privileges. Options ======= The standard pattern for executing a barbican-manage command is: ``barbican-manage <category> <command> [<args>]`` Running ``barbican-manage`` without arguments shows a list of available command categories. Currently, there are 2 supported categories: *db* and *hsm*.
Running with a category argument shows a list of commands in that category: * ``barbican-manage db --help`` * ``barbican-manage hsm --help`` * ``barbican-manage --version`` shows the version number of the barbican service. The following sections describe the available categories and arguments for barbican-manage. Barbican Database ~~~~~~~~~~~~~~~~~ .. Warning:: Before executing **barbican-manage db** commands, make sure you are familiar with `Database Migration`_ first. ``barbican-manage db revision [--db-url] [--message] [--autogenerate]`` Create a new database version file. ``barbican-manage db upgrade [--db-url] [--version]`` Upgrade the database to a future version. ``barbican-manage db history [--db-url] [--verbose]`` Show database changeset history. ``barbican-manage db current [--db-url] [--verbose]`` Show the current revision of the database. ``barbican-manage db clean [--db-url] [--verbose] [--min-days] [--clean-unassociated-projects] [--soft-delete-expired-secrets] [--log-file]`` Clean up soft deletions in the database. More documentation can be found here: :doc:`Database Cleaning <database_cleaning>` ``barbican-manage db sync_secret_stores [--db-url] [--verbose] [--log-file]`` Synchronize the secret_store database table with the configuration in barbican.conf. This is useful when the multiple secret store backends feature is enabled and new backends have been added to the configuration. Barbican PKCS11/HSM ~~~~~~~~~~~~~~~~~~~ ``barbican-manage hsm gen_mkek [--library-path] [--passphrase] [--slot-id] [--label] [--length]`` Create a new Master Key Encryption Key in the HSM. This MKEK will be used to encrypt all project key encryption keys. Its label must be unique. ``barbican-manage hsm gen_hmac [--library-path] [--passphrase] [--slot-id] [--label] [--length]`` Create a new Master HMAC key in the HSM. This HMAC key will be used to generate authentication tags for encrypted project key encryption keys. Its label must be unique. ``barbican-manage hsm rewrap_pkek [--dry-run]`` Rewrap project key encryption keys after rotating to new MKEK and/or HMAC key(s) in the HSM. The new MKEK and HMAC key should have already been generated using the above commands. The user will have to configure the new MKEK and HMAC key labels in /etc/barbican.conf and restart the barbican server before executing this command. .. _Database Migration: https://docs.openstack.org/barbican/latest/contributor/database_migrations.html barbican-6.0.0/doc/source/admin/index.rst0000666000175100017510000000066613245511001020302 0ustar zuulzuul00000000000000=============================================== Cloud Administrator Guide - Key Manager service =============================================== The Key Manager service, code-named Barbican, is the default secret storage service for OpenStack. The service provides secure storage, provisioning and management of secrets. .. toctree:: :maxdepth: 1 access_control.rst barbican_manage.rst database_cleaning.rst upgrade.rst barbican-6.0.0/doc/source/admin/access_control.rst0000666000175100017510000000630113245511001022164 0ustar zuulzuul00000000000000============== Access Control ============== Role Based Access Control (RBAC) -------------------------------- Like many other services, the Key Manager service supports the protection of its APIs by enforcing policy rules defined in a policy file. The Key Manager service stores a reference to a policy JSON file in its configuration file, :file:`/etc/barbican/barbican.conf`. Typically this file is named ``policy.json`` and it is stored in :file:`/etc/barbican/policy.json`.
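If a deployment keeps the policy file in a non-default location, it can be pointed to through the standard oslo.policy options in :file:`/etc/barbican/barbican.conf` (a minimal sketch assuming the usual ``[oslo_policy]`` option names; adjust the path for your deployment):

.. code-block:: ini

    [oslo_policy]
    # Relative paths are resolved against the configuration directory.
    policy_file = policy.json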
Each Key Manager API call has a line in the policy file that dictates which level of access applies: .. code-block:: ini API_NAME: RULE_STATEMENT or MATCH_STATEMENT where ``RULE_STATEMENT`` can be another ``RULE_STATEMENT`` or a ``MATCH_STATEMENT``: .. code-block:: ini RULE_STATEMENT: RULE_STATEMENT or MATCH_STATEMENT ``MATCH_STATEMENT`` is a set of identifiers that must match between the token provided by the caller of the API and the parameters or target entities of the API in question. For example: .. code-block:: ini "secrets:post": "role:admin or role:creator" indicates that to create a new secret via a POST request, you must have either the admin or creator role in your token. .. warning:: The Key Manager service scopes the ownership of a secret at the project level. This means that many calls in the API will perform an additional check to ensure that the project_id of the token matches the project_id stored as the secret owner. Default Policy ~~~~~~~~~~~~~~ The policy engine in OpenStack is very flexible and allows for customized policies that make sense for your particular cloud. The Key Manager service comes with a sample ``policy.json`` file which can be used as the starting point for a customized policy. The sample policy defines 5 distinct roles: key-manager:service-admin The cloud administrator in charge of the Key Manager service. This user has access to all management APIs like the project-quotas. admin Project administrator. This user has full access to all resources owned by the project for which the admin role is scoped. creator Users with this role are allowed to create new resources and can only delete resources which were originally created (owned) by them. Users with this role cannot delete other users' resources managed within the same project. They are also allowed full access to existing secrets owned by the project in scope. observer Users with this role are allowed access to existing resources but are not allowed to upload new secrets or delete existing secrets. audit Users with this role are only allowed access to the resource metadata. So users with this role are unable to decrypt secrets. Access Control List API ----------------------- There are some limitations that result from scoping ownership of a secret at the project level. For example, there is no easy way for a user to upload a secret for which only they have access. There is also no easy way to grant a user access to only a single secret. To address these limitations the Key Manager service includes an Access Control List (ACL) API.
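For instance, a secret owner could grant a single user read access to one secret with a request along these lines (a hedged sketch of the v1 ACL call; the host, token, and IDs are placeholders):

.. code-block:: console

    $ curl -X PUT http://localhost:9311/v1/secrets/{secret_uuid}/acl \
        -H "X-Auth-Token: $TOKEN" \
        -H "Content-Type: application/json" \
        -d '{"read": {"users": ["{user_id}"], "project-access": false}}'

Setting ``project-access`` to ``false`` restricts the secret to the users listed in the ACL instead of the whole project.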
For full details see the `ACL API User Guide `__ barbican-6.0.0/playbooks/0000775000175100017510000000000013245511177015313 5ustar zuulzuul00000000000000barbican-6.0.0/playbooks/legacy/0000775000175100017510000000000013245511177016557 5ustar zuulzuul00000000000000barbican-6.0.0/playbooks/legacy/grenade-devstack-barbican/0000775000175100017510000000000013245511177023525 5ustar zuulzuul00000000000000barbican-6.0.0/playbooks/legacy/grenade-devstack-barbican/post.yaml0000666000175100017510000000063313245511001025364 0ustar zuulzuul00000000000000- hosts: primary tasks: - name: Copy files from {{ ansible_user_dir }}/workspace/ on node synchronize: src: '{{ ansible_user_dir }}/workspace/' dest: '{{ zuul.executor.log_root }}' mode: pull copy_links: true verify_host: true rsync_opts: - --include=/logs/** - --include=*/ - --exclude=* - --prune-empty-dirs barbican-6.0.0/playbooks/legacy/grenade-devstack-barbican/run.yaml0000666000175100017510000000436013245511001025204 0ustar zuulzuul00000000000000- hosts: all name: Autoconverted job legacy-grenade-dsvm-barbican from old job gate-grenade-dsvm-barbican-ubuntu-xenial tasks: - name: Ensure legacy workspace directory file: path: '{{ ansible_user_dir }}/workspace' state: directory - shell: cmd: | set -e set -x cat > clonemap.yaml << EOF clonemap: - name: openstack-infra/devstack-gate dest: devstack-gate EOF /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \ git://git.openstack.org \ openstack-infra/devstack-gate executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x export PYTHONUNBUFFERED=true export PROJECTS="openstack/barbican $PROJECTS" export PROJECTS="openstack-dev/grenade $PROJECTS" export PROJECTS="openstack/python-barbicanclient $PROJECTS" export PROJECTS="openstack/barbican-tempest-plugin $PROJECTS" export GRENADE_PLUGINRC="enable_grenade_plugin barbican https://git.openstack.org/openstack/barbican" export DEVSTACK_LOCAL_CONFIG+=$'\n'"export TEMPEST_PLUGINS='/opt/stack/new/barbican-tempest-plugin'" export DEVSTACK_GATE_TEMPEST=1 export DEVSTACK_GATE_GRENADE=pullup export DEVSTACK_GATE_TEMPEST_REGEX=barbican export BRANCH_OVERRIDE=default if [ "$BRANCH_OVERRIDE" != "default" ] ; then export OVERRIDE_ZUUL_BRANCH=$BRANCH_OVERRIDE fi # Add configuration values for enabling security features in local.conf function pre_test_hook { if [ -f /opt/stack/old/barbican-tempest-plugin/tools/pre_test_hook.sh ] ; then . 
/opt/stack/old/barbican-tempest-plugin/tools/pre_test_hook.sh fi } export -f pre_test_hook cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' barbican-6.0.0/playbooks/legacy/barbican-devstack-tempest-base/0000775000175100017510000000000013245511177024511 5ustar zuulzuul00000000000000barbican-6.0.0/playbooks/legacy/barbican-devstack-tempest-base/post.yaml0000666000175100017510000000063313245511001026350 0ustar zuulzuul00000000000000- hosts: primary tasks: - name: Copy files from {{ ansible_user_dir }}/workspace/ on node synchronize: src: '{{ ansible_user_dir }}/workspace/' dest: '{{ zuul.executor.log_root }}' mode: pull copy_links: true verify_host: true rsync_opts: - --include=/logs/** - --include=*/ - --exclude=* - --prune-empty-dirs barbican-6.0.0/playbooks/legacy/barbican-devstack-tempest-base/run.yaml0000666000175100017510000000550113245511001026166 0ustar zuulzuul00000000000000- hosts: all name: Barbican devstack tempest base tasks: - name: Ensure legacy workspace directory file: path: '{{ ansible_user_dir }}/workspace' state: directory - shell: cmd: | set -e set -x cat > clonemap.yaml << EOF clonemap: - name: openstack-infra/devstack-gate dest: devstack-gate EOF /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \ git://git.openstack.org \ openstack-infra/devstack-gate executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x export PYTHONUNBUFFERED=true export DEVSTACK_GATE_TEMPEST=1 export DEVSTACK_GATE_TEMPEST_REGEX=barbican export KEEP_LOCALRC=1 export PROJECTS="openstack/barbican $PROJECTS" export PROJECTS="openstack/python-barbicanclient $PROJECTS" export PROJECTS="openstack/barbican-tempest-plugin $PROJECTS" export DEVSTACK_LOCAL_CONFIG="enable_plugin barbican git://git.openstack.org/openstack/barbican" export DEVSTACK_LOCAL_CONFIG+=$'\n'"export TEMPEST_PLUGINS='/opt/stack/new/barbican-tempest-plugin'" export BRANCH_OVERRIDE=default if [ "$BRANCH_OVERRIDE" != "default" ] ; then export OVERRIDE_ZUUL_BRANCH=$BRANCH_OVERRIDE fi # Add configuration values for enabling security features in local.conf function pre_test_hook { if [ -f $BASE/new/barbican-tempest-plugin/tools/pre_test_hook.sh ] ; then . 
$BASE/new/barbican-tempest-plugin/tools/pre_test_hook.sh fi } export -f pre_test_hook if [ "{{ database }}" == "postgres" ] ; then export DEVSTACK_GATE_POSTGRES=1 elif [ "{{ castellan_from_git }}" == "1" ] ; then export DEVSTACK_PROJECT_FROM_GIT="castellan" elif [ "{{ cursive }}" == "1" ] ; then export DEVSTACK_PROJECT_FROM_GIT="cursive" elif [ "{{ python_version }}" == "py35" ] ; then export DEVSTACK_GATE_USE_PYTHON3=True export DEVSTACK_LOCAL_CONFIG+=$'\n'"disable_service s-account" export DEVSTACK_LOCAL_CONFIG+=$'\n'"disable_service s-container" export DEVSTACK_LOCAL_CONFIG+=$'\n'"disable_service s-object" export DEVSTACK_LOCAL_CONFIG+=$'\n'"disable_service s-proxy" fi cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' barbican-6.0.0/playbooks/legacy/barbican-devstack-functional-base/0000775000175100017510000000000013245511177025172 5ustar zuulzuul00000000000000barbican-6.0.0/playbooks/legacy/barbican-devstack-functional-base/post.yaml0000666000175100017510000000063313245511001027031 0ustar zuulzuul00000000000000- hosts: primary tasks: - name: Copy files from {{ ansible_user_dir }}/workspace/ on node synchronize: src: '{{ ansible_user_dir }}/workspace/' dest: '{{ zuul.executor.log_root }}' mode: pull copy_links: true verify_host: true rsync_opts: - --include=/logs/** - --include=*/ - --exclude=* - --prune-empty-dirs barbican-6.0.0/playbooks/legacy/barbican-devstack-functional-base/run.yaml0000666000175100017510000000440513245511001026651 0ustar zuulzuul00000000000000- hosts: all name: Barbican devstack functional base tasks: - name: Ensure legacy workspace directory file: path: '{{ ansible_user_dir }}/workspace' state: directory - shell: cmd: | set -e set -x cat > clonemap.yaml << EOF clonemap: - name: openstack-infra/devstack-gate dest: devstack-gate EOF /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \ git://git.openstack.org \ openstack-infra/devstack-gate executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x cat << 'EOF' >>"/tmp/dg-local.conf" [[local|localrc]] enable_plugin barbican git://git.openstack.org/openstack/barbican EOF executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x export PYTHONUNBUFFERED=true export OVERRIDE_ENABLED_SERVICES="{{ services }}" export PROJECTS="openstack/barbican $PROJECTS" export PROJECTS="openstack/python-barbicanclient $PROJECTS" export PROJECTS="openstack/barbican-tempest-plugin $PROJECTS" if [ "{{ python_version }}" == "py35" ] ; then export DEVSTACK_GATE_USE_PYTHON3=True else export DEVSTACK_GATE_USE_PYTHON3=False fi function gate_hook { $BASE/new/barbican/devstack/gate_hook.sh } export -f gate_hook function post_test_hook { cd /opt/stack/new/barbican/functionaltests ./post_test_hook.sh "{{plugin}}" } export -f post_test_hook if [ "{{ database }}" == "postgres" ] ; then export DEVSTACK_GATE_POSTGRES=1 fi cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' barbican-6.0.0/playbooks/legacy/barbican-devstack-base/0000775000175100017510000000000013245511177023032 5ustar 
zuulzuul00000000000000barbican-6.0.0/playbooks/legacy/barbican-devstack-base/post.yaml0000666000175100017510000000063313245511001024671 0ustar zuulzuul00000000000000- hosts: primary tasks: - name: Copy files from {{ ansible_user_dir }}/workspace/ on node synchronize: src: '{{ ansible_user_dir }}/workspace/' dest: '{{ zuul.executor.log_root }}' mode: pull copy_links: true verify_host: true rsync_opts: - --include=/logs/** - --include=*/ - --exclude=* - --prune-empty-dirs barbican-6.0.0/playbooks/legacy/barbican-devstack-base/run.yaml0000666000175100017510000000444513245511001024515 0ustar zuulzuul00000000000000- hosts: all name: Barbican devstack base tasks: - name: Ensure legacy workspace directory file: path: '{{ ansible_user_dir }}/workspace' state: directory - shell: cmd: | set -e set -x cat > clonemap.yaml << EOF clonemap: - name: openstack-infra/devstack-gate dest: devstack-gate EOF /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \ git://git.openstack.org \ openstack-infra/devstack-gate executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x export PYTHONUNBUFFERED=true export ENABLED_SERVICES="{{ services }}" export PROJECTS="openstack/barbican $PROJECTS" export PROJECTS="openstack/python-barbicanclient $PROJECTS" export PROJECTS="openstack/barbican-tempest-plugin $PROJECTS" export DEVSTACK_LOCAL_CONFIG="enable_plugin barbican git://git.openstack.org/openstack/barbican" if [ "{{ python_version }}" == "py35" ] ; then export DEVSTACK_GATE_USE_PYTHON3=True export DEVSTACK_LOCAL_CONFIG+=$'\n'"disable_service s-account" export DEVSTACK_LOCAL_CONFIG+=$'\n'"disable_service s-container" export DEVSTACK_LOCAL_CONFIG+=$'\n'"disable_service s-object" export DEVSTACK_LOCAL_CONFIG+=$'\n'"disable_service s-proxy" else export DEVSTACK_GATE_USE_PYTHON3=False fi function gate_hook { $BASE/new/barbican/devstack/gate_hook.sh } export -f gate_hook function post_test_hook { cd /opt/stack/new/barbican/functionaltests ./post_test_hook.sh "{{ plugin }}" } export -f post_test_hook if [ "{{ database }}" == "postgres" ] ; then export DEVSTACK_GATE_POSTGRES=1 fi cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' barbican-6.0.0/devstack/0000775000175100017510000000000013245511177015114 5ustar zuulzuul00000000000000barbican-6.0.0/devstack/gate_hook.sh0000777000175100017510000000116513245511001017402 0ustar zuulzuul00000000000000#!/bin/bash # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. set -ex export FORCE=yes $BASE/new/devstack-gate/devstack-vm-gate.sh barbican-6.0.0/devstack/README.md0000666000175100017510000000125513245511001016362 0ustar zuulzuul00000000000000This directory contains the Barbican DevStack plugin. 
To configure Barbican with DevStack, you will need to enable this plugin and the Barbican service by adding one line to the [[local|localrc]] section of your local.conf file. To enable the plugin, add a line of the form: enable_plugin barbican <GITURL> [GITREF] where <GITURL> is the URL of a Barbican repository and [GITREF] is an optional git ref (branch/ref/tag). The default is master. For example: enable_plugin barbican https://git.openstack.org/openstack/barbican stable/liberty For more information, see the "Externally Hosted Plugins" section of https://docs.openstack.org/devstack/latest/plugins.html barbican-6.0.0/devstack/local.conf.example0000666000175100017510000000100013245511001020472 0ustar zuulzuul00000000000000[[local|localrc]] disable_all_services enable_plugin barbican https://git.openstack.org/openstack/barbican # To use a specific branch: # enable_plugin barbican https://git.openstack.org/openstack/barbican stable/<branch> enable_service rabbit mysql key # This is to keep the token small for testing KEYSTONE_TOKEN_FORMAT=UUID # Modify passwords as needed DATABASE_PASSWORD=secretdatabase RABBIT_PASSWORD=secretrabbit ADMIN_PASSWORD=secretadmin SERVICE_PASSWORD=secretservice SERVICE_TOKEN=111222333444 barbican-6.0.0/devstack/settings0000666000175100017510000000274113245511001016667 0ustar zuulzuul00000000000000# Defaults # -------- # Set up default directories BARBICAN_DIR=$DEST/barbican BARBICANCLIENT_DIR=$DEST/python-barbicanclient BARBICAN_CONF_DIR=${BARBICAN_CONF_DIR:-/etc/barbican} BARBICAN_CONF=$BARBICAN_CONF_DIR/barbican.conf BARBICAN_PASTE_CONF=$BARBICAN_CONF_DIR/barbican-api-paste.ini BARBICAN_API_LOG_DIR=$DEST/logs BARBICAN_AUTH_CACHE_DIR=${BARBICAN_AUTH_CACHE_DIR:-/var/cache/barbican} PYKMIP_CONF_DIR=${PYKMIP_CONF_DIR:-/etc/pykmip} PYKMIP_CONF=${PYKMIP_CONF_DIR}/server.conf PYKMIP_LOG_DIR=${PYKMIP_LOG_DIR:-/var/log/pykmip} # Support potential entry-points console scripts BARBICAN_BIN_DIR=$(get_python_exec_prefix) # WSGI variables BARBICAN_WSGI=$BARBICAN_BIN_DIR/barbican-wsgi-api BARBICAN_UWSGI_CONF=$BARBICAN_CONF_DIR/barbican-uwsgi.ini # Set Barbican repository BARBICAN_REPO=${BARBICAN_REPO:-${GIT_BASE}/openstack/barbican.git} BARBICAN_BRANCH=${BARBICAN_BRANCH:-master} # Set client library repository BARBICANCLIENT_REPO=${BARBICANCLIENT_REPO:-${GIT_BASE}/openstack/python-barbicanclient.git} BARBICANCLIENT_BRANCH=${BARBICANCLIENT_BRANCH:-master} # Set host href BARBICAN_HOST_HREF=${BARBICAN_HOST_HREF:-http://${SERVICE_HOST}/key-manager} # Tell Tempest this project is present TEMPEST_SERVICES+=,barbican GITREPO["barbican-tempest-plugin"]=${BARBICANTEMPEST_REPO:-${GIT_BASE}/openstack/barbican-tempest-plugin.git} GITBRANCH["barbican-tempest-plugin"]=${BARBICANTEMPEST_BRANCH:-master} GITDIR["barbican-tempest-plugin"]=$DEST/barbican-tempest-plugin enable_service barbican barbican-6.0.0/devstack/barbican-vagrant/0000775000175100017510000000000013245511177020315 5ustar zuulzuul00000000000000barbican-6.0.0/devstack/barbican-vagrant/install_devstack.sh0000666000175100017510000000067113245511001024173 0ustar zuulzuul00000000000000#!/bin/bash export DEBIAN_FRONTEND=noninteractive sudo apt-get update sudo apt-get install -y python-pip python-dev libffi-dev libssl-dev git git clone https://github.com/openstack-dev/devstack.git git clone https://github.com/openstack/barbican.git cp barbican/devstack/local.conf.example devstack/local.conf sudo cp -R devstack/ /opt/stack/ sudo chown -R vagrant:vagrant /opt/stack/ echo "export SERVICE_HOST=\"localhost\"" >>
.bashrcbarbican-6.0.0/devstack/barbican-vagrant/Vagrantfile0000666000175100017510000000122013245511001022461 0ustar zuulzuul00000000000000# -*- mode: ruby -*- # vi: set ft=ruby : VAGRANTFILE_API_VERSION = "2" Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| config.vm.box = "ubuntu/trusty64" # Barbican Ports config.vm.network "forwarded_port", guest: 9311, host: 9311 # Keystone Ports config.vm.network "forwarded_port", guest: 35357, host: 35357 config.vm.network "forwarded_port", guest: 5000, host: 5000 config.vm.provision "shell", path: "install_devstack.sh" # Create Synced Folder config.vm.synced_folder "./devstack", "/opt/stack", create: true config.vm.provider "virtualbox" do |v| v.name = "Devstack" v.memory = 2048 v.cpus = 2 end end barbican-6.0.0/devstack/lib/0000775000175100017510000000000013245511177015662 5ustar zuulzuul00000000000000barbican-6.0.0/devstack/lib/barbican0000666000175100017510000004403513245511001017340 0ustar zuulzuul00000000000000#!/usr/bin/env bash # Install and start **Barbican** service # To enable a minimal set of Barbican features, add the following to localrc: # enable_service barbican-svc barbican-retry # # Dependencies: # - functions # - OS_AUTH_URL for auth in api # - DEST set to the destination directory # - SERVICE_PASSWORD, SERVICE_PROJECT_NAME for auth in api # - STACK_USER service user # stack.sh # --------- # install_barbican # configure_barbican # init_barbican # start_barbican # stop_barbican # cleanup_barbican # Save trace setting XTRACE=$(set +o | grep xtrace) set +o xtrace # PyKMIP configuration PYKMIP_SERVER_KEY=${PYKMIP_SERVER_KEY:-$INT_CA_DIR/private/pykmip-server.key} PYKMIP_SERVER_CERT=${PYKMIP_SERVER_CERT:-$INT_CA_DIR/pykmip-server.crt} PYKMIP_CLIENT_KEY=${PYKMIP_CLIENT_KEY:-$INT_CA_DIR/private/pykmip-client.key} PYKMIP_CLIENT_CERT=${PYKMIP_CLIENT_CERT:-$INT_CA_DIR/pykmip-client.crt} PYKMIP_CA_PATH=${PYKMIP_CA_PATH:-$INT_CA_DIR/ca-chain.pem} # Functions # --------- # TODO(john-wood-w) These 'magic' functions are called by devstack to enable # a given service (so the name between 'is_' and '_enabled'). Currently the # Zuul infra gate configuration (at https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/barbican.yaml) # only enables the 'barbican' service. So the two functions below, for the two # services we wish to run, have to key off of that lone 'barbican' selection. # Once the Zuul config is updated to add these two services properly, then # these functions should be replaced by the single method below. # !!!! Special thanks to rm_work for figuring this out !!!! function is_barbican-retry_enabled { [[ ,${ENABLED_SERVICES} =~ ,"barbican" ]] && return 0 } function is_barbican-svc_enabled { [[ ,${ENABLED_SERVICES} =~ ,"barbican" ]] && return 0 } # TODO(john-wood-w) Replace the above two functions with the one below once # Zuul is updated per above.
## Test if any Barbican services are enabled ## is_barbican_enabled #function is_barbican_enabled { # [[ ,${ENABLED_SERVICES} =~ ,"barbican-" ]] && return 0 # return 1 #} # cleanup_barbican - Remove residual data files, anything left over from previous # runs that a clean run would need to clean up function cleanup_barbican { : } # configure_barbicanclient - Set config files, create data dirs, etc function configure_barbicanclient { setup_develop $BARBICANCLIENT_DIR } # configure_dogtag_plugin - Change config to use dogtag plugin function configure_dogtag_plugin { sudo openssl pkcs12 -in /root/.dogtag/pki-tomcat/ca_admin_cert.p12 -passin pass:PASSWORD -out $BARBICAN_CONF_DIR/kra_admin_cert.pem -nodes sudo chown $USER $BARBICAN_CONF_DIR/kra_admin_cert.pem iniset $BARBICAN_CONF dogtag_plugin dogtag_port 8373 iniset $BARBICAN_CONF dogtag_plugin pem_path "$BARBICAN_CONF_DIR/kra_admin_cert.pem" iniset $BARBICAN_CONF dogtag_plugin dogtag_host localhost iniset $BARBICAN_CONF dogtag_plugin nss_db_path '/etc/barbican/alias' iniset $BARBICAN_CONF dogtag_plugin nss_db_path_ca '/etc/barbican/alias-ca' iniset $BARBICAN_CONF dogtag_plugin nss_password 'password123' iniset $BARBICAN_CONF dogtag_plugin simple_cmc_profile 'caOtherCert' iniset $BARBICAN_CONF dogtag_plugin ca_expiration_time 1 iniset $BARBICAN_CONF dogtag_plugin plugin_working_dir '/etc/barbican/dogtag' iniset $BARBICAN_CONF secretstore enabled_secretstore_plugins dogtag_crypto iniset $BARBICAN_CONF certificate enabled_certificate_plugins dogtag } # configure_barbican - Set config files, create data dirs, etc function configure_barbican { setup_develop $BARBICAN_DIR [ ! -d $BARBICAN_CONF_DIR ] && sudo mkdir -m 755 -p $BARBICAN_CONF_DIR sudo chown $USER $BARBICAN_CONF_DIR [ ! -d $BARBICAN_API_LOG_DIR ] && sudo mkdir -m 755 -p $BARBICAN_API_LOG_DIR sudo chown $USER $BARBICAN_API_LOG_DIR [ ! 
-d $BARBICAN_CONF_DIR ] && sudo mkdir -m 755 -p $BARBICAN_CONF_DIR sudo chown $USER $BARBICAN_CONF_DIR # Copy the barbican config files to the config dir cp $BARBICAN_DIR/etc/barbican/barbican-api-paste.ini $BARBICAN_CONF_DIR cp -R $BARBICAN_DIR/etc/barbican/vassals $BARBICAN_CONF_DIR # Copy functional test config cp $BARBICAN_DIR/etc/barbican/barbican-functional.conf $BARBICAN_CONF_DIR # Enable DEBUG iniset $BARBICAN_CONF DEFAULT debug True # Set the host_href iniset $BARBICAN_CONF DEFAULT host_href "$BARBICAN_HOST_HREF" # Set the log file location iniset $BARBICAN_CONF DEFAULT log_file "$BARBICAN_API_LOG_DIR/barbican.log" # Enable logging to stderr to have log also in the screen window iniset $BARBICAN_CONF DEFAULT use_stderr True # Format logging if [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ]; then setup_colorized_logging $BARBICAN_CONF DEFAULT project user fi # Set the database connection url iniset $BARBICAN_CONF DEFAULT sql_connection `database_connection_url barbican` # Disable auto-migration when deploying Barbican iniset $BARBICAN_CONF DEFAULT db_auto_create False # Increase default request buffer size, keystone auth PKI tokens can be very long iniset $BARBICAN_CONF_DIR/vassals/barbican-api.ini uwsgi buffer-size 65535 # Rabbit settings if is_service_enabled rabbit; then iniset $BARBICAN_CONF 'secrets' broker rabbit://guest:$RABBIT_PASSWORD@$RABBIT_HOST else echo_summary "Barbican requires that the RabbitMQ service is enabled" fi write_uwsgi_config "$BARBICAN_UWSGI_CONF" "$BARBICAN_WSGI" "/key-manager" ## Set up keystone # Turn on the middleware iniset $BARBICAN_PASTE_CONF 'pipeline:barbican_api' pipeline 'barbican-api-keystone' # Set the keystone parameters configure_auth_token_middleware $BARBICAN_CONF barbican $BARBICAN_AUTH_CACHE_DIR } # init_barbican - Initialize etc. function init_barbican { # Create cache dir sudo mkdir -p $BARBICAN_AUTH_CACHE_DIR sudo chown $STACK_USER $BARBICAN_AUTH_CACHE_DIR rm -f $BARBICAN_AUTH_CACHE_DIR/* recreate_database barbican utf8 $BARBICAN_BIN_DIR/barbican-manage db upgrade -v head } # install_barbican - Collect source and prepare function install_barbican { # Install package requirements if is_fedora; then install_package sqlite-devel openldap-devel fi # TODO(ravips): We need this until barbican gets into devstack setup_develop $BARBICAN_DIR pip_install 'uwsgi' } # install_barbicanclient - Collect source and prepare function install_barbicanclient { git_clone $BARBICANCLIENT_REPO $BARBICANCLIENT_DIR $BARBICANCLIENT_BRANCH setup_develop $BARBICANCLIENT_DIR } # start_barbican - Start running processes, including screen function start_barbican { # Start the Barbican service up. run_process barbican-svc "$BARBICAN_BIN_DIR/uwsgi --ini $BARBICAN_UWSGI_CONF" # Pause while the barbican-svc populates the database, otherwise the retry # service below might try to do this at the same time, leading to race # conditions. sleep 10 # Start the retry scheduler server up. 
run_process barbican-retry "$BARBICAN_BIN_DIR/barbican-retry --config-file=$BARBICAN_CONF_DIR/barbican.conf" } # stop_barbican - Stop running processes function stop_barbican { # This will eventually be refactored to work like # Solum and Manila (script to kick off a wsgiref server) # For now, this will stop uWSGI rather than have it hang killall -9 uwsgi # This cleans up the PID file, but uses pkill so Barbican # uWSGI emperor process doesn't actually stop stop_process barbican-svc stop_process barbican-retry } function get_id { echo `"$@" | awk '/ id / { print $4 }'` } function create_barbican_accounts { # # Setup Default Admin User # SERVICE_PROJECT=$(openstack project list | awk "/ $SERVICE_PROJECT_NAME / { print \$2 }") ADMIN_ROLE=$(openstack role list | awk "/ admin / { print \$2 }") BARBICAN_USER=$(openstack user create \ --password "$SERVICE_PASSWORD" \ --project $SERVICE_PROJECT \ --email "barbican@example.com" \ barbican \ | grep " id " | get_field 2) openstack role add --project $SERVICE_PROJECT \ --user $BARBICAN_USER \ $ADMIN_ROLE # # Setup Default service-admin User # SERVICE_ADMIN=$(get_id openstack user create \ --password "$SERVICE_PASSWORD" \ --email "service-admin@example.com" \ "service-admin") SERVICE_ADMIN_ROLE=$(get_id openstack role create \ "key-manager:service-admin") openstack role add \ --user "$SERVICE_ADMIN" \ --project "$SERVICE_PROJECT" \ "$SERVICE_ADMIN_ROLE" # # Setup RBAC User Projects and Roles # PASSWORD="barbican" PROJECT_A_ID=$(get_id openstack project create "project_a") PROJECT_B_ID=$(get_id openstack project create "project_b") ROLE_ADMIN_ID=$(get_id openstack role show admin) ROLE_CREATOR_ID=$(get_id openstack role create "creator") ROLE_OBSERVER_ID=$(get_id openstack role create "observer") ROLE_AUDIT_ID=$(get_id openstack role create "audit") # # Setup RBAC Admin of Project A # USER_ID=$(get_id openstack user create \ --password "$PASSWORD" \ --email "admin_a@example.net" \ "project_a_admin") openstack role add \ --user "$USER_ID" \ --project "$PROJECT_A_ID" \ "$ROLE_ADMIN_ID" # # Setup RBAC Creator of Project A # USER_ID=$(get_id openstack user create \ --password "$PASSWORD" \ --email "creator_a@example.net" \ "project_a_creator") openstack role add \ --user "$USER_ID" \ --project "$PROJECT_A_ID" \ "$ROLE_CREATOR_ID" # Adding second creator user in project_a USER_ID=$(openstack user create \ --password "$PASSWORD" \ --email "creator2_a@example.net" \ "project_a_creator_2" -f value -c id) openstack role add \ --user "$USER_ID" \ --project "$PROJECT_A_ID" \ "$ROLE_CREATOR_ID" # # Setup RBAC Observer of Project A # USER_ID=$(get_id openstack user create \ --password "$PASSWORD" \ --email "observer_a@example.net" \ "project_a_observer") openstack role add \ --user "$USER_ID" \ --project "$PROJECT_A_ID" \ "$ROLE_OBSERVER_ID" # # Setup RBAC Auditor of Project A # USER_ID=$(get_id openstack user create \ --password "$PASSWORD" \ --email "auditor_a@example.net" \ "project_a_auditor") openstack role add \ --user "$USER_ID" \ --project "$PROJECT_A_ID" \ "$ROLE_AUDIT_ID" # # Setup RBAC Admin of Project B # USER_ID=$(get_id openstack user create \ --password "$PASSWORD" \ --email "admin_b@example.net" \ "project_b_admin") openstack role add \ --user "$USER_ID" \ --project "$PROJECT_B_ID" \ "$ROLE_ADMIN_ID" # # Setup RBAC Creator of Project B # USER_ID=$(get_id openstack user create \ --password "$PASSWORD" \ --email "creator_b@example.net" \ "project_b_creator") openstack role add \ --user "$USER_ID" \ --project "$PROJECT_B_ID" \ "$ROLE_CREATOR_ID" # # 
Setup RBAC Observer of Project B # USER_ID=$(get_id openstack user create \ --password "$PASSWORD" \ --email "observer_b@example.net" \ "project_b_observer") openstack role add \ --user "$USER_ID" \ --project "$PROJECT_B_ID" \ "$ROLE_OBSERVER_ID" # # Setup RBAC auditor of Project B # USER_ID=$(get_id openstack user create \ --password "$PASSWORD" \ --email "auditor_b@example.net" \ "project_b_auditor") openstack role add \ --user "$USER_ID" \ --project "$PROJECT_B_ID" \ "$ROLE_AUDIT_ID" # # Setup Barbican Endpoint # BARBICAN_SERVICE=$(openstack service create \ --name barbican \ --description "Barbican Service" \ 'key-manager' \ | grep " id " | get_field 2) openstack endpoint create \ --os-identity-api-version 3 \ --region RegionOne \ $BARBICAN_SERVICE \ public "http://$SERVICE_HOST/key-manager" openstack endpoint create \ --os-identity-api-version 3 \ --region RegionOne \ $BARBICAN_SERVICE \ internal "http://$SERVICE_HOST/key-manager" } # PyKMIP functions # ---------------- # install_pykmip - install the PyKMIP python module # create keys and certificate for server function install_pykmip { pip_install 'pykmip' if is_service_enabled pykmip-server; then [ ! -d ${PYKMIP_CONF_DIR} ] && sudo mkdir -p ${PYKMIP_CONF_DIR} sudo chown ${USER} ${PYKMIP_CONF_DIR} [ ! -d ${PYKMIP_LOG_DIR} ] && sudo mkdir -p ${PYKMIP_LOG_DIR} sudo chown ${USER} ${PYKMIP_LOG_DIR} init_CA if [ ! -e ${PYKMIP_SERVER_KEY} ]; then make_cert ${INT_CA_DIR} 'pykmip-server' 'pykmip-server' chmod 400 ${PYKMIP_SERVER_KEY} fi if [ ! -e ${PYKMIP_CLIENT_KEY} ]; then make_cert ${INT_CA_DIR} 'pykmip-client' 'pykmip-client' chmod 400 ${PYKMIP_CLIENT_KEY} fi if [ ! -e ${PYKMIP_CONF} ]; then cat > ${PYKMIP_CONF} < .tmp.setup.inf < .tmp.ca.cfg < .tmp.kra.cfg </plugins/org.python.pydev_2.8.2.2013090511/pysrc" # Note: for Pycharm IDE users # Follow the instruction in link below # https://github.com/cloudkeep/barbican/wiki/Developer-Guide-for-Contributors#debugging-using-pycharm # Following are two commands to start barbican in debug mode # (1) ./barbican.sh debug # (2) ./barbican.sh debug --pydev-debug-host localhost --pydev-debug-port 5678 if [ -z $3 ] ; then debug_host=localhost else debug_host=$3 fi if [ -z $5 ] ; then debug_port=5678 else debug_port=$5 fi echo "Starting barbican in debug mode ..." --pydev-debug-host $debug_host --pydev-debug-port $debug_port PYDEV_DEBUG_PARAM="--env PYDEV_DEBUG_HOST=$debug_host --env PYDEV_DEBUG_PORT=$debug_port" uwsgi --master --emperor $CONFIG_DIR/vassals -H $VENV_DIR $PYDEV_DEBUG_PARAM } start_barbican() { # Start barbican server up. # Note: Add ' --stats :9314' to run a stats server on port 9314 echo "Starting barbican..." uwsgi --master --emperor $CONFIG_DIR/vassals -H $VENV_DIR } stop_barbican() { echo "Stopping barbican..." killall -KILL uwsgi } install_barbican() { # Copy conf file to home directory so oslo.config can find it cp $LOCAL_CONFIG ~ # Copy the other config files to the /etc location if [ ! -d $CONFIG_DIR ]; then sudo mkdir -p $CONFIG_DIR sudo chown $USER $CONFIG_DIR fi cp -rf $LOCAL_CONFIG_DIR/* $CONFIG_DIR/ # Create a SQLite db location. if [ ! -d $DB_DIR ]; then sudo mkdir -p $DB_DIR sudo chown $USER $DB_DIR fi # Install Python dependencies pip install -r requirements.txt pip install -r test-requirements.txt # Install uWSGI pip install uwsgi # Install source code into the Python path as if packaged. pip install -e . 
# Run unit tests python setup.py testr start_barbican } case "$1" in install) install_barbican ;; debug) debug_barbican $* ;; start) start_barbican ;; stop) stop_barbican ;; restart) stop_barbican sleep 5 start_barbican ;; *) echo "Usage: barbican.sh {install|start|stop|debug <debug_params>|restart}" echo "where debug_params are: --pydev-debug-host <host> --pydev-debug-port <port>, <host> defaults to 'localhost' and <port> defaults to '5678'" exit 1 esac barbican-6.0.0/bin/keystone_data.sh0000777000175100017510000001432013245511001017245 0ustar zuulzuul00000000000000#!/bin/bash #------------------------------------ # the devstack way # cd # source openrc nova service # This sets up an admin user and the service project and passport in environment #------------------------------------ # alternately export values for export OS_AUTH_URL="http://localhost:5000/v2.0" # your secret password export OS_PASSWORD="password" export OS_PROJECT_NAME="service" export OS_USERNAME="nova" # -------------------------------- # alternately service_token and endpoint #export OS_TOKEN=orange #export OS_URL=http://localhost:35357/v2.0 # ======================================== echo " OS_URL="$OS_URL echo " OS_TOKEN="$OS_TOKEN echo " OS_PROJECT_NAME="$OS_PROJECT_NAME echo " OS_USERNAME="$OS_USERNAME echo " OS_PASSWORD="$OS_PASSWORD echo " OS_AUTH_URL="$OS_AUTH_URL #test with openstack project list #------------------------------------------------------------ # Adding the Key Manager Service: barbican #------------------------------------------------------------ ENABLED_SERVICES="barbican" SERVICE_PASSWORD="orange" SERVICE_HOST="localhost" SERVICE_PROJECT_NAME="service" KEYSTONE_CATALOG_BACKEND='sql' #============================ # Lookups SERVICE_PROJECT=$(openstack project show "$SERVICE_PROJECT_NAME" -f value -c id) ADMIN_ROLE=$(openstack role show admin -f value -c id) # Ports to avoid: 3333, 5000, 8773, 8774, 8776, 9292, 9696, 35357 # Barbican if [[ "$ENABLED_SERVICES" =~ "barbican" ]]; then # # Setup Default Admin User # BARBICAN_USER=$(openstack user create \ --password "$SERVICE_PASSWORD" \ --project $SERVICE_PROJECT \ --email "barbican@example.com" \ barbican -f value -c id) openstack role add --project $SERVICE_PROJECT \ --user $BARBICAN_USER \ $ADMIN_ROLE # # Setup Default service-admin User # SERVICE_ADMIN=$(openstack user create \ --password "$SERVICE_PASSWORD" \ --email "service-admin@example.com" \ "service-admin" -f value -c id) SERVICE_ADMIN_ROLE=$(openstack role create \ "key-manager:service-admin" -f value -c id) openstack role add \ --user "$SERVICE_ADMIN" \ --project "$SERVICE_PROJECT" \ "$SERVICE_ADMIN_ROLE" # # Setup RBAC User Projects and Roles # PASSWORD="barbican" PROJECT_A_ID=$(openstack project create "project_a" -f value -c id) PROJECT_B_ID=$(openstack project create "project_b" -f value -c id) ROLE_ADMIN_ID=$(openstack role show admin -f value -c id) ROLE_CREATOR_ID=$(openstack role create "creator" -f value -c id) ROLE_OBSERVER_ID=$(openstack role create "observer" -f value -c id) ROLE_AUDIT_ID=$(openstack role create "audit" -f value -c id) # # Setup RBAC Admin of Project A # USER_ID=$(openstack user create \ --password "$PASSWORD" \ --email "admin_a@example.net" \ "project_a_admin" -f value -c id) openstack role add \ --user "$USER_ID" \ --project "$PROJECT_A_ID" \ "$ROLE_ADMIN_ID" # # Setup RBAC Creator of Project A # USER_ID=$(openstack user create \ --password "$PASSWORD" \ --email "creator_a@example.net" \ "project_a_creator" -f value -c id) openstack role add \ --user "$USER_ID" \ --project "$PROJECT_A_ID" \
"$ROLE_CREATOR_ID" # Adding second creator user in project_a USER_ID=$(openstack user create \ --password "$PASSWORD" \ --email "creator2_a@example.net" \ "project_a_creator_2" -f value -c id) openstack role add \ --user "$USER_ID" \ --project "$PROJECT_A_ID" \ "$ROLE_CREATOR_ID" # # Setup RBAC Observer of Project A # USER_ID=$(openstack user create \ --password "$PASSWORD" \ --email "observer_a@example.net" \ "project_a_observer" -f value -c id) openstack role add \ --user "$USER_ID" \ --project "$PROJECT_A_ID" \ "$ROLE_OBSERVER_ID" # # Setup RBAC Auditor of Project A # USER_ID=$(openstack user create \ --password "$PASSWORD" \ --email "auditor_a@example.net" \ "project_a_auditor" -f value -c id) openstack role add \ --user "$USER_ID" \ --project "$PROJECT_A_ID" \ "$ROLE_AUDIT_ID" # # Setup RBAC Admin of Project B # USER_ID=$(openstack user create \ --password "$PASSWORD" \ --email "admin_b@example.net" \ "project_b_admin" -f value -c id) openstack role add \ --user "$USER_ID" \ --project "$PROJECT_B_ID" \ "$ROLE_ADMIN_ID" # # Setup RBAC Creator of Project B # USER_ID=$(openstack user create \ --password "$PASSWORD" \ --email "creator_b@example.net" \ "project_b_creator" -f value -c id) openstack role add \ --user "$USER_ID" \ --project "$PROJECT_B_ID" \ "$ROLE_CREATOR_ID" # # Setup RBAC Observer of Project B # USER_ID=$(openstack user create \ --password "$PASSWORD" \ --email "observer_b@example.net" \ "project_b_observer" -f value -c id) openstack role add \ --user "$USER_ID" \ --project "$PROJECT_B_ID" \ "$ROLE_OBSERVER_ID" # # Setup RBAC auditor of Project B # USER_ID=$(openstack user create \ --password "$PASSWORD" \ --email "auditor_b@example.net" \ "project_b_auditor" -f value -c id) openstack role add \ --user "$USER_ID" \ --project "$PROJECT_B_ID" \ "$ROLE_AUDIT_ID" # # Setup Barbican Endpoint # if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then BARBICAN_SERVICE=$(openstack service create \ --name barbican \ --description "Barbican Service" \ 'key-manager' -f value -c id) openstack endpoint create \ $BARBICAN_SERVICE \ --region RegionOne \ internal "http://$SERVICE_HOST:9311" openstack endpoint create \ $BARBICAN_SERVICE \ --region RegionOne \ public "http://$SERVICE_HOST:9311" fi fi barbican-6.0.0/bin/versionbuild.py0000777000175100017510000000566413245511001017141 0ustar zuulzuul00000000000000#!/usr/bin/env python # Copyright (c) 2013-2014 Rackspace, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """ Version build stamping script. This module generates and inserts a patch component of the semantic version stamp for Barbican, intended to ensure that a strictly monotonically increasing version is produced for consecutive development releases. Some repositories such as yum use this increasing semantic version to select the latest package for installations. This process may not be required if a bug in the 'pbr' library is fixed: https://bugs.launchpad.net/pbr/+bug/1206730 """ import os import re from datetime import datetime from time import mktime # Determine version of this application. 
SETUP_FILE = 'setup.cfg' VERSIONFILE = os.path.join(SETUP_FILE) current_dir = os.getcwd() if current_dir.endswith('bin'): VERSIONFILE = os.path.join('..', SETUP_FILE) def get_patch(): """Return a strictly monotonically increasing version patch. This method is providing the 'patch' component of the semantic version stamp for Barbican. It currently returns an epoch in seconds, but could use a build id from the build system. """ dt = datetime.now() return int(mktime(dt.timetuple())) def update_versionfile(patch): """Update the version information in setup.cfg per the provided patch. PBR will generate a version stamp based on the version attribute in the setup.cfg file, appending information such as git SHA code to it. To make this generated version friendly to packaging systems such as YUM, this function appends the provided patch to the base version. This function assumes the base version in setup.cfg is of the form 'xx.yy' such as '2014.2'. It will replace a third element found after this base with the provided patch. """ version_regex = re.compile(r'(^\s*version\s*=\s*\w*\.\w*)(.*)') temp_name = VERSIONFILE + '~' with open(VERSIONFILE, 'r') as file_old: with open(temp_name, 'w') as file_new: for line in file_old: match = version_regex.match(line) if match: file_new.write(''.join( [match.group(1).strip(), '.', str(patch), '\n'])) else: file_new.write(line) # Replace the original setup.cfg with the modified one. os.rename(temp_name, VERSIONFILE) if __name__ == '__main__': patch = get_patch() update_versionfile(patch) barbican-6.0.0/bin/demo_requests.py0000777000175100017510000001615113245511001017304 0ustar zuulzuul00000000000000#!/usr/bin/env python # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """ Demonstrates the various Barbican API calls, against an unauthenticated local Barbican server. This script is intended to be a lightweight way to demonstrate and 'smoke test' the Barbican API via its REST API, with no other dependencies required, including the Barbican Python client. Note that this script is not intended to replace DevStack or Tempest style testing. """ import json import logging import requests import sys LOG = logging.getLogger(__name__) LOG.setLevel(logging.DEBUG) LOG.addHandler(logging.StreamHandler(sys.stdout)) # Project ID: proj = '12345678' # Endpoint: end_point = 'http://localhost:9311' version = 'v1' # Basic header info: hdrs = {'X-Project-Id': proj, 'content-type': 'application/json'} # Consumer data.
payload_consumer = { 'name': 'foo-service', 'URL': 'https://www.fooservice.com/widgets/1234' } def demo_version(): """Get version""" v = requests.get(end_point, headers=hdrs) LOG.info('Version: {0}\n'.format(v.text)) def demo_store_secret_one_step_text(suffix=None, suppress=False): """Store secret (1-step):""" ep_1step = '/'.join([end_point, version, 'secrets']) secret = 'my-secret-here' if suffix: secret = '-'.join([secret, suffix]) # POST metadata: payload = { 'payload': secret, 'payload_content_type': 'text/plain' } pr = requests.post(ep_1step, data=json.dumps(payload), headers=hdrs) pr_j = pr.json() secret_ref = pr.json().get('secret_ref') # GET secret: hdrs_get = dict(hdrs) hdrs_get.update({ 'accept': 'text/plain'}) gr = requests.get(secret_ref, headers=hdrs_get) if not suppress: LOG.info('Get secret 1-step (text): {0}\n'.format(gr.content)) return secret_ref def demo_store_secret_two_step_binary(): """Store secret (2-step):""" secret = 'bXktc2VjcmV0LWhlcmU=' # base64 of 'my secret' ep_2step = '/'.join([end_point, version, 'secrets']) # POST metadata: payload = {} pr = requests.post(ep_2step, data=json.dumps(payload), headers=hdrs) pr_j = pr.json() secret_ref = pr_j.get('secret_ref') assert secret_ref # PUT data to store: hdrs_put = dict(hdrs) hdrs_put.update({ 'content-type': 'application/octet-stream', 'content-encoding': 'base64'} ) requests.put(secret_ref, data=secret, headers=hdrs_put) # GET secret: hdrs_get = dict(hdrs) hdrs_get.update({ 'accept': 'application/octet-stream'}) gr = requests.get(secret_ref, headers=hdrs_get) LOG.info('Get secret 2-step (binary): {0}\n'.format(gr.content)) return secret_ref def demo_retrieve_secret_list(): ep_list = '/'.join([end_point, version, 'secrets']) hdrs_get = dict(hdrs) gr = requests.get(ep_list, headers=hdrs_get) gr_j = gr.json() LOG.info('Get secret list:') for secret_info in gr_j.get('secrets'): LOG.info(' {0}'.format(secret_info.get('secret_ref'))) LOG.info('\n') def demo_store_container_rsa(suffix=None): """Store secret (2-step):""" ep_cont = '/'.join([end_point, version, 'containers']) secret_prk = demo_store_secret_one_step_text(suffix=suffix, suppress=True) secret_puk = demo_store_secret_one_step_text(suffix=suffix, suppress=True) secret_pp = demo_store_secret_one_step_text(suffix=suffix, suppress=True) # POST metadata: payload = { "name": "container name", "type": "rsa", "secret_refs": [{ "name": "private_key", "secret_ref": secret_prk }, { "name": "public_key", "secret_ref": secret_puk }, { "name": "private_key_passphrase", "secret_ref": secret_pp }] } pr = requests.post(ep_cont, data=json.dumps(payload), headers=hdrs) pr_j = pr.json() container_ref = pr.json().get('container_ref') # GET container: hdrs_get = dict(hdrs) gr = requests.get(container_ref, headers=hdrs_get) LOG.info('Get RSA container: {0}\n'.format(gr.content)) return container_ref def demo_retrieve_container_list(): ep_list = '/'.join([end_point, version, 'containers']) hdrs_get = dict(hdrs) gr = requests.get(ep_list, headers=hdrs_get) gr_j = gr.json() LOG.info('Get container list:') for secret_info in gr_j.get('containers'): LOG.info(' {0}'.format(secret_info.get('container_ref'))) LOG.info('\n') def demo_delete_secret(secret_ref): """Delete secret by its HATEOAS reference""" ep_delete = secret_ref # DELETE secret: dr = requests.delete(ep_delete, headers=hdrs) gr = requests.get(secret_ref, headers=hdrs) assert(404 == gr.status_code) LOG.info('...Deleted Secret: {0}\n'.format(secret_ref)) def demo_delete_container(container_ref): """Delete container by its HATEOAS 
reference""" ep_delete = container_ref # DELETE container: dr = requests.delete(ep_delete, headers=hdrs) gr = requests.get(container_ref, headers=hdrs) assert(404 == gr.status_code) LOG.info('...Deleted Container: {0}\n'.format(container_ref)) def demo_consumers_add(container_ref): """Add consumer to a container:""" ep_add = '/'.join([container_ref, 'consumers']) # POST metadata: pr = requests.post(ep_add, data=json.dumps(payload_consumer), headers=hdrs) pr_consumers = pr.json().get('consumers') assert pr_consumers assert(len(pr_consumers) == 1) LOG.info('...Consumer response: {0}'.format(pr_consumers)) def demo_consumers_delete(container_ref): """Delete consumer from a container:""" ep_delete = '/'.join([container_ref, 'consumers']) # POST metadata: pr = requests.delete( ep_delete, data=json.dumps(payload_consumer), headers=hdrs) pr_consumers = pr.json().get('consumers') assert(not pr_consumers) LOG.info('...Deleted Consumer from: {0}'.format(container_ref)) if __name__ == '__main__': demo_version() # Demonstrate secret actions: secret_ref = demo_store_secret_one_step_text() secret_ref2 = demo_store_secret_two_step_binary() demo_retrieve_secret_list() demo_delete_secret(secret_ref) demo_delete_secret(secret_ref2) # Demonstrate container and consumer actions: container_ref = demo_store_container_rsa(suffix='1') container_ref2 = demo_store_container_rsa(suffix='2') demo_retrieve_container_list() demo_consumers_add(container_ref) demo_consumers_add(container_ref) # Should be idempotent demo_consumers_delete(container_ref) demo_consumers_add(container_ref) demo_delete_container(container_ref) demo_delete_container(container_ref2) barbican-6.0.0/bin/barbican-api0000777000175100017510000000056613245511001016311 0ustar zuulzuul00000000000000#!/usr/bin/env python from paste import deploy from paste import httpserver def run(): prop_dir = 'etc/barbican' application = deploy.loadapp( 'config:{prop_dir}/barbican-api-paste.ini'. format(prop_dir=prop_dir), name='main', relative_to='/') httpserver.serve(application, host='0.0.0.0', port='9311') if __name__ == '__main__': run() barbican-6.0.0/LICENSE0000666000175100017510000002614713245511001014313 0ustar zuulzuul00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. 
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright 2013, Rackspace (http://www.rackspace.com) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. barbican-6.0.0/HACKING.rst0000666000175100017510000000550013245511001015072 0ustar zuulzuul00000000000000Barbican Style Commandments ============================ - Step 1: Read the OpenStack Style Commandments https://docs.openstack.org/hacking/latest/ - Step 2: Read on Barbican Specific Commandments ------------------------------- - [B310] Check for improper use of logging format arguments. - [B311] Use assertIsNone(...) instead of assertEqual(None, ...). - [B312] Use assertTrue(...) rather than assertEqual(True, ...). - [B314] str() and unicode() cannot be used on an exception. Remove or use six.text_type(). - [B317] 'oslo_' should be used instead of 'oslo.' - [B318] Must use a dict comprehension instead of a dict constructor with a sequence of key-value pairs. - [B319] Ensure to not use xrange(). - [B320] Do not use LOG.warn as it's deprecated. - [B321] Use assertIsNotNone(...) rather than assertNotEqual(None, ...) or assertIsNot(None, ...). 
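As a quick illustration of the assertion-style checks above, consider the following made-up snippet (not code from this tree)::

    # Flagged by B311, B312 and B321 respectively:
    self.assertEqual(None, result)
    self.assertEqual(True, flag)
    self.assertNotEqual(None, result)

    # Preferred forms:
    self.assertIsNone(result)
    self.assertTrue(flag)
    self.assertIsNotNone(result)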
Creating Unit Tests ------------------- For every new feature, unit tests should be created that both test and (implicitly) document the usage of said feature. If submitting a patch for a bug that had no unit test, a new passing unit test should be added. If a submitted bug fix does have a unit test, be sure to add a new one that fails without the patch and passes with the patch. Running Tests ------------- The testing system is based on a combination of tox and testr. If you just want to run the whole suite, run `tox` and all will be fine. However, if you'd like to dig in a bit more, you might want to learn some things about testr itself. A basic walkthrough for OpenStack can be found at http://wiki.openstack.org/testr OpenStack Trademark ------------------- OpenStack is a registered trademark of OpenStack, LLC, and uses the following capitalization: OpenStack Commit Messages --------------- Using a common format for commit messages will help keep our git history readable. Follow these guidelines: First, provide a brief summary (it is recommended to keep the commit title under 50 chars). The first line of the commit message should provide an accurate description of the change, not just a reference to a bug or blueprint. It must be followed by a single blank line. Following your brief summary, provide a more detailed description of the patch, manually wrapping the text at 72 characters. This description should provide enough detail that one does not have to refer to external resources to determine its high-level functionality. Once you use 'git review', two lines will be appended to the commit message: a blank line followed by a 'Change-Id'. This is important to correlate this commit with a specific review in Gerrit, and it should not be modified. For further information on constructing high quality commit messages, and how to split up commits into a series of changes, consult the project wiki: http://wiki.openstack.org/GitCommitMessages barbican-6.0.0/tox.ini0000666000175100017510000001030113245511001014602 0ustar zuulzuul00000000000000[tox] minversion = 2.0 envlist = py35,py27,pep8,docs skipsdist = True [testenv] usedevelop = True install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} -U {opts} {packages} deps = -r{toxinidir}/requirements.txt -r{toxinidir}/test-requirements.txt commands = oslo-config-generator --config-file etc/oslo-config-generator/barbican.conf --output-file etc/barbican/barbican.conf /usr/bin/find . 
-type f -name "*.py[c|o]" -delete rm -f .testrepository/times.dbm coverage erase python setup.py testr --coverage --testr-args='{posargs}' coverage report -m whitelist_externals = rm [testenv:cover] deps = {[testenv]deps} diff_cover commands = coverage erase python setup.py testr --coverage --testr-args='{posargs}' coverage xml diff-cover --fail-under 100 --compare-branch master coverage.xml [testenv:releasenotes] commands = sphinx-build -a -E -W -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html [testenv:pep8] sitepackages = False commands = flake8 {posargs} # Run security linter bandit -r barbican -x tests -n5 [testenv:genconfig] whitelist_externals = bash envdir = {toxworkdir}/pep8 commands = oslo-config-generator --config-file etc/oslo-config-generator/barbican.conf [testenv:venv] commands = {posargs} [testenv:debug] commands = oslo_debug_helper {posargs} [testenv:py3pep8] # This hack is in place to allow us to run py3 based flake8 # without installing barbican. basepython = python3 install_command = /bin/echo {packages} commands = pip install "hacking>=0.10.0,<0.11" flake8 barbican setup.py [testenv:docs] commands= rm -rf doc/build api-guide/build api-ref/build python setup.py build_sphinx sphinx-build -W -b html api-guide/source api-guide/build/html sphinx-build -W -b html api-ref/source api-ref/build/html whitelist_externals = rm [testenv:api-guide] # This environment is called from CI scripts to test and publish # the API Guide to developer.openstack.org. commands = sphinx-build -W -b html -d api-guide/build/doctrees api-guide/source api-guide/build/html [testenv:api-ref] # This environment is called from CI scripts to test and publish # the API Ref to developer.openstack.org. commands = sphinx-build -W -b html -d api-ref/build/doctrees api-ref/source api-ref/build/html [testenv:functional] # This tox env is purely to make local test development easier # Note: This requires local running instances of Barbican and Keystone deps = -r{toxinidir}/test-requirements.txt setenv = OS_TEST_PATH={toxinidir}/functionaltests commands = /usr/bin/find . -type f -name "*.py[c|o]" -delete /bin/bash {toxinidir}/functionaltests/pretty_tox.sh '{posargs}' passenv = KMIP_PLUGIN_ENABLED [testenv:py35functional] basepython = python3 deps = -r{toxinidir}/test-requirements.txt setenv = OS_TEST_PATH={toxinidir}/functionaltests commands = /usr/bin/find . -type f -name "*.py[c|o]" -delete /bin/bash {toxinidir}/functionaltests/pretty_tox.sh '{posargs}' passenv = KMIP_PLUGIN_ENABLED [testenv:cmd] # This tox env is purely to make local test development easier # Note: This requires local running instances of Barbican and Keystone deps = -r{toxinidir}/test-requirements.txt setenv = OS_TEST_PATH={toxinidir}/barbican/cmd/functionaltests commands = /usr/bin/find . -type f -name "*.py[c|o]" -delete /bin/bash {toxinidir}/functionaltests/pretty_tox.sh '{posargs}' [flake8] exclude = .git,.idea,.tox,bin,dist,debian,rpmbuild,tools,*.egg-info,*.eggs,contrib, *docs/target,*.egg,build [testenv:bandit] deps = -r{toxinidir}/test-requirements.txt commands = bandit -r barbican -x tests -n5 [testenv:bindep] # Do not install any requirements. We want this to be fast and work even if # system dependencies are missing, since it's used to tell you what system # dependencies are missing! This also means that bindep must be installed # separately, outside of the requirements files. 
deps = bindep commands = bindep test [testenv:genpolicy] envdir = {toxworkdir}/pep8 commands = oslopolicy-sample-generator --config-file=etc/oslo-config-generator/policy.conf [hacking] local-check-factory = barbican.hacking.checks.factory barbican-6.0.0/.mailmap0000666000175100017510000000065113245511001014717 0ustar zuulzuul00000000000000# Format is: # # John Wood Malini K. Bhandaru Malini K. Bhandaru Malini Bhandaru barbican-6.0.0/barbican/0000775000175100017510000000000013245511177015051 5ustar zuulzuul00000000000000barbican-6.0.0/barbican/api/0000775000175100017510000000000013245511177015622 5ustar zuulzuul00000000000000barbican-6.0.0/barbican/api/app.py0000666000175100017510000000644513245511001016751 0ustar zuulzuul00000000000000# Copyright (c) 2013-2015 Rackspace, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """ API application handler for Barbican """ import os from paste import deploy import pecan try: import newrelic.agent newrelic_loaded = True except ImportError: newrelic_loaded = False from oslo_log import log from barbican.api.controllers import versions from barbican.api import hooks from barbican.common import config from barbican.model import repositories from barbican import queue CONF = config.CONF if newrelic_loaded: newrelic.agent.initialize( os.environ.get('NEW_RELIC_CONFIG_FILE', '/etc/newrelic/newrelic.ini'), os.environ.get('NEW_RELIC_ENVIRONMENT') ) def build_wsgi_app(controller=None, transactional=False): """WSGI application creation helper :param controller: Overrides default application controller :param transactional: Adds transaction hook for all requests """ request_hooks = [hooks.JSONErrorHook()] if transactional: request_hooks.append(hooks.BarbicanTransactionHook()) if newrelic_loaded: request_hooks.insert(0, hooks.NewRelicHook()) # Create WSGI app wsgi_app = pecan.Pecan( controller or versions.AVAILABLE_VERSIONS[versions.DEFAULT_VERSION](), hooks=request_hooks, force_canonical=False ) # clear the session created in controller initialization repositories.clear() return wsgi_app def main_app(func): def _wrapper(global_config, **local_conf): # Queuing initialization queue.init(CONF, is_server_side=False) # Configure oslo logging and configuration services. log.setup(CONF, 'barbican') config.setup_remote_pydev_debug() # Initializing the database engine and session factory before the app # starts ensures we don't lose requests due to lazy initialization of # db connections.
repositories.setup_database_engine_and_factory( initialize_secret_stores=True ) wsgi_app = func(global_config, **local_conf) if newrelic_loaded: wsgi_app = newrelic.agent.WSGIApplicationWrapper(wsgi_app) LOG = log.getLogger(__name__) LOG.info('Barbican app created and initialized') return wsgi_app return _wrapper @main_app def create_main_app(global_config, **local_conf): """uWSGI factory method for the Barbican-API application.""" # Setup app with transactional hook enabled return build_wsgi_app(versions.V1Controller(), transactional=True) def create_version_app(global_config, **local_conf): wsgi_app = pecan.make_app(versions.VersionsController()) return wsgi_app def get_api_wsgi_script(): conf = '/etc/barbican/barbican-api-paste.ini' application = deploy.loadapp('config:%s' % conf) return application barbican-6.0.0/barbican/api/hooks.py0000666000175100017510000000334713245511001017312 0ustar zuulzuul00000000000000# Copyright (c) 2015 Rackspace, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import pecan import webob from oslo_serialization import jsonutils try: import newrelic.agent newrelic_loaded = True except ImportError: newrelic_loaded = False from barbican.model import repositories class JSONErrorHook(pecan.hooks.PecanHook): def on_error(self, state, exc): if isinstance(exc, webob.exc.HTTPError): exc.body = jsonutils.dump_as_bytes({ 'code': exc.status_int, 'title': exc.title, 'description': exc.detail }) state.response.content_type = "application/json" return exc.body class BarbicanTransactionHook(pecan.hooks.TransactionHook): """Custom hook for Barbican transactions.""" def __init__(self): super(BarbicanTransactionHook, self).__init__( start=repositories.start, start_ro=repositories.start_read_only, commit=repositories.commit, rollback=repositories.rollback, clear=repositories.clear ) class NewRelicHook(pecan.hooks.PecanHook): def on_error(self, state, exc): if newrelic_loaded: newrelic.agent.record_exception() barbican-6.0.0/barbican/api/__init__.py0000666000175100017510000001073513245511001017725 0ustar zuulzuul00000000000000# Copyright (c) 2013-2015 Rackspace, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
""" API handler for Barbican """ import pkgutil import six from oslo_policy import policy from oslo_serialization import jsonutils as json import pecan from barbican.common import config from barbican.common import exception from barbican.common import utils from barbican import i18n as u LOG = utils.getLogger(__name__) CONF = config.CONF class ApiResource(object): """Base class for API resources.""" pass def load_body(req, resp=None, validator=None): """Helper function for loading an HTTP request body from JSON. This body is placed into into a Python dictionary. :param req: The HTTP request instance to load the body from. :param resp: The HTTP response instance. :param validator: The JSON validator to enforce. :return: A dict of values from the JSON request. """ try: body = req.body_file.read(CONF.max_allowed_request_size_in_bytes) req.body_file.seek(0) except IOError: LOG.exception("Problem reading request JSON stream.") pecan.abort(500, u._('Read Error')) try: # TODO(jwood): Investigate how to get UTF8 format via openstack # jsonutils: # parsed_body = json.loads(raw_json, 'utf-8') parsed_body = json.loads(body) strip_whitespace(parsed_body) except ValueError: LOG.exception("Problem loading request JSON.") pecan.abort(400, u._('Malformed JSON')) if validator: try: parsed_body = validator.validate(parsed_body) except exception.BarbicanHTTPException as e: LOG.exception(six.text_type(e)) pecan.abort(e.status_code, e.client_message) return parsed_body def generate_safe_exception_message(operation_name, excep): """Generates an exception message that is 'safe' for clients to consume. A 'safe' message is one that doesn't contain sensitive information that could be used for (say) cryptographic attacks on Barbican. That generally means that em.CryptoXxxx should be captured here and with a simple message created on behalf of them. :param operation_name: Name of attempted operation, with a 'Verb noun' format (e.g. 'Create Secret). :param excep: The Exception instance that halted the operation. :return: (status, message) where 'status' is one of the webob.exc.HTTP_xxx codes, and 'message' is the sanitized message associated with the error. """ message = None reason = None status = 500 try: raise excep except policy.PolicyNotAuthorized: message = u._( '{operation} attempt not allowed - ' 'please review your ' 'user/project privileges').format(operation=operation_name) status = 403 except exception.BarbicanHTTPException as http_exception: reason = http_exception.client_message status = http_exception.status_code except Exception: message = u._('{operation} failure seen - please contact site ' 'administrator.').format(operation=operation_name) if reason: message = u._('{operation} issue seen - {reason}.').format( operation=operation_name, reason=reason) return status, message @pkgutil.simplegeneric def get_items(obj): """This is used to get items from either a list or a dictionary. 
A while-False generator is needed here so that scalar objects yield no items. """ while False: yield None @get_items.register(dict) def _json_object(obj): return obj.items() @get_items.register(list) def _json_array(obj): return enumerate(obj) def strip_whitespace(json_data): """Recursively trim values from the object passed in using get_items().""" for key, value in get_items(json_data): if hasattr(value, 'strip'): json_data[key] = value.strip() else: strip_whitespace(value) barbican-6.0.0/barbican/api/middleware/0000775000175100017510000000000013245511177017737 5ustar zuulzuul00000000000000barbican-6.0.0/barbican/api/middleware/simple.py0000666000175100017510000000211213245511001021566 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ A filter middleware that just outputs to logs, for instructive/sample purposes only. """ from barbican.api import middleware from barbican.common import utils LOG = utils.getLogger(__name__) class SimpleFilter(middleware.Middleware): def __init__(self, app): super(SimpleFilter, self).__init__(app) def process_request(self, req): """Just announce we have been called.""" LOG.debug("Calling SimpleFilter") return None barbican-6.0.0/barbican/api/middleware/__init__.py0000666000175100017510000000564213245511001022043 0ustar zuulzuul00000000000000# Copyright (c) 2013-2015 Rackspace, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """ Barbican middleware modules. """ import sys import webob.dec from barbican.common import utils LOG = utils.getLogger(__name__) class Middleware(object): """Base WSGI middleware wrapper These classes require an application to be initialized that will be called next. By default the middleware will simply call its wrapped app, or you can override __call__ to customize its behavior. """ def __init__(self, application): self.application = application @classmethod def factory(cls, global_conf, **local_conf): def filter(app): return cls(app) return filter def process_request(self, req): """Called on each request. If this returns None, the next application down the stack will be executed. If it returns a response then that response will be returned and execution will stop here.
""" return None def process_response(self, response): """Do whatever you'd like to the response.""" return response @webob.dec.wsgify def __call__(self, req): response = self.process_request(req) if response: return response response = req.get_response(self.application) response.request = req return self.process_response(response) # Brought over from an OpenStack project class Debug(Middleware): """Debug helper class This class can be inserted into any WSGI application chain to get information about the request and response. """ @webob.dec.wsgify def __call__(self, req): LOG.debug(("*" * 40) + " REQUEST ENVIRON") for key, value in req.environ.items(): LOG.debug('%s=%s', key, value) LOG.debug(' ') resp = req.get_response(self.application) LOG.debug(("*" * 40) + " RESPONSE HEADERS") for (key, value) in resp.headers.items(): LOG.debug('%s=%s', key, value) LOG.debug(' ') resp.app_iter = self.print_generator(resp.app_iter) return resp @staticmethod def print_generator(app_iter): """Iterator that prints the contents of a wrapper string iterator.""" LOG.debug(("*" * 40) + " BODY") for part in app_iter: sys.stdout.write(part) sys.stdout.flush() yield part LOG.debug(' ') barbican-6.0.0/barbican/api/middleware/context.py0000666000175100017510000001325613245511001021770 0ustar zuulzuul00000000000000# Copyright 2011-2012 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import webob.exc from barbican.api import middleware as mw from barbican.common import config from barbican.common import utils import barbican.context from barbican import i18n as u LOG = utils.getLogger(__name__) CONF = config.CONF class BaseContextMiddleware(mw.Middleware): def process_request(self, req): request_id = req.headers.get('x-openstack-request-id') if not request_id: request_id = 'req-' + utils.generate_uuid() setattr(req, 'request_id', request_id) def process_response(self, resp): resp.headers['x-openstack-request-id'] = resp.request.request_id LOG.info('Processed request: %(status)s - %(method)s %(url)s', {"status": resp.status, "method": resp.request.method, "url": resp.request.url}) return resp class ContextMiddleware(BaseContextMiddleware): def __init__(self, app): super(ContextMiddleware, self).__init__(app) def process_request(self, req): """Convert authentication information into a request context Generate a barbican.context.RequestContext object from the available authentication headers and store on the 'context' attribute of the req object. 
:param req: wsgi request object that will be given the context object :raises webob.exc.HTTPUnauthorized: when value of the X-Identity-Status header is not 'Confirmed' and anonymous access is disallowed """ super(ContextMiddleware, self).process_request(req) if req.headers.get('X-Identity-Status') == 'Confirmed': req.context = self._get_authenticated_context(req) elif CONF.allow_anonymous_access: req.context = self._get_anonymous_context() LOG.debug("==== Inserted barbican unauth " "request context: %s ====", req.context.to_dict()) else: raise webob.exc.HTTPUnauthorized() # Ensure that down wind mw.Middleware/app can see this context. req.environ['barbican.context'] = req.context def _get_anonymous_context(self): kwargs = { 'user': None, 'tenant': None, 'is_admin': False, 'read_only': True, } return barbican.context.RequestContext(**kwargs) def _get_authenticated_context(self, req): # NOTE(bcwaldon): X-Roles is a csv string, but we need to parse # it into a list to be useful roles_header = req.headers.get('X-Roles', '') roles = [r.strip().lower() for r in roles_header.split(',')] # NOTE(bcwaldon): This header is deprecated in favor of X-Auth-Token # NOTE(mkbhanda): keeping this just-in-case for swift deprecated_token = req.headers.get('X-Storage-Token') kwargs = { 'auth_token': req.headers.get('X-Auth-Token', deprecated_token), 'user': req.headers.get('X-User-Id'), 'project': req.headers.get('X-Project-Id'), 'roles': roles, 'is_admin': CONF.admin_role.strip().lower() in roles, 'request_id': req.request_id } if req.headers.get('X-Domain-Id'): kwargs['domain'] = req.headers['X-Domain-Id'] if req.headers.get('X-User-Domain-Id'): kwargs['user_domain'] = req.headers['X-User-Domain-Id'] if req.headers.get('X-Project-Domain-Id'): kwargs['project_domain'] = req.headers['X-Project-Domain-Id'] return barbican.context.RequestContext(**kwargs) class UnauthenticatedContextMiddleware(BaseContextMiddleware): def _get_project_id_from_header(self, req): project_id = req.headers.get('X-Project-Id') if not project_id: accept_header = req.headers.get('Accept') if not accept_header: req.headers['Accept'] = 'text/plain' raise webob.exc.HTTPBadRequest(detail=u._('Missing X-Project-Id')) return project_id def process_request(self, req): """Create a context without an authorized user.""" super(UnauthenticatedContextMiddleware, self).process_request(req) project_id = self._get_project_id_from_header(req) config_admin_role = CONF.admin_role.strip().lower() roles_header = req.headers.get('X-Roles', '') roles = [r.strip().lower() for r in roles_header.split(',') if r] # If a role wasn't specified we default to admin if not roles: roles = [config_admin_role] kwargs = { 'user': req.headers.get('X-User-Id'), 'domain': req.headers.get('X-Domain-Id'), 'user_domain': req.headers.get('X-User-Domain-Id'), 'project_domain': req.headers.get('X-Project-Domain-Id'), 'project': project_id, 'roles': roles, 'is_admin': config_admin_role in roles, 'request_id': req.request_id } context = barbican.context.RequestContext(**kwargs) req.environ['barbican.context'] = context barbican-6.0.0/barbican/api/controllers/0000775000175100017510000000000013245511177020170 5ustar zuulzuul00000000000000barbican-6.0.0/barbican/api/controllers/quotas.py0000666000175100017510000001173613245511001022052 0ustar zuulzuul00000000000000# Copyright (c) 2015 Cisco Systems # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import pecan from barbican import api from barbican.api import controllers from barbican.common import exception from barbican.common import quota from barbican.common import resources as res from barbican.common import utils from barbican.common import validators from barbican import i18n as u LOG = utils.getLogger(__name__) def _project_quotas_not_found(): """Throw exception indicating project quotas not found.""" pecan.abort(404, u._('Not Found. Sorry but your project quotas are in ' 'another castle.')) class QuotasController(controllers.ACLMixin): """Handles quota retrieval requests.""" def __init__(self): LOG.debug('=== Creating QuotasController ===') self.quota_driver = quota.QuotaDriver() @pecan.expose(generic=True) def index(self, **kwargs): pecan.abort(405) # HTTP 405 Method Not Allowed as default @index.when(method='GET', template='json') @controllers.handle_exceptions(u._('Quotas')) @controllers.enforce_rbac('quotas:get') def on_get(self, external_project_id, **kwargs): LOG.debug('=== QuotasController GET ===') # make sure project exists res.get_or_create_project(external_project_id) resp = self.quota_driver.get_quotas(external_project_id) return resp class ProjectQuotasController(controllers.ACLMixin): """Handles project quota requests.""" def __init__(self, project_id): LOG.debug('=== Creating ProjectQuotasController ===') self.passed_project_id = project_id self.validator = validators.ProjectQuotaValidator() self.quota_driver = quota.QuotaDriver() @pecan.expose(generic=True) def index(self, **kwargs): pecan.abort(405) # HTTP 405 Method Not Allowed as default @index.when(method='GET', template='json') @controllers.handle_exceptions(u._('Project Quotas')) @controllers.enforce_rbac('project_quotas:get') def on_get(self, external_project_id, **kwargs): LOG.debug('=== ProjectQuotasController GET ===') resp = self.quota_driver.get_project_quotas(self.passed_project_id) if resp: return resp else: _project_quotas_not_found() @index.when(method='PUT', template='json') @controllers.handle_exceptions(u._('Project Quotas')) @controllers.enforce_rbac('project_quotas:put') @controllers.enforce_content_types(['application/json']) def on_put(self, external_project_id, **kwargs): LOG.debug('=== ProjectQuotasController PUT ===') if not pecan.request.body: raise exception.NoDataToProcess() api.load_body(pecan.request, validator=self.validator) self.quota_driver.set_project_quotas(self.passed_project_id, kwargs['project_quotas']) LOG.info('Put Project Quotas') pecan.response.status = 204 @index.when(method='DELETE', template='json') @utils.allow_all_content_types @controllers.handle_exceptions(u._('Project Quotas')) @controllers.enforce_rbac('project_quotas:delete') def on_delete(self, external_project_id, **kwargs): LOG.debug('=== ProjectQuotasController DELETE ===') try: self.quota_driver.delete_project_quotas(self.passed_project_id) except exception.NotFound: LOG.info('Delete Project Quotas - Project not found') _project_quotas_not_found() else: LOG.info('Delete Project Quotas') pecan.response.status = 204 class ProjectsQuotasController(controllers.ACLMixin): """Handles projects quota 
retrieval requests.""" def __init__(self): LOG.debug('=== Creating ProjectsQuotaController ===') self.quota_driver = quota.QuotaDriver() @pecan.expose() def _lookup(self, project_id, *remainder): return ProjectQuotasController(project_id), remainder @pecan.expose(generic=True) def index(self, **kwargs): pecan.abort(405) # HTTP 405 Method Not Allowed as default @index.when(method='GET', template='json') @controllers.handle_exceptions(u._('Project Quotas')) @controllers.enforce_rbac('project_quotas:get') def on_get(self, external_project_id, **kwargs): resp = self.quota_driver.get_project_quotas_list( offset_arg=kwargs.get('offset', 0), limit_arg=kwargs.get('limit', None) ) return resp barbican-6.0.0/barbican/api/controllers/transportkeys.py0000666000175100017510000001373113245511001023463 0ustar zuulzuul00000000000000# Copyright (c) 2014 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import pecan from six.moves.urllib import parse from barbican import api from barbican.api import controllers from barbican.common import exception from barbican.common import hrefs from barbican.common import utils from barbican.common import validators from barbican import i18n as u from barbican.model import models from barbican.model import repositories as repo LOG = utils.getLogger(__name__) def _transport_key_not_found(): """Throw exception indicating transport key not found.""" pecan.abort(404, u._('Not Found. Transport Key not found.')) def _invalid_transport_key_id(): """Throw exception indicating transport key id is invalid.""" pecan.abort(404, u._('Not Found. 
Provided transport key id is invalid.')) class TransportKeyController(controllers.ACLMixin): """Handles transport key retrieval requests.""" def __init__(self, transport_key_id, transport_key_repo=None): LOG.debug('=== Creating TransportKeyController ===') self.transport_key_id = transport_key_id self.repo = transport_key_repo or repo.TransportKeyRepo() @pecan.expose(generic=True) def index(self, external_project_id, **kwargs): pecan.abort(405) # HTTP 405 Method Not Allowed as default @index.when(method='GET') @controllers.handle_exceptions(u._('Transport Key retrieval')) @controllers.enforce_rbac('transport_key:get') def on_get(self, external_project_id): LOG.debug("== Getting transport key for %s", external_project_id) transport_key = self.repo.get(entity_id=self.transport_key_id) if not transport_key: _transport_key_not_found() pecan.override_template('json', 'application/json') return transport_key @index.when(method='DELETE') @controllers.handle_exceptions(u._('Transport Key deletion')) @controllers.enforce_rbac('transport_key:delete') def on_delete(self, external_project_id, **kwargs): LOG.debug("== Deleting transport key ===") try: self.repo.delete_entity_by_id( entity_id=self.transport_key_id, external_project_id=external_project_id) # TODO(alee) response should be 204 on success # pecan.response.status = 204 except exception.NotFound: LOG.exception('Problem deleting transport_key') _transport_key_not_found() class TransportKeysController(controllers.ACLMixin): """Handles transport key list requests.""" def __init__(self, transport_key_repo=None): LOG.debug('Creating TransportKeyController') self.repo = transport_key_repo or repo.TransportKeyRepo() self.validator = validators.NewTransportKeyValidator() @pecan.expose() def _lookup(self, transport_key_id, *remainder): if not utils.validate_id_is_uuid(transport_key_id): _invalid_transport_key_id() return TransportKeyController(transport_key_id, self.repo), remainder @pecan.expose(generic=True) def index(self, external_project_id, **kwargs): pecan.abort(405) # HTTP 405 Method Not Allowed as default @index.when(method='GET', template='json') @controllers.handle_exceptions(u._('Transport Key(s) retrieval')) @controllers.enforce_rbac('transport_keys:get') def on_get(self, external_project_id, **kw): LOG.debug('Start transport_keys on_get') plugin_name = kw.get('plugin_name', None) if plugin_name is not None: plugin_name = parse.unquote_plus(plugin_name) result = self.repo.get_by_create_date( plugin_name=plugin_name, offset_arg=kw.get('offset', 0), limit_arg=kw.get('limit', None), suppress_exception=True ) transport_keys, offset, limit, total = result if not transport_keys: transport_keys_resp_overall = {'transport_keys': [], 'total': total} else: transport_keys_resp = [ hrefs.convert_transport_key_to_href(s.id) for s in transport_keys ] transport_keys_resp_overall = hrefs.add_nav_hrefs( 'transport_keys', offset, limit, total, {'transport_keys': transport_keys_resp} ) transport_keys_resp_overall.update({'total': total}) return transport_keys_resp_overall @index.when(method='POST', template='json') @controllers.handle_exceptions(u._('Transport Key Creation')) @controllers.enforce_rbac('transport_keys:post') @controllers.enforce_content_types(['application/json']) def on_post(self, external_project_id, **kwargs): LOG.debug('Start transport_keys on_post') # TODO(alee) POST should determine the plugin name and call the # relevant get_transport_key() call. We will implement this once # we figure out how the plugins will be enumerated. 
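# Illustration only: based on the data.get() calls below, the validated # request body is expected to look roughly like # {"plugin_name": "<plugin name>", "transport_key": "<key material>"} # (an assumed shape -- NewTransportKeyValidator is the authority here).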
data = api.load_body(pecan.request, validator=self.validator) new_key = models.TransportKey(data.get('plugin_name'), data.get('transport_key')) self.repo.create_from(new_key) url = hrefs.convert_transport_key_to_href(new_key.id) LOG.debug('URI to transport key is %s', url) pecan.response.status = 201 pecan.response.headers['Location'] = url return {'transport_key_ref': url} barbican-6.0.0/barbican/api/controllers/acls.py0000666000175100017510000003660213245511001021457 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import pecan import six from barbican import api from barbican.api import controllers from barbican.common import hrefs from barbican.common import utils from barbican.common import validators from barbican import i18n as u from barbican.model import models from barbican.model import repositories as repo LOG = utils.getLogger(__name__) def _convert_acl_to_response_format(acl, acls_dict): fields = acl.to_dict_fields() operation = fields['operation'] acl_data = {} # dict for each acl operation data acl_data['project-access'] = fields['project_access'] acl_data['users'] = fields.get('users', []) acl_data['created'] = fields['created'] acl_data['updated'] = fields['updated'] acls_dict[operation] = acl_data DEFAULT_ACL = {'read': {'project-access': True}} class SecretACLsController(controllers.ACLMixin): """Handles SecretACL requests by a given secret id.""" def __init__(self, secret): self.secret = secret self.secret_project_id = self.secret.project.external_id self.acl_repo = repo.get_secret_acl_repository() self.validator = validators.ACLValidator() def get_acl_tuple(self, req, **kwargs): d = {'project_id': self.secret_project_id, 'creator_id': self.secret.creator_id} return 'secret', d @pecan.expose(generic=True) def index(self, **kwargs): pecan.abort(405) # HTTP 405 Method Not Allowed as default @index.when(method='GET', template='json') @controllers.handle_exceptions(u._('SecretACL(s) retrieval')) @controllers.enforce_rbac('secret_acls:get') def on_get(self, external_project_id, **kw): LOG.debug('Start secret ACL on_get ' 'for secret-ID %s:', self.secret.id) return self._return_acl_list_response(self.secret.id) @index.when(method='PATCH', template='json') @controllers.handle_exceptions(u._('SecretACL(s) Update')) @controllers.enforce_rbac('secret_acls:put_patch') @controllers.enforce_content_types(['application/json']) def on_patch(self, external_project_id, **kwargs): """Handles update of existing secret acl requests. At least one secret ACL needs to exist for update to proceed. In update, multiple operation ACL payload can be specified as mentioned in sample below. A specific ACL can be updated by its own id via SecretACLController patch request. 
{ "read":{ "users":[ "5ecb18f341894e94baca9e8c7b6a824a", "20b63d71f90848cf827ee48074f213b7", "c7753f8da8dc4fbea75730ab0b6e0ef4" ] }, "write":{ "users":[ "5ecb18f341894e94baca9e8c7b6a824a" ], "project-access":true } } """ data = api.load_body(pecan.request, validator=self.validator) LOG.debug('Start on_patch...%s', data) existing_acls_map = {acl.operation: acl for acl in self.secret.secret_acls} for operation in six.moves.filter(lambda x: data.get(x), validators.ACL_OPERATIONS): project_access = data[operation].get('project-access') user_ids = data[operation].get('users') s_acl = None if operation in existing_acls_map: # update if matching acl exists s_acl = existing_acls_map[operation] if project_access is not None: s_acl.project_access = project_access else: s_acl = models.SecretACL(self.secret.id, operation=operation, project_access=project_access) self.acl_repo.create_or_replace_from(self.secret, secret_acl=s_acl, user_ids=user_ids) acl_ref = '{0}/acl'.format( hrefs.convert_secret_to_href(self.secret.id)) return {'acl_ref': acl_ref} @index.when(method='PUT', template='json') @controllers.handle_exceptions(u._('SecretACL(s) Update')) @controllers.enforce_rbac('secret_acls:put_patch') @controllers.enforce_content_types(['application/json']) def on_put(self, external_project_id, **kwargs): """Handles update of existing secret acl requests. Replaces existing secret ACL(s) with input ACL(s) data. Existing ACL operation not specified in input are removed as part of update. For missing project-access in ACL, true is used as default. In update, multiple operation ACL payload can be specified as mentioned in sample below. A specific ACL can be updated by its own id via SecretACLController patch request. { "read":{ "users":[ "5ecb18f341894e94baca9e8c7b6a824a", "20b63d71f90848cf827ee48074f213b7", "c7753f8da8dc4fbea75730ab0b6e0ef4" ] }, "write":{ "users":[ "5ecb18f341894e94baca9e8c7b6a824a" ], "project-access":false } } Every secret, by default, has an implicit ACL in case client has not defined an explicit ACL. That default ACL definition, DEFAULT_ACL, signifies that a secret by default has project based access i.e. client with necessary roles on secret project can access the secret. That's why when ACL is added to a secret, it always returns 200 (and not 201) indicating existence of implicit ACL on a secret. """ data = api.load_body(pecan.request, validator=self.validator) LOG.debug('Start on_put...%s', data) existing_acls_map = {acl.operation: acl for acl in self.secret.secret_acls} for operation in six.moves.filter(lambda x: data.get(x), validators.ACL_OPERATIONS): project_access = data[operation].get('project-access', True) user_ids = data[operation].get('users', []) s_acl = None if operation in existing_acls_map: # update if matching acl exists s_acl = existing_acls_map.pop(operation) s_acl.project_access = project_access else: s_acl = models.SecretACL(self.secret.id, operation=operation, project_access=project_access) self.acl_repo.create_or_replace_from(self.secret, secret_acl=s_acl, user_ids=user_ids) # delete remaining existing acls as they are not present in input. 
for acl in existing_acls_map.values(): self.acl_repo.delete_entity_by_id(entity_id=acl.id, external_project_id=None) acl_ref = '{0}/acl'.format( hrefs.convert_secret_to_href(self.secret.id)) return {'acl_ref': acl_ref} @index.when(method='DELETE', template='json') @controllers.handle_exceptions(u._('SecretACL(s) deletion')) @controllers.enforce_rbac('secret_acls:delete') def on_delete(self, external_project_id, **kwargs): count = self.acl_repo.get_count(self.secret.id) if count > 0: self.acl_repo.delete_acls_for_secret(self.secret) def _return_acl_list_response(self, secret_id): result = self.acl_repo.get_by_secret_id(secret_id) acls_data = {} if result: for acl in result: _convert_acl_to_response_format(acl, acls_data) if not acls_data: acls_data = DEFAULT_ACL.copy() return acls_data class ContainerACLsController(controllers.ACLMixin): """Handles ContainerACL requests by a given container id.""" def __init__(self, container): self.container = container self.container_id = container.id self.acl_repo = repo.get_container_acl_repository() self.container_repo = repo.get_container_repository() self.validator = validators.ACLValidator() self.container_project_id = container.project.external_id def get_acl_tuple(self, req, **kwargs): d = {'project_id': self.container_project_id, 'creator_id': self.container.creator_id} return 'container', d @pecan.expose(generic=True) def index(self, **kwargs): pecan.abort(405) # HTTP 405 Method Not Allowed as default @index.when(method='GET', template='json') @controllers.handle_exceptions(u._('ContainerACL(s) retrieval')) @controllers.enforce_rbac('container_acls:get') def on_get(self, external_project_id, **kw): LOG.debug('Start container ACL on_get ' 'for container-ID %s:', self.container_id) return self._return_acl_list_response(self.container.id) @index.when(method='PATCH', template='json') @controllers.handle_exceptions(u._('ContainerACL(s) Update')) @controllers.enforce_rbac('container_acls:put_patch') @controllers.enforce_content_types(['application/json']) def on_patch(self, external_project_id, **kwargs): """Handles update of existing container acl requests. At least one container ACL needs to exist for update to proceed. In update, multiple operation ACL payload can be specified as mentioned in sample below. A specific ACL can be updated by its own id via ContainerACLController patch request. 
{ "read":{ "users":[ "5ecb18f341894e94baca9e8c7b6a824a", "20b63d71f90848cf827ee48074f213b7", "c7753f8da8dc4fbea75730ab0b6e0ef4" ] }, "write":{ "users":[ "5ecb18f341894e94baca9e8c7b6a824a" ], "project-access":false } } """ data = api.load_body(pecan.request, validator=self.validator) LOG.debug('Start ContainerACLsController on_patch...%s', data) existing_acls_map = {acl.operation: acl for acl in self.container.container_acls} for operation in six.moves.filter(lambda x: data.get(x), validators.ACL_OPERATIONS): project_access = data[operation].get('project-access') user_ids = data[operation].get('users') if operation in existing_acls_map: # update if matching acl exists c_acl = existing_acls_map[operation] if project_access is not None: c_acl.project_access = project_access else: c_acl = models.ContainerACL(self.container.id, operation=operation, project_access=project_access) self.acl_repo.create_or_replace_from(self.container, container_acl=c_acl, user_ids=user_ids) acl_ref = '{0}/acl'.format( hrefs.convert_container_to_href(self.container.id)) return {'acl_ref': acl_ref} @index.when(method='PUT', template='json') @controllers.handle_exceptions(u._('ContainerACL(s) Update')) @controllers.enforce_rbac('container_acls:put_patch') @controllers.enforce_content_types(['application/json']) def on_put(self, external_project_id, **kwargs): """Handles update of existing container acl requests. Replaces existing container ACL(s) with input ACL(s) data. Existing ACL operation not specified in input are removed as part of update. For missing project-access in ACL, true is used as default. In update, multiple operation ACL payload can be specified as mentioned in sample below. A specific ACL can be updated by its own id via ContainerACLController patch request. { "read":{ "users":[ "5ecb18f341894e94baca9e8c7b6a824a", "20b63d71f90848cf827ee48074f213b7", "c7753f8da8dc4fbea75730ab0b6e0ef4" ] }, "write":{ "users":[ "5ecb18f341894e94baca9e8c7b6a824a" ], "project-access":false } } Every container, by default, has an implicit ACL in case client has not defined an explicit ACL. That default ACL definition, DEFAULT_ACL, signifies that a container by default has project based access i.e. client with necessary roles on container project can access the container. That's why when ACL is added to a container, it always returns 200 (and not 201) indicating existence of implicit ACL on a container. """ data = api.load_body(pecan.request, validator=self.validator) LOG.debug('Start ContainerACLsController on_put...%s', data) existing_acls_map = {acl.operation: acl for acl in self.container.container_acls} for operation in six.moves.filter(lambda x: data.get(x), validators.ACL_OPERATIONS): project_access = data[operation].get('project-access', True) user_ids = data[operation].get('users', []) if operation in existing_acls_map: # update if matching acl exists c_acl = existing_acls_map.pop(operation) c_acl.project_access = project_access else: c_acl = models.ContainerACL(self.container.id, operation=operation, project_access=project_access) self.acl_repo.create_or_replace_from(self.container, container_acl=c_acl, user_ids=user_ids) # delete remaining existing acls as they are not present in input. 
for acl in existing_acls_map.values(): self.acl_repo.delete_entity_by_id(entity_id=acl.id, external_project_id=None) acl_ref = '{0}/acl'.format( hrefs.convert_container_to_href(self.container.id)) return {'acl_ref': acl_ref} @index.when(method='DELETE', template='json') @controllers.handle_exceptions(u._('ContainerACL(s) deletion')) @controllers.enforce_rbac('container_acls:delete') def on_delete(self, external_project_id, **kwargs): count = self.acl_repo.get_count(self.container_id) if count > 0: self.acl_repo.delete_acls_for_container(self.container) def _return_acl_list_response(self, container_id): result = self.acl_repo.get_by_container_id(container_id) acls_data = {} if result: for acl in result: _convert_acl_to_response_format(acl, acls_data) if not acls_data: acls_data = DEFAULT_ACL.copy() return acls_data barbican-6.0.0/barbican/api/controllers/containers.py0000666000175100017510000003060513245511001022677 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import pecan from barbican import api from barbican.api import controllers from barbican.api.controllers import acls from barbican.api.controllers import consumers from barbican.common import exception from barbican.common import hrefs from barbican.common import quota from barbican.common import resources as res from barbican.common import utils from barbican.common import validators from barbican import i18n as u from barbican.model import models from barbican.model import repositories as repo LOG = utils.getLogger(__name__) CONTAINER_GET = 'container:get' def container_not_found(): """Throw exception indicating container not found.""" pecan.abort(404, u._('Not Found. Sorry but your container is in ' 'another castle.')) def invalid_container_id(): """Throw exception indicating container id is invalid.""" pecan.abort(404, u._('Not Found. 
Provided container id is invalid.')) class ContainerController(controllers.ACLMixin): """Handles Container entity retrieval and deletion requests.""" def __init__(self, container): self.container = container self.container_id = container.id self.consumer_repo = repo.get_container_consumer_repository() self.container_repo = repo.get_container_repository() self.validator = validators.ContainerValidator() self.consumers = consumers.ContainerConsumersController( self.container_id) self.acl = acls.ContainerACLsController(self.container) def get_acl_tuple(self, req, **kwargs): d = self.get_acl_dict_for_user(req, self.container.container_acls) d['project_id'] = self.container.project.external_id d['creator_id'] = self.container.creator_id return 'container', d @pecan.expose(generic=True, template='json') def index(self, **kwargs): pecan.abort(405) # HTTP 405 Method Not Allowed as default @index.when(method='GET', template='json') @controllers.handle_exceptions(u._('Container retrieval')) @controllers.enforce_rbac(CONTAINER_GET) def on_get(self, external_project_id): dict_fields = self.container.to_dict_fields() for secret_ref in dict_fields['secret_refs']: hrefs.convert_to_hrefs(secret_ref) LOG.info('Retrieved container for project: %s', external_project_id) return hrefs.convert_to_hrefs( hrefs.convert_to_hrefs(dict_fields) ) @index.when(method='DELETE') @utils.allow_all_content_types @controllers.handle_exceptions(u._('Container deletion')) @controllers.enforce_rbac('container:delete') def on_delete(self, external_project_id, **kwargs): container_consumers = self.consumer_repo.get_by_container_id( self.container_id, suppress_exception=True ) try: self.container_repo.delete_entity_by_id( entity_id=self.container_id, external_project_id=external_project_id ) except exception.NotFound: LOG.exception('Problem deleting container') container_not_found() LOG.info('Deleted container for project: %s', external_project_id) for consumer in container_consumers[0]: try: self.consumer_repo.delete_entity_by_id( consumer.id, external_project_id) except exception.NotFound: # nosec pass class ContainersController(controllers.ACLMixin): """Handles Container creation requests.""" def __init__(self): self.consumer_repo = repo.get_container_consumer_repository() self.container_repo = repo.get_container_repository() self.secret_repo = repo.get_secret_repository() self.validator = validators.ContainerValidator() self.quota_enforcer = quota.QuotaEnforcer('containers', self.container_repo) @pecan.expose() def _lookup(self, container_id, *remainder): if not utils.validate_id_is_uuid(container_id): invalid_container_id() container = self.container_repo.get_container_by_id( entity_id=container_id, suppress_exception=True) if not container: container_not_found() if len(remainder) > 0 and remainder[0] == 'secrets': return ContainersSecretsController(container), () return ContainerController(container), remainder @pecan.expose(generic=True, template='json') def index(self, **kwargs): pecan.abort(405) # HTTP 405 Method Not Allowed as default @index.when(method='GET', template='json') @controllers.handle_exceptions(u._('Containers(s) retrieval')) @controllers.enforce_rbac('containers:get') def on_get(self, project_id, **kw): LOG.debug('Start containers on_get for project-ID %s:', project_id) result = self.container_repo.get_by_create_date( project_id, offset_arg=kw.get('offset', 0), limit_arg=kw.get('limit', None), name_arg=kw.get('name', None), suppress_exception=True ) containers, offset, limit, total = result if not 
containers: resp_ctrs_overall = {'containers': [], 'total': total} else: resp_ctrs = [ hrefs.convert_to_hrefs(c.to_dict_fields()) for c in containers ] for ctr in resp_ctrs: for secret_ref in ctr.get('secret_refs', []): hrefs.convert_to_hrefs(secret_ref) resp_ctrs_overall = hrefs.add_nav_hrefs( 'containers', offset, limit, total, {'containers': resp_ctrs} ) resp_ctrs_overall.update({'total': total}) LOG.info('Retrieved container list for project: %s', project_id) return resp_ctrs_overall @index.when(method='POST', template='json') @controllers.handle_exceptions(u._('Container creation')) @controllers.enforce_rbac('containers:post') @controllers.enforce_content_types(['application/json']) def on_post(self, external_project_id, **kwargs): project = res.get_or_create_project(external_project_id) data = api.load_body(pecan.request, validator=self.validator) ctxt = controllers._get_barbican_context(pecan.request) if ctxt: # in authenticated pipleline case, always use auth token user data['creator_id'] = ctxt.user self.quota_enforcer.enforce(project) LOG.debug('Start on_post...%s', data) new_container = models.Container(data) new_container.project_id = project.id # TODO(hgedikli): performance optimizations for secret_ref in new_container.container_secrets: secret = self.secret_repo.get( entity_id=secret_ref.secret_id, external_project_id=external_project_id, suppress_exception=True) if not secret: # This only partially localizes the error message and # doesn't localize secret_ref.name. pecan.abort( 404, u._("Secret provided for '{secret_name}' doesn't " "exist.").format(secret_name=secret_ref.name) ) self.container_repo.create_from(new_container) url = hrefs.convert_container_to_href(new_container.id) pecan.response.status = 201 pecan.response.headers['Location'] = url LOG.info('Created a container for project: %s', external_project_id) return {'container_ref': url} class ContainersSecretsController(controllers.ACLMixin): """Handles ContainerSecret creation and deletion requests.""" def __init__(self, container): LOG.debug('=== Creating ContainerSecretsController ===') self.container = container self.container_secret_repo = repo.get_container_secret_repository() self.secret_repo = repo.get_secret_repository() self.validator = validators.ContainerSecretValidator() @pecan.expose(generic=True) def index(self, **kwargs): pecan.abort(405) # HTTP 405 Method Not Allowed as default @index.when(method='POST', template='json') @controllers.handle_exceptions(u._('Container Secret creation')) @controllers.enforce_rbac('container_secret:post') @controllers.enforce_content_types(['application/json']) def on_post(self, external_project_id, **kwargs): """Handles adding an existing secret to an existing container.""" if self.container.type != 'generic': pecan.abort(400, u._("Only 'generic' containers can be modified.")) data = api.load_body(pecan.request, validator=self.validator) name = data.get('name') secret_ref = data.get('secret_ref') secret_id = hrefs.get_secret_id_from_ref(secret_ref) secret = self.secret_repo.get( entity_id=secret_id, external_project_id=external_project_id, suppress_exception=True) if not secret: pecan.abort(404, u._("Secret provided doesn't exist.")) found_container_secrets = list( filter(lambda cs: cs.secret_id == secret_id and cs.name == name, self.container.container_secrets) ) if found_container_secrets: pecan.abort(409, u._('Conflict. A secret with that name and ID is ' 'already stored in this container. 
The same ' 'secret can exist in a container as long as ' 'the name is unique.')) LOG.debug('Start container secret on_post...%s', secret_ref) new_container_secret = models.ContainerSecret() new_container_secret.container_id = self.container.id new_container_secret.name = name new_container_secret.secret_id = secret_id self.container_secret_repo.save(new_container_secret) url = hrefs.convert_container_to_href(self.container.id) LOG.debug('URI to container is %s', url) pecan.response.status = 201 pecan.response.headers['Location'] = url LOG.info('Created a container secret for project: %s', external_project_id) return {'container_ref': url} @index.when(method='DELETE') @utils.allow_all_content_types @controllers.handle_exceptions(u._('Container Secret deletion')) @controllers.enforce_rbac('container_secret:delete') def on_delete(self, external_project_id, **kwargs): """Handles removing a secret reference from an existing container.""" data = api.load_body(pecan.request, validator=self.validator) name = data.get('name') secret_ref = data.get('secret_ref') secret_id = hrefs.get_secret_id_from_ref(secret_ref) secret = self.secret_repo.get( entity_id=secret_id, external_project_id=external_project_id, suppress_exception=True) if not secret: pecan.abort(404, u._("Secret '{secret_name}' with reference " "'{secret_ref}' doesn't exist.").format( secret_name=name, secret_ref=secret_ref)) found_container_secrets = list( filter(lambda cs: cs.secret_id == secret_id and cs.name == name, self.container.container_secrets) ) if not found_container_secrets: pecan.abort(404, u._('Secret provided is not in the container')) for container_secret in found_container_secrets: self.container_secret_repo.delete_entity_by_id( container_secret.id, external_project_id) pecan.response.status = 204 LOG.info('Deleted container secret for project: %s', external_project_id) barbican-6.0.0/barbican/api/controllers/__init__.py0000666000175100017510000001722213245511001022271 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections from oslo_policy import policy import pecan from webob import exc from barbican import api from barbican.common import utils from barbican import i18n as u LOG = utils.getLogger(__name__) def is_json_request_accept(req): """Test if http request 'accept' header configured for JSON response. :param req: HTTP request :return: True if need to return JSON response. """ return (not req.accept or req.accept.header_value == 'application/json' or req.accept.header_value == '*/*') def _get_barbican_context(req): if 'barbican.context' in req.environ: return req.environ['barbican.context'] else: return None def _do_enforce_rbac(inst, req, action_name, ctx, **kwargs): """Enforce RBAC based on 'request' information.""" if action_name and ctx: # Prepare credentials information. 
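        # Illustrative sketch only (hypothetical values): for a token
        # carrying the 'creator' role, the credentials dict assembled
        # below would look roughly like
        #
        #     {
        #         'roles': ['creator'],
        #         'user': '3b9f8c2a...',     # hypothetical user id
        #         'project': 'a1b2c3d4...',  # hypothetical project id
        #     }
        #
        # and is passed as the credentials argument of
        # ctx.policy_enforcer.enforce() further down.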
credentials = { 'roles': ctx.roles, 'user': ctx.user, 'project': ctx.project } # Enforce special case: secret GET decryption if 'secret:get' == action_name and not is_json_request_accept(req): action_name = 'secret:decrypt' # Override to perform special rules target_name, target_data = inst.get_acl_tuple(req, **kwargs) policy_dict = {} if target_name and target_data: policy_dict['target'] = {target_name: target_data} policy_dict.update(kwargs) # Enforce access controls. if ctx.policy_enforcer: ctx.policy_enforcer.enforce(action_name, flatten(policy_dict), credentials, do_raise=True) def enforce_rbac(action_name='default'): """Decorator handling RBAC enforcement on behalf of REST verb methods.""" def rbac_decorator(fn): def enforcer(inst, *args, **kwargs): # Enforce RBAC rules. # context placed here by context.py # middleware ctx = _get_barbican_context(pecan.request) external_project_id = None if ctx: external_project_id = ctx.project _do_enforce_rbac(inst, pecan.request, action_name, ctx, **kwargs) # insert external_project_id as the first arg to the guarded method args = list(args) args.insert(0, external_project_id) # Execute guarded method now. return fn(inst, *args, **kwargs) return enforcer return rbac_decorator def handle_exceptions(operation_name=u._('System')): """Decorator handling generic exceptions from REST methods.""" def exceptions_decorator(fn): def handler(inst, *args, **kwargs): try: return fn(inst, *args, **kwargs) except exc.HTTPError: LOG.exception('Webob error seen') raise # Already converted to Webob exception, just reraise # In case PolicyNotAuthorized, we do not want to expose payload by # logging exception, so just LOG.error except policy.PolicyNotAuthorized as pna: status, message = api.generate_safe_exception_message( operation_name, pna) LOG.error(message) pecan.abort(status, message) except Exception as e: # In case intervening modules have disabled logging. LOG.logger.disabled = False status, message = api.generate_safe_exception_message( operation_name, e) LOG.exception(message) pecan.abort(status, message) return handler return exceptions_decorator def _do_enforce_content_types(pecan_req, valid_content_types): """Content type enforcement Check to see that content type in the request is one of the valid types passed in by our caller. """ if pecan_req.content_type not in valid_content_types: m = u._( "Unexpected content type. Expected content types " "are: {expected}" ).format( expected=valid_content_types ) pecan.abort(415, m) def enforce_content_types(valid_content_types=[]): """Decorator handling content type enforcement on behalf of REST verbs.""" def content_types_decorator(fn): def content_types_enforcer(inst, *args, **kwargs): _do_enforce_content_types(pecan.request, valid_content_types) return fn(inst, *args, **kwargs) return content_types_enforcer return content_types_decorator def flatten(d, parent_key=''): """Flatten a nested dictionary Converts a dictionary with nested values to a single level flat dictionary, with dotted notation for each key. """ items = [] for k, v in d.items(): new_key = parent_key + '.' + k if parent_key else k if isinstance(v, collections.MutableMapping): items.extend(flatten(v, new_key).items()) else: items.append((new_key, v)) return dict(items) class ACLMixin(object): def get_acl_tuple(self, req, **kwargs): return None, None def get_acl_dict_for_user(self, req, acl_list): """Get acl operation found for token user in acl list. Token user is looked into users list present for each acl operation. 
If there is a match, it means that ACL data is applicable for policy logic. Policy logic requires the data as a dictionary, so this method captures the acl's operation and project_access data in that format. For the operation value, the matching ACL record's operation is stored in the dict as both key and value. The project_access flag is intended to make a secret/container private for a given operation. It doesn't require a user match, so it's captured in dict format where the key is prefixed with the related operation and the flag is used as its value. Then, for acl related policy logic, this acl dict data is combined with the target entity's (secret or container) creator_id and project id. The whole dict serves as the target in policy enforcement logic, i.e. the right hand side of the policy rule. The following is a sample outcome where the secret or container has an ACL defined and the token user is among the ACL users defined for the 'read' and 'list' operations. {'read': 'read', 'list': 'list', 'read_project_access': True, 'list_project_access': True } It's possible that ACLs are defined without any user; they just have the project_access flag set. This means only the creator can read or list ACL entities. In that case, the dictionary output can be as follows. {'read_project_access': False, 'list_project_access': False } """ ctxt = _get_barbican_context(req) if not ctxt: return {} acl_dict = {acl.operation: acl.operation for acl in acl_list if ctxt.user in acl.to_dict_fields().get('users', [])} co_dict = {'%s_project_access' % acl.operation: acl.project_access for acl in acl_list if acl.project_access is not None} acl_dict.update(co_dict) return acl_dict barbican-6.0.0/barbican/api/controllers/versions.py0000666000175100017510000001317713245511001022407 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
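# Illustrative sketch only (not part of the original file): a GET on the
# root resource served by VersionsController below answers with HTTP 300
# (Multiple Choices) and a body shaped roughly like
#
#     {"versions": {"values": [{
#         "id": "v1",
#         "status": "stable",
#         "updated": "2015-04-28T00:00:00Z",
#         "links": [{"rel": "self",
#                    "href": "http://localhost:9311/v1/"}],
#         "media-types": [{"base": "application/json",
#                          "type": "application/vnd.openstack"
#                                  ".key-manager-v1+json"}]}]}}
#
# where the href value is a hypothetical deployment URL.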
import pecan from six.moves.urllib import parse from barbican.api import controllers from barbican.api.controllers import containers from barbican.api.controllers import orders from barbican.api.controllers import quotas from barbican.api.controllers import secrets from barbican.api.controllers import secretstores from barbican.api.controllers import transportkeys from barbican.common import utils from barbican import i18n as u from barbican import version LOG = utils.getLogger(__name__) MIME_TYPE_JSON = 'application/json' MIME_TYPE_JSON_HOME = 'application/json-home' MEDIA_TYPE_JSON = 'application/vnd.openstack.key-manager-%s+json' def _version_not_found(): """Throw exception indicating version not found.""" pecan.abort(404, u._("The version you requested wasn't found")) def _get_versioned_url(version): if version[-1] != '/': version += '/' # If host_href is not set in barbican conf, then derive it from request url host_part = utils.get_base_url_from_request() if host_part[-1] != '/': host_part += '/' return parse.urljoin(host_part, version) class BaseVersionController(object): """Base class for the version-specific controllers""" @classmethod def get_version_info(cls, request): return { 'id': cls.version_id, 'status': 'stable', 'updated': cls.last_updated, 'links': [ { 'rel': 'self', 'href': _get_versioned_url(cls.version_string), }, { 'rel': 'describedby', 'type': 'text/html', 'href': 'https://docs.openstack.org/' } ], 'media-types': [ { 'base': MIME_TYPE_JSON, 'type': MEDIA_TYPE_JSON % cls.version_string } ] } class V1Controller(BaseVersionController): """Root controller for the v1 API""" version_string = 'v1' # NOTE(jaosorior): We might start using decimals in the future, meanwhile # this is the same as the version string. version_id = 'v1' last_updated = '2015-04-28T00:00:00Z' def __init__(self): LOG.debug('=== Creating V1Controller ===') self.secrets = secrets.SecretsController() self.orders = orders.OrdersController() self.containers = containers.ContainersController() self.transport_keys = transportkeys.TransportKeysController() self.quotas = quotas.QuotasController() setattr(self, 'project-quotas', quotas.ProjectsQuotasController()) setattr(self, 'secret-stores', secretstores.SecretStoresController()) @pecan.expose(generic=True) def index(self): pecan.abort(405) # HTTP 405 Method Not Allowed as default @index.when(method='GET', template='json') @utils.allow_certain_content_types(MIME_TYPE_JSON, MIME_TYPE_JSON_HOME) @controllers.handle_exceptions(u._('Version retrieval')) def on_get(self): pecan.core.override_template('json') return {'version': self.get_version_info(pecan.request)} AVAILABLE_VERSIONS = { V1Controller.version_string: V1Controller, } DEFAULT_VERSION = V1Controller.version_string class VersionsController(object): def __init__(self): LOG.debug('=== Creating VersionsController ===') @pecan.expose(generic=True) def index(self, **kwargs): pecan.abort(405) # HTTP 405 Method Not Allowed as default @index.when(method='GET', template='json') @utils.allow_certain_content_types(MIME_TYPE_JSON, MIME_TYPE_JSON_HOME) def on_get(self, **kwargs): """The list of versions is dependent on the context.""" self._redirect_to_default_json_home_if_needed(pecan.request) if 'build' in kwargs: return {'build': version.__version__} versions_info = [version_class.get_version_info(pecan.request) for version_class in AVAILABLE_VERSIONS.values()] version_output = { 'versions': { 'values': versions_info } } # Since we are returning all the versions available, the proper status # code is Multiple 
Choices (300) pecan.response.status = 300 return version_output def _redirect_to_default_json_home_if_needed(self, request): if self._mime_best_match(request.accept) == MIME_TYPE_JSON_HOME: url = _get_versioned_url(DEFAULT_VERSION) LOG.debug("Redirecting Request to " + url) # NOTE(jaosorior): This issues an "external" redirect because of # two reasons: # * This module doesn't require authorization, and accessing # specific version info needs that. # * The resource is a separate app_factory and won't be found # internally pecan.redirect(url, request=request) def _mime_best_match(self, accept): if not accept: return MIME_TYPE_JSON SUPPORTED_TYPES = [MIME_TYPE_JSON, MIME_TYPE_JSON_HOME] return accept.best_match(SUPPORTED_TYPES) barbican-6.0.0/barbican/api/controllers/secretstores.py0000666000175100017510000001776613245511001023274 0ustar zuulzuul00000000000000# (c) Copyright 2015-2016 Hewlett Packard Enterprise Development LP # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import pecan from barbican.api import controllers from barbican.common import hrefs from barbican.common import resources as res from barbican.common import utils from barbican import i18n as u from barbican.model import repositories as repo from barbican.plugin.util import multiple_backends LOG = utils.getLogger(__name__) def _secret_store_not_found(): """Throw exception indicating secret store not found.""" pecan.abort(404, u._('Not Found. Secret store not found.')) def _preferred_secret_store_not_found(): """Throw exception indicating preferred secret store not found.""" pecan.abort(404, u._('Not Found. No preferred secret store defined for ' 'this project.')) def _multiple_backends_not_enabled(): """Throw exception indicating multiple backends support is not enabled.""" pecan.abort(404, u._('Not Found. 
Multiple backends support is not enabled ' 'in service configuration.')) def convert_secret_store_to_response_format(secret_store): data = secret_store.to_dict_fields() data['secret_store_plugin'] = data.pop('store_plugin') data['secret_store_ref'] = hrefs.convert_secret_stores_to_href( data['secret_store_id']) # no need to pass store id as secret_store_ref is returned data.pop('secret_store_id', None) return data class PreferredSecretStoreController(controllers.ACLMixin): """Handles preferred secret store set/removal requests.""" def __init__(self, secret_store): LOG.debug('=== Creating PreferredSecretStoreController ===') self.secret_store = secret_store self.proj_store_repo = repo.get_project_secret_store_repository() @pecan.expose(generic=True) def index(self, **kwargs): pecan.abort(405) # HTTP 405 Method Not Allowed as default @index.when(method='DELETE', template='json') @controllers.handle_exceptions(u._('Removing preferred secret store')) @controllers.enforce_rbac('secretstore_preferred:delete') def on_delete(self, external_project_id, **kw): LOG.debug('Start: Remove project preferred secret-store for store' ' id %s', self.secret_store.id) project = res.get_or_create_project(external_project_id) project_store = self.proj_store_repo.get_secret_store_for_project( project.id, None, suppress_exception=True) if project_store is None: _preferred_secret_store_not_found() self.proj_store_repo.delete_entity_by_id( entity_id=project_store.id, external_project_id=external_project_id) pecan.response.status = 204 @index.when(method='POST', template='json') @controllers.handle_exceptions(u._('Setting preferred secret store')) @controllers.enforce_rbac('secretstore_preferred:post') def on_post(self, external_project_id, **kwargs): LOG.debug('Start: Set project preferred secret-store for store ' 'id %s', self.secret_store.id) project = res.get_or_create_project(external_project_id) self.proj_store_repo.create_or_update_for_project(project.id, self.secret_store.id) pecan.response.status = 204 class SecretStoreController(controllers.ACLMixin): """Handles secret store retrieval requests.""" def __init__(self, secret_store): LOG.debug('=== Creating SecretStoreController ===') self.secret_store = secret_store @pecan.expose() def _lookup(self, action, *remainder): if (action == 'preferred'): return PreferredSecretStoreController(self.secret_store), remainder else: pecan.abort(405) @pecan.expose(generic=True) def index(self, **kwargs): pecan.abort(405) # HTTP 405 Method Not Allowed as default @index.when(method='GET', template='json') @controllers.handle_exceptions(u._('Secret store retrieval')) @controllers.enforce_rbac('secretstore:get') def on_get(self, external_project_id): LOG.debug("== Getting secret store for %s", self.secret_store.id) return convert_secret_store_to_response_format(self.secret_store) class SecretStoresController(controllers.ACLMixin): """Handles secret-stores list requests.""" def __init__(self): LOG.debug('Creating SecretStoresController') self.secret_stores_repo = repo.get_secret_stores_repository() self.proj_store_repo = repo.get_project_secret_store_repository() def __getattr__(self, name): route_table = { 'global-default': self.get_global_default, 'preferred': self.get_preferred, } if name in route_table: return route_table[name] raise AttributeError @pecan.expose() def _lookup(self, secret_store_id, *remainder): if not utils.is_multiple_backends_enabled(): _multiple_backends_not_enabled() secret_store = self.secret_stores_repo.get(entity_id=secret_store_id, 
suppress_exception=True) if not secret_store: _secret_store_not_found() return SecretStoreController(secret_store), remainder @pecan.expose(generic=True) def index(self, **kwargs): pecan.abort(405) # HTTP 405 Method Not Allowed as default @index.when(method='GET', template='json') @controllers.handle_exceptions(u._('List available secret stores')) @controllers.enforce_rbac('secretstores:get') def on_get(self, external_project_id, **kw): LOG.debug('Start SecretStoresController on_get: listing secret ' 'stores') if not utils.is_multiple_backends_enabled(): _multiple_backends_not_enabled() res.get_or_create_project(external_project_id) secret_stores = self.secret_stores_repo.get_all() resp_list = [] for store in secret_stores: item = convert_secret_store_to_response_format(store) resp_list.append(item) resp = {'secret_stores': resp_list} return resp @pecan.expose(generic=True, template='json') @controllers.handle_exceptions(u._('Retrieve global default secret store')) @controllers.enforce_rbac('secretstores:get_global_default') def get_global_default(self, external_project_id, **kw): LOG.debug('Start secret-stores get global default secret store') if not utils.is_multiple_backends_enabled(): _multiple_backends_not_enabled() res.get_or_create_project(external_project_id) store = multiple_backends.get_global_default_secret_store() return convert_secret_store_to_response_format(store) @pecan.expose(generic=True, template='json') @controllers.handle_exceptions(u._('Retrieve project preferred store')) @controllers.enforce_rbac('secretstores:get_preferred') def get_preferred(self, external_project_id, **kw): LOG.debug('Start secret-stores get preferred secret store') if not utils.is_multiple_backends_enabled(): _multiple_backends_not_enabled() project = res.get_or_create_project(external_project_id) project_store = self.proj_store_repo.get_secret_store_for_project( project.id, None, suppress_exception=True) if project_store is None: _preferred_secret_store_not_found() return convert_secret_store_to_response_format( project_store.secret_store) barbican-6.0.0/barbican/api/controllers/orders.py0000666000175100017510000001704713245511001022035 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import pecan from barbican import api from barbican.api import controllers from barbican.common import hrefs from barbican.common import quota from barbican.common import resources as res from barbican.common import utils from barbican.common import validators from barbican import i18n as u from barbican.model import models from barbican.model import repositories as repo from barbican.queue import client as async_client LOG = utils.getLogger(__name__) _DEPRECATION_MSG = '%s has been deprecated in the Newton release. ' \ 'It will be removed in the Pike release.' def _order_not_found(): """Throw exception indicating order not found.""" pecan.abort(404, u._('Not Found. 
Sorry but your order is in ' 'another castle.')) def _secret_not_in_order(): """Throw exception that secret info is not available in the order.""" pecan.abort(400, u._("Secret metadata expected but not received.")) def _order_update_not_supported(): """Throw exception that PUT operation is not supported for orders.""" pecan.abort(405, u._("Order update is not supported.")) def _order_cannot_be_updated_if_not_pending(order_status): """Throw exception that order cannot be updated if not PENDING.""" pecan.abort(400, u._("Only PENDING orders can be updated. Order is in the " "{0} state.").format(order_status)) def order_cannot_modify_order_type(): """Throw exception that order type cannot be modified.""" pecan.abort(400, u._("Cannot modify order type.")) class OrderController(controllers.ACLMixin): """Handles Order retrieval and deletion requests.""" def __init__(self, order, queue_resource=None): self.order = order self.order_repo = repo.get_order_repository() self.queue = queue_resource or async_client.TaskClient() self.type_order_validator = validators.TypeOrderValidator() @pecan.expose(generic=True) def index(self, **kwargs): pecan.abort(405) # HTTP 405 Method Not Allowed as default @index.when(method='GET', template='json') @controllers.handle_exceptions(u._('Order retrieval')) @controllers.enforce_rbac('order:get') def on_get(self, external_project_id): return hrefs.convert_to_hrefs(self.order.to_dict_fields()) @index.when(method='DELETE') @utils.allow_all_content_types @controllers.handle_exceptions(u._('Order deletion')) @controllers.enforce_rbac('order:delete') def on_delete(self, external_project_id, **kwargs): self.order_repo.delete_entity_by_id( entity_id=self.order.id, external_project_id=external_project_id) class OrdersController(controllers.ACLMixin): """Handles Order requests for Secret creation.""" def __init__(self, queue_resource=None): LOG.debug('Creating OrdersController') self.order_repo = repo.get_order_repository() self.queue = queue_resource or async_client.TaskClient() self.type_order_validator = validators.TypeOrderValidator() self.quota_enforcer = quota.QuotaEnforcer('orders', self.order_repo) @pecan.expose() def _lookup(self, order_id, *remainder): # NOTE(jaosorior): It's worth noting that even though this section # actually does a lookup in the database regardless of the RBAC policy # check, the execution only gets here if authentication of the user was # previously successful.
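        # For example (hypothetical id): a GET on /v1/orders/6a3c9f...
        # issued with a token scoped to a project other than the order's
        # owner fails the project-scoped repository lookup below and
        # surfaces as a plain 404 via _order_not_found(), so the order's
        # existence is not leaked across projects.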
ctx = controllers._get_barbican_context(pecan.request) order = self.order_repo.get(entity_id=order_id, external_project_id=ctx.project, suppress_exception=True) if not order: _order_not_found() return OrderController(order, self.order_repo), remainder @pecan.expose(generic=True) def index(self, **kwargs): pecan.abort(405) # HTTP 405 Method Not Allowed as default @index.when(method='GET', template='json') @controllers.handle_exceptions(u._('Order(s) retrieval')) @controllers.enforce_rbac('orders:get') def on_get(self, external_project_id, **kw): LOG.debug('Start orders on_get ' 'for project-ID %s:', external_project_id) result = self.order_repo.get_by_create_date( external_project_id, offset_arg=kw.get('offset', 0), limit_arg=kw.get('limit', None), meta_arg=kw.get('meta', None), suppress_exception=True) orders, offset, limit, total = result if not orders: orders_resp_overall = {'orders': [], 'total': total} else: orders_resp = [ hrefs.convert_to_hrefs(o.to_dict_fields()) for o in orders ] orders_resp_overall = hrefs.add_nav_hrefs('orders', offset, limit, total, {'orders': orders_resp}) orders_resp_overall.update({'total': total}) return orders_resp_overall @index.when(method='PUT', template='json') @controllers.handle_exceptions(u._('Order update')) @controllers.enforce_rbac('orders:put') def on_put(self, external_project_id, **kwargs): _order_update_not_supported() @index.when(method='POST', template='json') @controllers.handle_exceptions(u._('Order creation')) @controllers.enforce_rbac('orders:post') @controllers.enforce_content_types(['application/json']) def on_post(self, external_project_id, **kwargs): project = res.get_or_create_project(external_project_id) body = api.load_body(pecan.request, validator=self.type_order_validator) order_type = body.get('type') order_meta = body.get('meta') request_type = order_meta.get('request_type') LOG.debug('Processing order type %(order_type)s,' ' request type %(request_type)s' % {'order_type': order_type, 'request_type': request_type}) self.quota_enforcer.enforce(project) new_order = models.Order() new_order.meta = body.get('meta') new_order.type = order_type new_order.project_id = project.id request_id = None ctxt = controllers._get_barbican_context(pecan.request) if ctxt: new_order.creator_id = ctxt.user request_id = ctxt.request_id self.order_repo.create_from(new_order) # Grab our id before commit due to obj expiration from sqlalchemy order_id = new_order.id # Force commit to avoid async issues with the workers repo.commit() self.queue.process_type_order(order_id=order_id, project_id=external_project_id, request_id=request_id) url = hrefs.convert_order_to_href(order_id) pecan.response.status = 202 pecan.response.headers['Location'] = url return {'order_ref': url} barbican-6.0.0/barbican/api/controllers/secretmeta.py0000666000175100017510000001660613245511001022673 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
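# Illustrative sketch only (hypothetical request/response values; the
# exact body shape is defined by the validators referenced below): the
# controllers in this module expose user metadata as a sub-resource of a
# secret, along the lines of
#
#     PUT /v1/secrets/{secret_id}/metadata
#     {"metadata": {"description": "payroll key", "geo": "us-east"}}
#
# which replaces the metadata wholesale and answers 201 with a
# {"metadata_ref": ...} body, while GET returns {"metadata": {...}} and
# the per-key controller manages a single metadatum.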
import collections import pecan from barbican import api from barbican.api import controllers from barbican.common import hrefs from barbican.common import utils from barbican.common import validators from barbican import i18n as u from barbican.model import repositories as repo LOG = utils.getLogger(__name__) def _secret_metadata_not_found(): """Throw exception indicating secret metadata not found.""" pecan.abort(404, u._('Not Found. Sorry but your secret metadata is in ' 'another castle.')) class SecretMetadataController(controllers.ACLMixin): """Handles SecretMetadata requests by a given secret id.""" def __init__(self, secret): LOG.debug('=== Creating SecretMetadataController ===') self.secret = secret self.secret_project_id = self.secret.project.external_id self.secret_repo = repo.get_secret_repository() self.user_meta_repo = repo.get_secret_user_meta_repository() self.metadata_validator = validators.NewSecretMetadataValidator() self.metadatum_validator = validators.NewSecretMetadatumValidator() @pecan.expose(generic=True) def index(self, **kwargs): pecan.abort(405) # HTTP 405 Method Not Allowed as default @index.when(method='GET', template='json') @utils.allow_all_content_types @controllers.handle_exceptions(u._('Secret metadata retrieval')) @controllers.enforce_rbac('secret_meta:get') def on_get(self, external_project_id, **kwargs): """Handles retrieval of existing secret metadata requests.""" LOG.debug('Start secret metadata on_get ' 'for secret-ID %s:', self.secret.id) resp = self.user_meta_repo.get_metadata_for_secret(self.secret.id) pecan.response.status = 200 return {"metadata": resp} @index.when(method='PUT', template='json') @controllers.handle_exceptions(u._('Secret metadata creation')) @controllers.enforce_rbac('secret_meta:put') @controllers.enforce_content_types(['application/json']) def on_put(self, external_project_id, **kwargs): """Handles creation/update of secret metadata.""" data = api.load_body(pecan.request, validator=self.metadata_validator) LOG.debug('Start secret metadata on_put...%s', data) self.user_meta_repo.create_replace_user_metadata(self.secret.id, data) url = hrefs.convert_user_meta_to_href(self.secret.id) LOG.debug('URI to secret metadata is %s', url) pecan.response.status = 201 return {'metadata_ref': url} @index.when(method='POST', template='json') @controllers.handle_exceptions(u._('Secret metadatum creation')) @controllers.enforce_rbac('secret_meta:post') @controllers.enforce_content_types(['application/json']) def on_post(self, external_project_id, **kwargs): """Handles creation of secret metadatum.""" data = api.load_body(pecan.request, validator=self.metadatum_validator) key = data.get('key') value = data.get('value') metadata = self.user_meta_repo.get_metadata_for_secret(self.secret.id) if key in metadata: pecan.abort(409, u._('Conflict. 
Key in request is already in the ' 'secret metadata')) LOG.debug('Start secret metadatum on_post...%s', metadata) self.user_meta_repo.create_replace_user_metadatum(self.secret.id, key, value) url = hrefs.convert_user_meta_to_href(self.secret.id) LOG.debug('URI to secret metadata is %s', url) pecan.response.status = 201 return {'metadata_ref': url + "/%s {key: %s, value:%s}" % (key, key, value)} class SecretMetadatumController(controllers.ACLMixin): def __init__(self, secret): LOG.debug('=== Creating SecretMetadatumController ===') self.user_meta_repo = repo.get_secret_user_meta_repository() self.secret = secret self.metadatum_validator = validators.NewSecretMetadatumValidator() @pecan.expose(generic=True) def index(self, **kwargs): pecan.abort(405) # HTTP 405 Method Not Allowed as default @index.when(method='GET', template='json') @controllers.handle_exceptions(u._('Secret metadatum retrieval')) @controllers.enforce_rbac('secret_meta:get') def on_get(self, external_project_id, remainder, **kwargs): """Handles retrieval of existing secret metadatum.""" LOG.debug('Start secret metadatum on_get ' 'for secret-ID %s:', self.secret.id) metadata = self.user_meta_repo.get_metadata_for_secret(self.secret.id) if remainder in metadata: pecan.response.status = 200 pair = {'key': remainder, 'value': metadata[remainder]} return collections.OrderedDict(sorted(pair.items())) else: _secret_metadata_not_found() @index.when(method='PUT', template='json') @utils.allow_all_content_types @controllers.handle_exceptions(u._('Secret metadatum update')) @controllers.enforce_rbac('secret_meta:put') @controllers.enforce_content_types(['application/json']) def on_put(self, external_project_id, remainder, **kwargs): """Handles update of existing secret metadatum.""" metadata = self.user_meta_repo.get_metadata_for_secret(self.secret.id) data = api.load_body(pecan.request, validator=self.metadatum_validator) key = data.get('key') value = data.get('value') if remainder not in metadata: _secret_metadata_not_found() elif remainder != key: msg = 'Key in request data does not match key in the ' 'request url.' pecan.abort(409, msg) else: LOG.debug('Start secret metadatum on_put...%s', metadata) self.user_meta_repo.create_replace_user_metadatum(self.secret.id, key, value) pecan.response.status = 200 pair = {'key': key, 'value': value} return collections.OrderedDict(sorted(pair.items())) @index.when(method='DELETE', template='json') @controllers.handle_exceptions(u._('Secret metadatum removal')) @controllers.enforce_rbac('secret_meta:delete') def on_delete(self, external_project_id, remainder, **kwargs): """Handles removal of existing secret metadatum.""" self.user_meta_repo.delete_metadatum(self.secret.id, remainder) msg = 'Deleted secret metadatum: %s for secret %s' % (remainder, self.secret.id) pecan.response.status = 204 LOG.info(msg) barbican-6.0.0/barbican/api/controllers/secrets.py0000666000175100017510000004277613245511001022216 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
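# Illustrative sketch only (not part of the original file): retrieval in
# SecretController below is driven by the Accept header, roughly
#
#     GET /v1/secrets/{id}          Accept: application/json
#         -> metadata only (hrefs plus content types)
#     GET /v1/secrets/{id}/payload  Accept: application/octet-stream
#         -> decrypted payload
#
# Requesting the base resource with a non-JSON Accept header still
# yields the payload, but is logged as a deprecated API call.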
from oslo_utils import timeutils import pecan from six.moves.urllib import parse from barbican import api from barbican.api import controllers from barbican.api.controllers import acls from barbican.api.controllers import secretmeta from barbican.common import exception from barbican.common import hrefs from barbican.common import quota from barbican.common import resources as res from barbican.common import utils from barbican.common import validators from barbican import i18n as u from barbican.model import models from barbican.model import repositories as repo from barbican.plugin import resources as plugin from barbican.plugin import util as putil LOG = utils.getLogger(__name__) def _secret_not_found(): """Throw exception indicating secret not found.""" pecan.abort(404, u._('Not Found. Sorry but your secret is in ' 'another castle.')) def _invalid_secret_id(): """Throw exception indicating secret id is invalid.""" pecan.abort(404, u._('Not Found. Provided secret id is invalid.')) def _secret_payload_not_found(): """Throw exception indicating secret's payload is not found.""" pecan.abort(404, u._('Not Found. Sorry but your secret has no payload.')) def _secret_already_has_data(): """Throw exception that the secret already has data.""" pecan.abort(409, u._("Secret already has data, cannot modify it.")) def _bad_query_string_parameters(): pecan.abort(400, u._("URI provided invalid query string parameters.")) def _request_has_twsk_but_no_transport_key_id(): """Throw exception for bad wrapping parameters. Throw exception if transport key wrapped session key has been provided, but the transport key id has not. """ pecan.abort(400, u._('Transport key wrapped session key has been ' 'provided to wrap secrets for retrieval, but the ' 'transport key id has not been provided.')) class SecretController(controllers.ACLMixin): """Handles Secret retrieval and deletion requests.""" def __init__(self, secret): LOG.debug('=== Creating SecretController ===') self.secret = secret self.transport_key_repo = repo.get_transport_key_repository() def get_acl_tuple(self, req, **kwargs): d = self.get_acl_dict_for_user(req, self.secret.secret_acls) d['project_id'] = self.secret.project.external_id d['creator_id'] = self.secret.creator_id return 'secret', d @pecan.expose() def _lookup(self, sub_resource, *remainder): if sub_resource == 'acl': return acls.SecretACLsController(self.secret), remainder elif sub_resource == 'metadata': if len(remainder) == 0 or remainder == ('',): return secretmeta.SecretMetadataController(self.secret), \ remainder else: request_method = pecan.request.method allowed_methods = ['GET', 'PUT', 'DELETE'] if request_method in allowed_methods: return secretmeta.SecretMetadatumController(self.secret), \ remainder else: # methods cannot be handled at controller level pecan.abort(405) else: # only 'acl' and 'metadata' as sub-resource is supported pecan.abort(405) @pecan.expose(generic=True) def index(self, **kwargs): pecan.abort(405) # HTTP 405 Method Not Allowed as default @index.when(method='GET') @utils.allow_all_content_types @controllers.handle_exceptions(u._('Secret retrieval')) @controllers.enforce_rbac('secret:get') def on_get(self, external_project_id, **kwargs): if controllers.is_json_request_accept(pecan.request): resp = self._on_get_secret_metadata(self.secret, **kwargs) LOG.info('Retrieved secret metadata for project: %s', external_project_id) return resp else: LOG.warning('Decrypted secret %s requested using deprecated ' 'API call.', self.secret.id) return 
self._on_get_secret_payload(self.secret, external_project_id, **kwargs) def _on_get_secret_metadata(self, secret, **kwargs): """GET Metadata-only for a secret.""" pecan.override_template('json', 'application/json') secret_fields = putil.mime_types.augment_fields_with_content_types( secret) transport_key_id = self._get_transport_key_id_if_needed( kwargs.get('transport_key_needed'), secret) if transport_key_id: secret_fields['transport_key_id'] = transport_key_id return hrefs.convert_to_hrefs(secret_fields) def _get_transport_key_id_if_needed(self, transport_key_needed, secret): if transport_key_needed and transport_key_needed.lower() == 'true': return plugin.get_transport_key_id_for_retrieval(secret) return None def _on_get_secret_payload(self, secret, external_project_id, **kwargs): """GET actual payload containing the secret.""" # With ACL support, the user token project does not have to be same as # project associated with secret. The lookup project_id needs to be # derived from the secret's data considering authorization is already # done. external_project_id = secret.project.external_id project = res.get_or_create_project(external_project_id) # default to application/octet-stream if there is no Accept header accept_header = getattr(pecan.request.accept, 'header_value', 'application/octet-stream') pecan.override_template('', accept_header) # check if payload exists before proceeding if not secret.encrypted_data and not secret.secret_store_metadata: _secret_payload_not_found() twsk = kwargs.get('trans_wrapped_session_key', None) transport_key = None if twsk: transport_key = self._get_transport_key( kwargs.get('transport_key_id', None)) return plugin.get_secret(accept_header, secret, project, twsk, transport_key) def _get_transport_key(self, transport_key_id): if transport_key_id is None: _request_has_twsk_but_no_transport_key_id() transport_key_model = self.transport_key_repo.get( entity_id=transport_key_id, suppress_exception=True) return transport_key_model.transport_key @pecan.expose() @utils.allow_all_content_types @controllers.handle_exceptions(u._('Secret payload retrieval')) @controllers.enforce_rbac('secret:decrypt') def payload(self, external_project_id, **kwargs): if pecan.request.method != 'GET': pecan.abort(405) resp = self._on_get_secret_payload(self.secret, external_project_id, **kwargs) LOG.info('Retrieved secret payload for project: %s', external_project_id) return resp @index.when(method='PUT') @utils.allow_all_content_types @controllers.handle_exceptions(u._('Secret update')) @controllers.enforce_rbac('secret:put') @controllers.enforce_content_types(['application/octet-stream', 'text/plain']) def on_put(self, external_project_id, **kwargs): if (not pecan.request.content_type or pecan.request.content_type == 'application/json'): pecan.abort( 415, u._("Content-Type of '{content_type}' is not supported for " "PUT.").format(content_type=pecan.request.content_type) ) transport_key_id = kwargs.get('transport_key_id') payload = pecan.request.body if not payload: raise exception.NoDataToProcess() if validators.secret_too_big(payload): raise exception.LimitExceeded() if self.secret.encrypted_data or self.secret.secret_store_metadata: _secret_already_has_data() project_model = res.get_or_create_project(external_project_id) content_type = pecan.request.content_type content_encoding = pecan.request.headers.get('Content-Encoding') plugin.store_secret( unencrypted_raw=payload, content_type_raw=content_type, content_encoding=content_encoding, secret_model=self.secret, 
project_model=project_model, transport_key_id=transport_key_id) LOG.info('Updated secret for project: %s', external_project_id) @index.when(method='DELETE') @utils.allow_all_content_types @controllers.handle_exceptions(u._('Secret deletion')) @controllers.enforce_rbac('secret:delete') def on_delete(self, external_project_id, **kwargs): plugin.delete_secret(self.secret, external_project_id) LOG.info('Deleted secret for project: %s', external_project_id) class SecretsController(controllers.ACLMixin): """Handles Secret creation requests.""" def __init__(self): LOG.debug('Creating SecretsController') self.validator = validators.NewSecretValidator() self.secret_repo = repo.get_secret_repository() self.quota_enforcer = quota.QuotaEnforcer('secrets', self.secret_repo) def _is_valid_date_filter(self, date_filter): filters = date_filter.split(',') sorted_filters = dict() try: for filter in filters: if filter.startswith('gt:'): if sorted_filters.get('gt') or sorted_filters.get('gte'): return False sorted_filters['gt'] = timeutils.parse_isotime(filter[3:]) elif filter.startswith('gte:'): if sorted_filters.get('gt') or sorted_filters.get( 'gte') or sorted_filters.get('eq'): return False sorted_filters['gte'] = timeutils.parse_isotime(filter[4:]) elif filter.startswith('lt:'): if sorted_filters.get('lt') or sorted_filters.get('lte'): return False sorted_filters['lt'] = timeutils.parse_isotime(filter[3:]) elif filter.startswith('lte:'): if sorted_filters.get('lt') or sorted_filters.get( 'lte') or sorted_filters.get('eq'): return False sorted_filters['lte'] = timeutils.parse_isotime(filter[4:]) elif sorted_filters.get('eq') or sorted_filters.get( 'gte') or sorted_filters.get('lte'): return False else: sorted_filters['eq'] = timeutils.parse_isotime(filter) except ValueError: return False return True def _is_valid_sorting(self, sorting): allowed_keys = ['algorithm', 'bit_length', 'created', 'expiration', 'mode', 'name', 'secret_type', 'status', 'updated'] allowed_directions = ['asc', 'desc'] sorted_keys = dict() for sort in sorting.split(','): if ':' in sort: try: key, direction = sort.split(':') except ValueError: return False else: key, direction = sort, 'asc' if key not in allowed_keys or direction not in allowed_directions: return False if sorted_keys.get(key): return False else: sorted_keys[key] = direction return True @pecan.expose() def _lookup(self, secret_id, *remainder): # NOTE(jaosorior): It's worth noting that even though this section # actually does a lookup in the database regardless of the RBAC policy # check, the execution only gets here if authentication of the user was # previously successful. 
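        # For example (hypothetical refs): /v1/secrets/not-a-uuid is
        # rejected by the UUID check below with a 404 before any
        # database access, while a well-formed but unknown UUID falls
        # through to _secret_not_found() after the repository query
        # returns nothing.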
if not utils.validate_id_is_uuid(secret_id): _invalid_secret_id() secret = self.secret_repo.get_secret_by_id( entity_id=secret_id, suppress_exception=True) if not secret: _secret_not_found() return SecretController(secret), remainder @pecan.expose(generic=True) def index(self, **kwargs): pecan.abort(405) # HTTP 405 Method Not Allowed as default @index.when(method='GET', template='json') @controllers.handle_exceptions(u._('Secret(s) retrieval')) @controllers.enforce_rbac('secrets:get') def on_get(self, external_project_id, **kw): def secret_fields(field): return putil.mime_types.augment_fields_with_content_types(field) LOG.debug('Start secrets on_get ' 'for project-ID %s:', external_project_id) name = kw.get('name', '') if name: name = parse.unquote_plus(name) bits = kw.get('bits', 0) try: bits = int(bits) except ValueError: # as per Github issue 171, if bits is invalid then # the default should be used. bits = 0 for date_filter in 'created', 'updated', 'expiration': if kw.get(date_filter) and not self._is_valid_date_filter( kw.get(date_filter)): _bad_query_string_parameters() if kw.get('sort') and not self._is_valid_sorting(kw.get('sort')): _bad_query_string_parameters() ctxt = controllers._get_barbican_context(pecan.request) user_id = None if ctxt: user_id = ctxt.user result = self.secret_repo.get_secret_list( external_project_id, offset_arg=kw.get('offset', 0), limit_arg=kw.get('limit'), name=name, alg=kw.get('alg'), mode=kw.get('mode'), bits=bits, secret_type=kw.get('secret_type'), suppress_exception=True, acl_only=kw.get('acl_only'), user_id=user_id, created=kw.get('created'), updated=kw.get('updated'), expiration=kw.get('expiration'), sort=kw.get('sort') ) secrets, offset, limit, total = result if not secrets: secrets_resp_overall = {'secrets': [], 'total': total} else: secrets_resp = [ hrefs.convert_to_hrefs(secret_fields(s)) for s in secrets ] secrets_resp_overall = hrefs.add_nav_hrefs( 'secrets', offset, limit, total, {'secrets': secrets_resp} ) secrets_resp_overall.update({'total': total}) LOG.info('Retrieved secret list for project: %s', external_project_id) return secrets_resp_overall @index.when(method='POST', template='json') @controllers.handle_exceptions(u._('Secret creation')) @controllers.enforce_rbac('secrets:post') @controllers.enforce_content_types(['application/json']) def on_post(self, external_project_id, **kwargs): LOG.debug('Start on_post for project-ID %s:...', external_project_id) data = api.load_body(pecan.request, validator=self.validator) project = res.get_or_create_project(external_project_id) self.quota_enforcer.enforce(project) transport_key_needed = data.get('transport_key_needed', 'false').lower() == 'true' ctxt = controllers._get_barbican_context(pecan.request) if ctxt: # in authenticated pipeline case, always use auth token user data['creator_id'] = ctxt.user secret_model = models.Secret(data) new_secret, transport_key_model = plugin.store_secret( unencrypted_raw=data.get('payload'), content_type_raw=data.get('payload_content_type', 'application/octet-stream'), content_encoding=data.get('payload_content_encoding'), secret_model=secret_model, project_model=project, transport_key_needed=transport_key_needed, transport_key_id=data.get('transport_key_id')) url = hrefs.convert_secret_to_href(new_secret.id) LOG.debug('URI to secret is %s', url) pecan.response.status = 201 pecan.response.headers['Location'] = url LOG.info('Created a secret for project: %s', external_project_id) if transport_key_model is not None: tkey_url =
hrefs.convert_transport_key_to_href( transport_key_model.id) return {'secret_ref': url, 'transport_key_ref': tkey_url} else: return {'secret_ref': url} barbican-6.0.0/barbican/api/controllers/consumers.py0000666000175100017510000002040413245511001022546 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import pecan from barbican import api from barbican.api import controllers from barbican.common import exception from barbican.common import hrefs from barbican.common import quota from barbican.common import resources as res from barbican.common import utils from barbican.common import validators from barbican import i18n as u from barbican.model import models from barbican.model import repositories as repo LOG = utils.getLogger(__name__) def _consumer_not_found(): """Throw exception indicating consumer not found.""" pecan.abort(404, u._('Not Found. Sorry but your consumer is in ' 'another castle.')) def _consumer_ownership_mismatch(): """Throw exception indicating the user does not own this consumer.""" pecan.abort(403, u._('Not Allowed. Sorry, only the creator of a consumer ' 'can delete it.')) def _invalid_consumer_id(): """Throw exception indicating consumer id is invalid.""" pecan.abort(404, u._('Not Found. Provided consumer id is invalid.')) class ContainerConsumerController(controllers.ACLMixin): """Handles Consumer entity retrieval and deletion requests.""" def __init__(self, consumer_id): self.consumer_id = consumer_id self.consumer_repo = repo.get_container_consumer_repository() self.validator = validators.ContainerConsumerValidator() @pecan.expose(generic=True) def index(self): pecan.abort(405) # HTTP 405 Method Not Allowed as default @index.when(method='GET', template='json') @controllers.handle_exceptions(u._('ContainerConsumer retrieval')) @controllers.enforce_rbac('consumer:get') def on_get(self, external_project_id): consumer = self.consumer_repo.get( entity_id=self.consumer_id, suppress_exception=True) if not consumer: _consumer_not_found() dict_fields = consumer.to_dict_fields() LOG.info('Retrieved a consumer for project: %s', external_project_id) return hrefs.convert_to_hrefs( hrefs.convert_to_hrefs(dict_fields) ) class ContainerConsumersController(controllers.ACLMixin): """Handles Consumer creation requests.""" def __init__(self, container_id): self.container_id = container_id self.consumer_repo = repo.get_container_consumer_repository() self.container_repo = repo.get_container_repository() self.project_repo = repo.get_project_repository() self.validator = validators.ContainerConsumerValidator() self.quota_enforcer = quota.QuotaEnforcer('consumers', self.consumer_repo) @pecan.expose() def _lookup(self, consumer_id, *remainder): if not utils.validate_id_is_uuid(consumer_id): _invalid_consumer_id() return ContainerConsumerController(consumer_id), remainder @pecan.expose(generic=True) def index(self, **kwargs): pecan.abort(405) # HTTP 405 Method Not Allowed as default @index.when(method='GET', template='json')
@controllers.handle_exceptions(u._('ContainerConsumer(s) retrieval')) @controllers.enforce_rbac('consumers:get') def on_get(self, external_project_id, **kw): LOG.debug('Start consumers on_get ' 'for container-ID %s:', self.container_id) result = self.consumer_repo.get_by_container_id( self.container_id, offset_arg=kw.get('offset', 0), limit_arg=kw.get('limit'), suppress_exception=True ) consumers, offset, limit, total = result if not consumers: resp_ctrs_overall = {'consumers': [], 'total': total} else: resp_ctrs = [ hrefs.convert_to_hrefs(c.to_dict_fields()) for c in consumers ] consumer_path = "containers/{container_id}/consumers".format( container_id=self.container_id) resp_ctrs_overall = hrefs.add_nav_hrefs( consumer_path, offset, limit, total, {'consumers': resp_ctrs} ) resp_ctrs_overall.update({'total': total}) LOG.info('Retrieved a consumer list for project: %s', external_project_id) return resp_ctrs_overall @index.when(method='POST', template='json') @controllers.handle_exceptions(u._('ContainerConsumer creation')) @controllers.enforce_rbac('consumers:post') @controllers.enforce_content_types(['application/json']) def on_post(self, external_project_id, **kwargs): project = res.get_or_create_project(external_project_id) data = api.load_body(pecan.request, validator=self.validator) LOG.debug('Start on_post...%s', data) container = self._get_container(self.container_id) self.quota_enforcer.enforce(project) new_consumer = models.ContainerConsumerMetadatum(self.container_id, project.id, data) self.consumer_repo.create_or_update_from(new_consumer, container) url = hrefs.convert_consumer_to_href(new_consumer.container_id) pecan.response.headers['Location'] = url LOG.info('Created a consumer for project: %s', external_project_id) return self._return_container_data(self.container_id) @index.when(method='DELETE', template='json') @controllers.handle_exceptions(u._('ContainerConsumer deletion')) @controllers.enforce_rbac('consumers:delete') @controllers.enforce_content_types(['application/json']) def on_delete(self, external_project_id, **kwargs): data = api.load_body(pecan.request, validator=self.validator) LOG.debug('Start on_delete...%s', data) project = self.project_repo.find_by_external_project_id( external_project_id, suppress_exception=True) if not project: _consumer_not_found() consumer = self.consumer_repo.get_by_values( self.container_id, data["name"], data["URL"], suppress_exception=True ) if not consumer: _consumer_not_found() LOG.debug("Found consumer: %s", consumer) container = self._get_container(self.container_id) owner_of_consumer = consumer.project_id == project.id owner_of_container = container.project.external_id \ == external_project_id if not owner_of_consumer and not owner_of_container: _consumer_ownership_mismatch() try: self.consumer_repo.delete_entity_by_id(consumer.id, external_project_id) except exception.NotFound: LOG.exception('Problem deleting consumer') _consumer_not_found() ret_data = self._return_container_data(self.container_id) LOG.info('Deleted a consumer for project: %s', external_project_id) return ret_data def _get_container(self, container_id): container = self.container_repo.get_container_by_id( container_id, suppress_exception=True) if not container: controllers.containers.container_not_found() return container def _return_container_data(self, container_id): container = self._get_container(container_id) dict_fields = container.to_dict_fields() for secret_ref in dict_fields['secret_refs']: hrefs.convert_to_hrefs(secret_ref) # TODO(john-wood-w) Why
two calls to convert_to_hrefs()? return hrefs.convert_to_hrefs( hrefs.convert_to_hrefs(dict_fields) ) barbican-6.0.0/barbican/api/app.wsgi0000666000175100017510000000164013245511001017262 0ustar zuulzuul00000000000000# -*- mode: python -*- # # Copyright 2016 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Use this file for deploying the API under mod_wsgi. See http://pecan.readthedocs.org/en/latest/deployment.html for details. NOTE(mtreinish): This wsgi script is deprecated since the wsgi app is now exposed as an entrypoint via barbican-wsgi-api """ from barbican.api import app application = app.get_api_wsgi_script() barbican-6.0.0/barbican/model/0000775000175100017510000000000013245511177016151 5ustar zuulzuul00000000000000barbican-6.0.0/barbican/model/clean.py0000666000175100017510000003537013245511001017601 0ustar zuulzuul00000000000000# Copyright (c) 2016 IBM # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from barbican.common import config from barbican.model import models from barbican.model import repositories as repo from oslo_log import log from oslo_utils import timeutils from sqlalchemy import sql as sa_sql import datetime # Import and configure logging. CONF = config.CONF log.setup(CONF, 'barbican') LOG = log.getLogger(__name__) def cleanup_unassociated_projects(): """Clean up unassociated projects. This looks for projects that have no children entries on the dependent tables and removes them. 
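
    A rough SQL analogue of the anti-join built below (illustrative
    only, with hypothetical table names; the real statement is generated
    by SQLAlchemy and repeats the outer join for every table in
    project_children_tables):

        DELETE FROM projects WHERE id IN (
            SELECT p.id FROM projects p
            LEFT OUTER JOIN orders o ON p.id = o.project_id
            WHERE o.id IS NULL);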
""" LOG.debug("Cleaning up unassociated projects") session = repo.get_session() project_children_tables = [models.Order, models.KEKDatum, models.Secret, models.ContainerConsumerMetadatum, models.Container, models.PreferredCertificateAuthority, models.CertificateAuthority, models.ProjectCertificateAuthority, models.ProjectQuotas] children_names = map(lambda child: child.__name__, project_children_tables) LOG.debug("Children tables for Project table being checked: %s", str(children_names)) sub_query = session.query(models.Project.id) for model in project_children_tables: sub_query = sub_query.outerjoin(model, models.Project.id == model.project_id) sub_query = sub_query.filter(model.id == None) # nopep8 sub_query = sub_query.subquery() sub_query = sa_sql.select([sub_query]) query = session.query(models.Project) query = query.filter(models.Project.id.in_(sub_query)) delete_count = query.delete(synchronize_session='fetch') LOG.info("Cleaned up %(delete_count)s entries for " "%(project_name)s", {'delete_count': str(delete_count), 'project_name': models.Project.__name__}) return delete_count def cleanup_parent_with_no_child(parent_model, child_model, threshold_date=None): """Clean up soft deletions in parent that do not have references in child. Before running this function, the child table should be cleaned of soft deletions. This function left outer joins the parent and child tables and finds the parent entries that do not have a foreign key reference in the child table. Then the results are filtered by soft deletions and are cleaned up. :param parent_model: table class for parent :param child_model: table class for child which restricts parent deletion :param threshold_date: soft deletions older than this date will be removed :returns: total number of entries removed from database """ LOG.debug("Cleaning soft deletes for %(parent_name)s without " "a child in %(child_name)s" % {'parent_name': parent_model.__name__, 'child_name': child_model.__name__}) session = repo.get_session() sub_query = session.query(parent_model.id) sub_query = sub_query.outerjoin(child_model) sub_query = sub_query.filter(child_model.id == None) # nopep8 sub_query = sub_query.subquery() sub_query = sa_sql.select([sub_query]) query = session.query(parent_model) query = query.filter(parent_model.id.in_(sub_query)) query = query.filter(parent_model.deleted) if threshold_date: query = query.filter(parent_model.deleted_at <= threshold_date) delete_count = query.delete(synchronize_session='fetch') LOG.info("Cleaned up %(delete_count)s entries for %(parent_name)s " "with no children in %(child_name)s", {'delete_count': delete_count, 'parent_name': parent_model.__name__, 'child_name': child_model.__name__}) return delete_count def cleanup_softdeletes(model, threshold_date=None): """Remove soft deletions from a table. :param model: table class to remove soft deletions :param threshold_date: soft deletions older than this date will be removed :returns: total number of entries removed from the database """ LOG.debug("Cleaning soft deletes: %s", model.__name__) session = repo.get_session() query = session.query(model) query = query.filter_by(deleted=True) if threshold_date: query = query.filter(model.deleted_at <= threshold_date) delete_count = query.delete() LOG.info("Cleaned up %(delete_count)s entries for %(model_name)s", {'delete_count': delete_count, 'model_name': model.__name__}) return delete_count def cleanup_all(threshold_date=None): """Clean up the main soft deletable resources. 
This function contains an order of calls to clean up the soft-deletable resources. :param threshold_date: soft deletions older than this date will be removed :returns: total number of entries removed from the database """ LOG.debug("Cleaning up soft deletions where deletion date" " is older than %s", str(threshold_date)) total = 0 total += cleanup_softdeletes(models.TransportKey, threshold_date=threshold_date) total += cleanup_softdeletes(models.OrderBarbicanMetadatum, threshold_date=threshold_date) total += cleanup_softdeletes(models.OrderRetryTask, threshold_date=threshold_date) total += cleanup_softdeletes(models.OrderPluginMetadatum, threshold_date=threshold_date) total += cleanup_parent_with_no_child(models.Order, models.OrderRetryTask, threshold_date=threshold_date) total += cleanup_softdeletes(models.EncryptedDatum, threshold_date=threshold_date) total += cleanup_softdeletes(models.SecretUserMetadatum, threshold_date=threshold_date) total += cleanup_softdeletes(models.SecretStoreMetadatum, threshold_date=threshold_date) total += cleanup_softdeletes(models.ContainerSecret, threshold_date=threshold_date) total += cleanup_parent_with_no_child(models.Secret, models.Order, threshold_date=threshold_date) total += cleanup_softdeletes(models.ContainerConsumerMetadatum, threshold_date=threshold_date) total += cleanup_parent_with_no_child(models.Container, models.Order, threshold_date=threshold_date) total += cleanup_softdeletes(models.KEKDatum, threshold_date=threshold_date) # TODO(edtubill) Clean up projects that were soft deleted by # the keystone listener LOG.info("Cleaned up %s soft deleted entries", total) return total def _soft_delete_expired_secrets(threshold_date): """Soft delete expired secrets. :param threshold_date: secrets that have expired past this date will be soft deleted :returns: total number of secrets that were soft deleted """ current_time = timeutils.utcnow() session = repo.get_session() query = session.query(models.Secret.id) query = query.filter(~models.Secret.deleted) query = query.filter( models.Secret.expiration <= threshold_date ) update_count = query.update( { models.Secret.deleted: True, models.Secret.deleted_at: current_time }, synchronize_session='fetch') return update_count def _hard_delete_acls_for_soft_deleted_secrets(): """Remove acl entries for secrets that have been soft deleted. Removes entries in SecretACL and SecretACLUser which are for secrets that have been soft deleted. 
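For reference, a sketch of driving cleanup_all() above with an age
threshold (illustrative values; assumes an initialized session
factory):

        import datetime

        from oslo_utils import timeutils

        # Purge soft-deleted rows older than 90 days.
        threshold = timeutils.utcnow() - datetime.timedelta(days=90)
        total = cleanup_all(threshold_date=threshold)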
""" session = repo.get_session() acl_user_sub_query = session.query(models.SecretACLUser.id) acl_user_sub_query = acl_user_sub_query.join(models.SecretACL) acl_user_sub_query = acl_user_sub_query.join(models.Secret) acl_user_sub_query = acl_user_sub_query.filter(models.Secret.deleted) acl_user_sub_query = acl_user_sub_query.subquery() acl_user_sub_query = sa_sql.select([acl_user_sub_query]) acl_user_query = session.query(models.SecretACLUser) acl_user_query = acl_user_query.filter( models.SecretACLUser.id.in_(acl_user_sub_query)) acl_total = acl_user_query.delete(synchronize_session='fetch') acl_sub_query = session.query(models.SecretACL.id) acl_sub_query = acl_sub_query.join(models.Secret) acl_sub_query = acl_sub_query.filter(models.Secret.deleted) acl_sub_query = acl_sub_query.subquery() acl_sub_query = sa_sql.select([acl_sub_query]) acl_query = session.query(models.SecretACL) acl_query = acl_query.filter( models.SecretACL.id.in_(acl_sub_query)) acl_total += acl_query.delete(synchronize_session='fetch') return acl_total def _soft_delete_expired_secret_children(threshold_date): """Soft delete the children tables of expired secrets. Soft deletes the children tables and hard deletes the ACL children tables of the expired secrets. :param threshold_date: threshold date for secret expiration :returns: returns a pair for number of soft delete children and deleted ACLs """ current_time = timeutils.utcnow() secret_children = [models.SecretStoreMetadatum, models.SecretUserMetadatum, models.EncryptedDatum, models.ContainerSecret] children_names = map(lambda child: child.__name__, secret_children) LOG.debug("Children tables for Secret table being checked: %s", str(children_names)) session = repo.get_session() update_count = 0 for table in secret_children: # Go through children and soft delete them sub_query = session.query(table.id) sub_query = sub_query.join(models.Secret) sub_query = sub_query.filter( models.Secret.expiration <= threshold_date ) sub_query = sub_query.subquery() sub_query = sa_sql.select([sub_query]) query = session.query(table) query = query.filter(table.id.in_(sub_query)) current_update_count = query.update( { table.deleted: True, table.deleted_at: current_time }, synchronize_session='fetch') update_count += current_update_count session.flush() acl_total = _hard_delete_acls_for_soft_deleted_secrets() return update_count, acl_total def soft_delete_expired_secrets(threshold_date): """Soft deletes secrets that are past expiration date. The expired secrets and its children are marked for deletion. ACLs are soft deleted and then purged from the database. :param threshold_date: secrets that have expired past this date will be soft deleted :returns: the sum of soft deleted entries and hard deleted acl entries """ # Note: sqllite does not support multiple table updates so # several db updates are used instead LOG.debug('Soft deleting expired secrets older than: %s', str(threshold_date)) update_count = _soft_delete_expired_secrets(threshold_date) children_count, acl_total = _soft_delete_expired_secret_children( threshold_date) update_count += children_count LOG.info("Soft deleted %(update_count)s entries due to secret " "expiration and %(acl_total)s secret acl entries " "were removed from the database", {'update_count': update_count, 'acl_total': acl_total}) return update_count + acl_total def clean_command(sql_url, min_num_days, do_clean_unassociated_projects, do_soft_delete_expired_secrets, verbose, log_file): """Clean command to clean up the database. 
:param sql_url: sql connection string to connect to a database :param min_num_days: clean up soft deletions older than this date :param do_clean_unassociated_projects: If True, clean up unassociated projects :param do_soft_delete_expired_secrets: If True, soft delete secrets that have expired :param verbose: If True, log and print more information :param log_file: If set, override the log_file configured """ if verbose: # The verbose flag prints out log events to the screen, otherwise # the log events will only go to the log file CONF.set_override('debug', True) if log_file: CONF.set_override('log_file', log_file) LOG.info("Cleaning up soft deletions in the barbican database") log.setup(CONF, 'barbican') cleanup_total = 0 current_time = timeutils.utcnow() stop_watch = timeutils.StopWatch() stop_watch.start() try: if sql_url: CONF.set_override('sql_connection', sql_url) repo.setup_database_engine_and_factory() if do_clean_unassociated_projects: cleanup_total += cleanup_unassociated_projects() if do_soft_delete_expired_secrets: cleanup_total += soft_delete_expired_secrets( threshold_date=current_time) threshold_date = None if min_num_days >= 0: threshold_date = current_time - datetime.timedelta( days=min_num_days) else: threshold_date = current_time cleanup_total += cleanup_all(threshold_date=threshold_date) repo.commit() except Exception as ex: LOG.exception('Failed to clean up soft deletions in database.') repo.rollback() cleanup_total = 0 # rollback happened, no entries affected raise ex finally: stop_watch.stop() elapsed_time = stop_watch.elapsed() if verbose: CONF.clear_override('debug') if log_file: CONF.clear_override('log_file') repo.clear() if sql_url: CONF.clear_override('sql_connection') log.setup(CONF, 'barbican') # reset the overrides LOG.info("Cleaning of database affected %s entries", cleanup_total) LOG.info('DB clean up finished in %s seconds', elapsed_time) barbican-6.0.0/barbican/model/models.py0000666000175100017510000014544713245511001020011 0ustar zuulzuul00000000000000# Copyright (c) 2013-2014 Rackspace, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
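# For reference, clean_command() in clean.py above is the entry point
# behind the 'barbican-manage db clean' sub-command; a direct call
# would look roughly like this (illustrative values only):
#
#     from barbican.model import clean
#
#     clean.clean_command(
#         sql_url='mysql+pymysql://barbican:password@localhost/barbican',
#         min_num_days=90,
#         do_clean_unassociated_projects=True,
#         do_soft_delete_expired_secrets=True,
#         verbose=True,
#         log_file='/var/log/barbican/db-clean.log')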
""" Defines database models for Barbican """ import hashlib from oslo_serialization import jsonutils as json from oslo_utils import timeutils import six import sqlalchemy as sa from sqlalchemy.ext import compiler from sqlalchemy.ext import declarative from sqlalchemy import orm from sqlalchemy.orm import collections as col from sqlalchemy import types as sql_types from barbican.common import exception from barbican.common import utils from barbican import i18n as u BASE = declarative.declarative_base() ERROR_REASON_LENGTH = 255 SUB_STATUS_LENGTH = 36 SUB_STATUS_MESSAGE_LENGTH = 255 # Allowed entity states class States(object): PENDING = 'PENDING' ACTIVE = 'ACTIVE' ERROR = 'ERROR' @classmethod def is_valid(cls, state_to_test): """Tests if a state is a valid one.""" return state_to_test in cls.__dict__ class OrderType(object): KEY = 'key' ASYMMETRIC = 'asymmetric' CERTIFICATE = 'certificate' @classmethod def is_valid(cls, order_type): """Tests if a order type is a valid one.""" return order_type in cls.__dict__ class OrderStatus(object): def __init__(self, id, message): self.id = id self.message = message @compiler.compiles(sa.BigInteger, 'sqlite') def compile_big_int_sqlite(type_, compiler, **kw): return 'INTEGER' class JsonBlob(sql_types.TypeDecorator): """JsonBlob is custom type for fields which need to store JSON text.""" impl = sa.Text def process_bind_param(self, value, dialect): if value is not None: return json.dumps(value) return value def process_result_value(self, value, dialect): if value is not None: return json.loads(value) return value class ModelBase(object): """Base class for Nova and Barbican Models.""" __table_args__ = {'mysql_engine': 'InnoDB'} __table_initialized__ = False __protected_attributes__ = { "created_at", "updated_at", "deleted_at", "deleted"} id = sa.Column(sa.String(36), primary_key=True, default=utils.generate_uuid) created_at = sa.Column(sa.DateTime, default=timeutils.utcnow, nullable=False) updated_at = sa.Column(sa.DateTime, default=timeutils.utcnow, nullable=False, onupdate=timeutils.utcnow) deleted_at = sa.Column(sa.DateTime) deleted = sa.Column(sa.Boolean, nullable=False, default=False) status = sa.Column(sa.String(20), nullable=False, default=States.PENDING) def save(self, session=None): """Save this object.""" # import api here to prevent circular dependency problem import barbican.model.repositories session = session or barbican.model.repositories.get_session() # if model is being created ensure that created/updated are the same if self.id is None: self.created_at = timeutils.utcnow() self.updated_at = self.created_at session.add(self) session.flush() def delete(self, session=None): """Delete this object.""" import barbican.model.repositories session = session or barbican.model.repositories.get_session() self._do_delete_children(session) session.delete(self) def _do_delete_children(self, session): """Sub-class hook: delete children relationships.""" pass def update(self, values): """dict.update() behaviour.""" for k, v in values.items(): self[k] = v def __setitem__(self, key, value): setattr(self, key, value) def __getitem__(self, key): return getattr(self, key) def __iter__(self): self._i = iter(orm.object_mapper(self).sa.Columns) return self def next(self): n = next(self._i).name return n, getattr(self, n) def keys(self): return self.__dict__.keys() def values(self): return self.__dict__.values() def items(self): return self.__dict__.items() def to_dict(self): return self.__dict__.copy() def to_dict_fields(self): """Returns a dictionary of just 
the db fields of this entity.""" if self.created_at: created_at = self.created_at.isoformat() else: created_at = self.created_at if self.updated_at: updated_at = self.updated_at.isoformat() else: updated_at = self.updated_at dict_fields = { 'created': created_at, 'updated': updated_at, 'status': self.status } if self.deleted_at: dict_fields['deleted_at'] = self.deleted_at.isoformat() if self.deleted: dict_fields['deleted'] = True dict_fields.update(self._do_extra_dict_fields()) return dict_fields def _do_extra_dict_fields(self): """Sub-class hook method: return dict of fields.""" return {} def _iso_to_datetime(self, expiration): """Convert ISO formatted string to datetime.""" if isinstance(expiration, six.string_types): expiration_iso = timeutils.parse_isotime(expiration.strip()) expiration = timeutils.normalize_time(expiration_iso) return expiration class SoftDeleteMixIn(object): """Mix-in class that adds soft delete functionality.""" def delete(self, session=None): """Delete this object.""" import barbican.model.repositories session = session or barbican.model.repositories.get_session() self.deleted = True self.deleted_at = timeutils.utcnow() self.save(session=session) self._do_delete_children(session) class ContainerSecret(BASE, SoftDeleteMixIn, ModelBase): """Represents an association between a Container and a Secret.""" __tablename__ = 'container_secret' name = sa.Column(sa.String(255), nullable=True) container_id = sa.Column( sa.String(36), sa.ForeignKey('containers.id'), index=True, nullable=False) secret_id = sa.Column( sa.String(36), sa.ForeignKey('secrets.id'), index=True, nullable=False) # Eager load this relationship via 'lazy=False'. container = orm.relationship( 'Container', backref=orm.backref('container_secrets', lazy=False, primaryjoin="and_(ContainerSecret.container_id == " "Container.id, ContainerSecret.deleted!=True)")) secrets = orm.relationship( 'Secret', backref=orm.backref('container_secrets', primaryjoin="and_(ContainerSecret.secret_id == " "Secret.id, ContainerSecret.deleted!=True)")) __table_args__ = (sa.UniqueConstraint('container_id', 'secret_id', 'name', name='_container_secret_name_uc'),) class Project(BASE, SoftDeleteMixIn, ModelBase): """Represents a Project in the datastore. Projects are users that wish to store secret information within Barbican. """ __tablename__ = 'projects' external_id = sa.Column(sa.String(255), unique=True) orders = orm.relationship("Order", backref="project") secrets = orm.relationship("Secret", backref="project") keks = orm.relationship("KEKDatum", backref="project") containers = orm.relationship("Container", backref="project") cas = orm.relationship("ProjectCertificateAuthority", backref="project") project_quotas = orm.relationship("ProjectQuotas", backref="project") def _do_extra_dict_fields(self): """Sub-class hook method: return dict of fields.""" return {'external_id': self.external_id} class Secret(BASE, SoftDeleteMixIn, ModelBase): """Represents a Secret in the datastore. Secrets are any information Projects wish to store within Barbican, though the actual encrypted data is stored in one or more EncryptedData entities on behalf of a Secret. 
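A construction sketch (illustrative values; the keys mirror exactly
what __init__ below reads from the parsed request):

        parsed_request = {
            'name': 'db-password',
            'secret_type': 'opaque',
            'algorithm': 'aes',
            'bit_length': 256,
            'mode': 'cbc',
            'expiration': '2018-12-31T00:00:00Z',
            'creator_id': 'user-uuid',      # hypothetical ids
            'project_id': 'project-uuid',
        }
        secret = Secret(parsed_request)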
""" __tablename__ = 'secrets' name = sa.Column(sa.String(255)) secret_type = sa.Column(sa.String(255), server_default=utils.SECRET_TYPE_OPAQUE) expiration = sa.Column(sa.DateTime, default=None) algorithm = sa.Column(sa.String(255)) bit_length = sa.Column(sa.Integer) mode = sa.Column(sa.String(255)) creator_id = sa.Column(sa.String(255)) project_id = sa.Column( sa.String(36), sa.ForeignKey('projects.id', name='secrets_project_fk'), index=True, nullable=False) # TODO(jwood): Performance - Consider avoiding full load of all # datum attributes here. This is only being done to support the # building of the list of supported content types when secret # metadata is retrieved. # See barbican.api.resources.py::SecretsResource.on_get() # Eager load this relationship via 'lazy=False'. encrypted_data = orm.relationship("EncryptedDatum", lazy=False) secret_store_metadata = orm.relationship( "SecretStoreMetadatum", collection_class=col.attribute_mapped_collection('key'), backref="secret", cascade="all, delete-orphan") secret_user_metadata = orm.relationship( "SecretUserMetadatum", collection_class=col.attribute_mapped_collection('key'), backref="secret", cascade="all, delete-orphan") def __init__(self, parsed_request=None): """Creates secret from a dict.""" super(Secret, self).__init__() if parsed_request: self.name = parsed_request.get('name') self.secret_type = parsed_request.get( 'secret_type', utils.SECRET_TYPE_OPAQUE) expiration = self._iso_to_datetime(parsed_request.get ('expiration')) self.expiration = expiration self.algorithm = parsed_request.get('algorithm') self.bit_length = parsed_request.get('bit_length') self.mode = parsed_request.get('mode') self.creator_id = parsed_request.get('creator_id') self.project_id = parsed_request.get('project_id') self.status = States.ACTIVE def _do_delete_children(self, session): """Sub-class hook: delete children relationships.""" for k, v in self.secret_store_metadata.items(): v.delete(session) for k, v in self.secret_user_metadata.items(): v.delete(session) for datum in self.encrypted_data: datum.delete(session) for secret_ref in self.container_secrets: session.delete(secret_ref) for secret_acl in self.secret_acls: session.delete(secret_acl) def _do_extra_dict_fields(self): """Sub-class hook method: return dict of fields.""" if self.expiration: expiration = self.expiration.isoformat() else: expiration = self.expiration return { 'secret_id': self.id, 'name': self.name, 'secret_type': self.secret_type, 'expiration': expiration, 'algorithm': self.algorithm, 'bit_length': self.bit_length, 'mode': self.mode, 'creator_id': self.creator_id, } class SecretStoreMetadatum(BASE, SoftDeleteMixIn, ModelBase): """Represents Secret Store metadatum for a single key-value pair.""" __tablename__ = "secret_store_metadata" key = sa.Column(sa.String(255), nullable=False) value = sa.Column(sa.String(255), nullable=False) secret_id = sa.Column( sa.String(36), sa.ForeignKey('secrets.id'), index=True, nullable=False) def __init__(self, key, value): super(SecretStoreMetadatum, self).__init__() msg = u._("Must supply non-None {0} argument " "for SecretStoreMetadatum entry.") if key is None: raise exception.MissingArgumentError(msg.format("key")) self.key = key if value is None: raise exception.MissingArgumentError(msg.format("value")) self.value = value def _do_extra_dict_fields(self): """Sub-class hook method: return dict of fields.""" return { 'key': self.key, 'value': self.value } class SecretUserMetadatum(BASE, SoftDeleteMixIn, ModelBase): """Represents Secret user metadatum for 
a single key-value pair.""" __tablename__ = "secret_user_metadata" key = sa.Column(sa.String(255), nullable=False) value = sa.Column(sa.String(255), nullable=False) secret_id = sa.Column( sa.String(36), sa.ForeignKey('secrets.id'), index=True, nullable=False) __table_args__ = (sa.UniqueConstraint('secret_id', 'key', name='_secret_key_uc'),) def __init__(self, key, value): super(SecretUserMetadatum, self).__init__() msg = u._("Must supply non-None {0} argument " "for SecretUserMetadatum entry.") if key is None: raise exception.MissingArgumentError(msg.format("key")) self.key = key if value is None: raise exception.MissingArgumentError(msg.format("value")) self.value = value def _do_extra_dict_fields(self): """Sub-class hook method: return dict of fields.""" return { 'key': self.key, 'value': self.value } class EncryptedDatum(BASE, SoftDeleteMixIn, ModelBase): """Represents the encrypted data for a Secret.""" __tablename__ = 'encrypted_data' content_type = sa.Column(sa.String(255)) secret_id = sa.Column( sa.String(36), sa.ForeignKey('secrets.id'), index=True, nullable=False) kek_id = sa.Column( sa.String(36), sa.ForeignKey('kek_data.id'), index=True, nullable=False) # TODO(jwood) Why LargeBinary on Postgres (BYTEA) not work correctly? cypher_text = sa.Column(sa.Text) kek_meta_extended = sa.Column(sa.Text) # Eager load this relationship via 'lazy=False'. kek_meta_project = orm.relationship("KEKDatum", lazy=False) def __init__(self, secret=None, kek_datum=None): """Creates encrypted datum from a secret and KEK metadata.""" super(EncryptedDatum, self).__init__() if secret: self.secret_id = secret.id if kek_datum: self.kek_id = kek_datum.id self.kek_meta_project = kek_datum self.status = States.ACTIVE def _do_extra_dict_fields(self): """Sub-class hook method: return dict of fields.""" return {'content_type': self.content_type} class KEKDatum(BASE, SoftDeleteMixIn, ModelBase): """Key encryption key (KEK) metadata model. Represents the key encryption key (KEK) metadata associated with a process used to encrypt/decrypt secret information. When a secret is encrypted, in addition to the cypher text, the Barbican encryption process produces a KEK metadata object. The cypher text is stored via the EncryptedDatum model above, whereas the metadata is stored within this model. Decryption processes utilize this KEK metadata to decrypt the associated cypher text. Note that this model is intended to be agnostic to the specific means used to encrypt/decrypt the secret information, so please do not place vendor- specific attributes here. Note as well that each Project will have at most one 'active=True' KEKDatum instance at a time, representing the most recent KEK metadata instance to use for encryption processes performed on behalf of the Project. KEKDatum instances that are 'active=False' are associated to previously used encryption processes for the Project, that eventually should be rotated and deleted with the Project's active KEKDatum. 
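A query sketch for fetching the single active KEK of a project
(illustrative; 'session' and 'project' are assumed to come from the
repositories layer):

        kek = session.query(KEKDatum).filter_by(
            project_id=project.id,
            active=True,
            deleted=False).one_or_none()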
""" __tablename__ = 'kek_data' plugin_name = sa.Column(sa.String(255), nullable=False) kek_label = sa.Column(sa.String(255)) project_id = sa.Column( sa.String(36), sa.ForeignKey('projects.id', name='kek_data_project_fk'), index=True, nullable=False) active = sa.Column(sa.Boolean, nullable=False, default=True) bind_completed = sa.Column(sa.Boolean, nullable=False, default=False) algorithm = sa.Column(sa.String(255)) bit_length = sa.Column(sa.Integer) mode = sa.Column(sa.String(255)) plugin_meta = sa.Column(sa.Text) def _do_extra_dict_fields(self): """Sub-class hook method: return dict of fields.""" return {'algorithm': self.algorithm} class Order(BASE, SoftDeleteMixIn, ModelBase): """Represents an Order in the datastore. Orders are requests for Barbican to generate secrets, ranging from symmetric, asymmetric keys to automated requests to Certificate Authorities to generate SSL certificates. """ __tablename__ = 'orders' type = sa.Column(sa.String(255), nullable=False, default='key') project_id = sa.Column( sa.String(36), sa.ForeignKey('projects.id', name='orders_project_fk'), index=True, nullable=False) error_status_code = sa.Column(sa.String(16)) error_reason = sa.Column(sa.String(ERROR_REASON_LENGTH)) meta = sa.Column(JsonBlob(), nullable=True) secret_id = sa.Column(sa.String(36), sa.ForeignKey('secrets.id'), index=True, nullable=True) container_id = sa.Column(sa.String(36), sa.ForeignKey('containers.id'), index=True, nullable=True) sub_status = sa.Column(sa.String(SUB_STATUS_LENGTH), nullable=True) sub_status_message = sa.Column(sa.String(SUB_STATUS_MESSAGE_LENGTH), nullable=True) creator_id = sa.Column(sa.String(255)) order_plugin_metadata = orm.relationship( "OrderPluginMetadatum", collection_class=col.attribute_mapped_collection('key'), backref="order", cascade="all, delete-orphan") order_barbican_metadata = orm.relationship( "OrderBarbicanMetadatum", collection_class=col.attribute_mapped_collection('key'), backref="order", cascade="all, delete-orphan") def __init__(self, parsed_request=None): """Creates a Order entity from a dict.""" super(Order, self).__init__() if parsed_request: self.type = parsed_request.get('type') self.meta = parsed_request.get('meta') self.status = States.ACTIVE self.sub_status = parsed_request.get('sub_status') self.sub_status_message = parsed_request.get( 'sub_status_message') self.creator_id = parsed_request.get('creator_id') def set_error_reason_safely(self, error_reason_raw): """Ensure error reason does not raise database attribute exceptions.""" self.error_reason = error_reason_raw[:ERROR_REASON_LENGTH] def set_sub_status_safely(self, sub_status_raw): """Ensure sub-status does not raise database attribute exceptions.""" self.sub_status = sub_status_raw[:SUB_STATUS_LENGTH] def set_sub_status_message_safely(self, sub_status_message_raw): """Ensure status message doesn't raise database attrib. 
exceptions.""" self.sub_status_message = sub_status_message_raw[ :SUB_STATUS_MESSAGE_LENGTH ] def _do_delete_children(self, session): """Sub-class hook: delete children relationships.""" for k, v in self.order_plugin_metadata.items(): v.delete(session) for k, v in self.order_barbican_metadata.items(): v.delete(session) def _do_extra_dict_fields(self): """Sub-class hook method: return dict of fields.""" ret = { 'type': self.type, 'meta': self.meta, 'order_id': self.id } if self.secret_id: ret['secret_id'] = self.secret_id if self.container_id: ret['container_id'] = self.container_id if self.error_status_code: ret['error_status_code'] = self.error_status_code if self.error_reason: ret['error_reason'] = self.error_reason if self.sub_status: ret['sub_status'] = self.sub_status if self.sub_status_message: ret['sub_status_message'] = self.sub_status_message if self.creator_id: ret['creator_id'] = self.creator_id return ret class OrderPluginMetadatum(BASE, SoftDeleteMixIn, ModelBase): """Represents Order plugin metadatum for a single key-value pair. This entity is used to store plugin-specific metadata on behalf of an Order instance. """ __tablename__ = "order_plugin_metadata" order_id = sa.Column(sa.String(36), sa.ForeignKey('orders.id'), index=True, nullable=False) key = sa.Column(sa.String(255), nullable=False) value = sa.Column(sa.String(255), nullable=False) def __init__(self, key, value): super(OrderPluginMetadatum, self).__init__() msg = u._("Must supply non-None {0} argument " "for OrderPluginMetadatum entry.") if key is None: raise exception.MissingArgumentError(msg.format("key")) self.key = key if value is None: raise exception.MissingArgumentError(msg.format("value")) self.value = value def _do_extra_dict_fields(self): """Sub-class hook method: return dict of fields.""" return {'key': self.key, 'value': self.value} class OrderBarbicanMetadatum(BASE, SoftDeleteMixIn, ModelBase): """Represents Order barbican metadatum for a single key-value pair. This entity is used to store barbican-specific metadata on behalf of an Order instance. This is data that is stored by the server to help process the order through its life cycle, but which is not in the original request. 
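Because the Order relationship maps these rows by key, attaching a
metadatum is a plain mapping assignment; a sketch with a hypothetical
key/value:

        order.order_barbican_metadata['plugin_name'] = \
            OrderBarbicanMetadatum('plugin_name', 'simple_certificate')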
""" __tablename__ = "order_barbican_metadata" order_id = sa.Column(sa.String(36), sa.ForeignKey('orders.id'), index=True, nullable=False) key = sa.Column(sa.String(255), nullable=False) value = sa.Column(sa.Text, nullable=False) def __init__(self, key, value): super(OrderBarbicanMetadatum, self).__init__() msg = u._("Must supply non-None {0} argument " "for OrderBarbicanMetadatum entry.") if key is None: raise exception.MissingArgumentError(msg.format("key")) self.key = key if value is None: raise exception.MissingArgumentError(msg.format("value")) self.value = value def _do_extra_dict_fields(self): """Sub-class hook method: return dict of fields.""" return {'key': self.key, 'value': self.value} class OrderRetryTask(BASE, SoftDeleteMixIn, ModelBase): __tablename__ = "order_retry_tasks" __table_args__ = {"mysql_engine": "InnoDB"} __table_initialized__ = False id = sa.Column( sa.String(36), primary_key=True, default=utils.generate_uuid, ) order_id = sa.Column( sa.String(36), sa.ForeignKey("orders.id"), index=True, nullable=False, ) retry_task = sa.Column(sa.Text, nullable=False) retry_at = sa.Column(sa.DateTime, default=None, nullable=False) retry_args = sa.Column(JsonBlob(), nullable=False) retry_kwargs = sa.Column(JsonBlob(), nullable=False) retry_count = sa.Column(sa.Integer, nullable=False, default=0) class Container(BASE, SoftDeleteMixIn, ModelBase): """Represents a Container for Secrets in the datastore. Containers store secret references. Containers are owned by Projects. Containers can be generic or have a predefined type. Predefined typed containers allow users to store structured key relationship inside Barbican. """ __tablename__ = 'containers' name = sa.Column(sa.String(255)) type = sa.Column(sa.Enum('generic', 'rsa', 'dsa', 'certificate', name='container_types')) project_id = sa.Column( sa.String(36), sa.ForeignKey('projects.id', name='containers_project_fk'), index=True, nullable=False) consumers = sa.orm.relationship("ContainerConsumerMetadatum") creator_id = sa.Column(sa.String(255)) def __init__(self, parsed_request=None): """Creates a Container entity from a dict.""" super(Container, self).__init__() if parsed_request: self.name = parsed_request.get('name') self.type = parsed_request.get('type') self.status = States.ACTIVE self.creator_id = parsed_request.get('creator_id') secret_refs = parsed_request.get('secret_refs') if secret_refs: for secret_ref in parsed_request.get('secret_refs'): container_secret = ContainerSecret() container_secret.name = secret_ref.get('name') # TODO(hgedikli) move this into a common location # TODO(hgedikli) validate provided url # TODO(hgedikli) parse out secret_id with regex secret_id = secret_ref.get('secret_ref') if secret_id.endswith('/'): secret_id = secret_id.rsplit('/', 2)[1] elif '/' in secret_id: secret_id = secret_id.rsplit('/', 1)[1] else: secret_id = secret_id container_secret.secret_id = secret_id self.container_secrets.append(container_secret) def _do_delete_children(self, session): """Sub-class hook: delete children relationships.""" for container_secret in self.container_secrets: session.delete(container_secret) for container_acl in self.container_acls: session.delete(container_acl) def _do_extra_dict_fields(self): """Sub-class hook method: return dict of fields.""" return {'container_id': self.id, 'name': self.name, 'type': self.type, 'creator_id': self.creator_id, 'secret_refs': [ { 'secret_id': container_secret.secret_id, 'name': container_secret.name if hasattr(container_secret, 'name') else None } for container_secret in 
self.container_secrets], 'consumers': [ { 'name': consumer.name, 'URL': consumer.URL } for consumer in self.consumers if not consumer.deleted ]} class ContainerConsumerMetadatum(BASE, SoftDeleteMixIn, ModelBase): """Stores Consumer Registrations for Containers in the datastore. Services can register interest in Containers. Services will provide a type and a URL for the object that is using the Container. """ __tablename__ = 'container_consumer_metadata' container_id = sa.Column(sa.String(36), sa.ForeignKey('containers.id'), index=True, nullable=False) project_id = sa.Column(sa.String(36), sa.ForeignKey('projects.id'), index=True, nullable=True) name = sa.Column(sa.String(36)) URL = sa.Column(sa.String(255)) data_hash = sa.Column(sa.CHAR(64)) __table_args__ = ( sa.UniqueConstraint('data_hash', name='_consumer_hashed_container_name_url_uc'), sa.Index('values_index', 'container_id', 'name', 'URL') ) def __init__(self, container_id, project_id, parsed_request): """Registers a Consumer to a Container.""" super(ContainerConsumerMetadatum, self).__init__() # TODO(john-wood-w) This class should really be immutable due to the # data_hash attribute. if container_id and parsed_request: self.container_id = container_id self.project_id = project_id self.name = parsed_request.get('name') self.URL = parsed_request.get('URL') hash_text = ''.join((self.container_id, self.name, self.URL)) self.data_hash = hashlib.sha256(hash_text. encode('utf-8')).hexdigest() self.status = States.ACTIVE def _do_extra_dict_fields(self): """Sub-class hook method: return dict of fields.""" return {'name': self.name, 'URL': self.URL} class TransportKey(BASE, SoftDeleteMixIn, ModelBase): """Transport Key model for wrapping secrets in transit Represents the transport key used for wrapping secrets in transit to/from clients when storing/retrieving secrets. """ __tablename__ = 'transport_keys' plugin_name = sa.Column(sa.String(255), nullable=False) transport_key = sa.Column(sa.Text, nullable=False) def __init__(self, plugin_name, transport_key): """Creates transport key entity.""" super(TransportKey, self).__init__() msg = u._("Must supply non-None {0} argument for TransportKey entry.") if plugin_name is None: raise exception.MissingArgumentError(msg.format("plugin_name")) self.plugin_name = plugin_name if transport_key is None: raise exception.MissingArgumentError(msg.format("transport_key")) self.transport_key = transport_key self.status = States.ACTIVE def _do_extra_dict_fields(self): """Sub-class hook method: return dict of fields.""" return {'transport_key_id': self.id, 'plugin_name': self.plugin_name} class CertificateAuthority(BASE, ModelBase): """CertificateAuthority model to specify the CAs available to Barbican Represents the CAs available for certificate issuance to Barbican. 
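A construction sketch (illustrative; the plugin names are made up,
and any keys beyond the well-known ones are stored as
CertificateAuthorityMetadatum rows, as __init__ below shows):

        ca = CertificateAuthority({
            'plugin_name': 'snakeoil_ca',
            'plugin_ca_id': 'ca-plugin-id-1',
            'expiration': '2020-01-01T00:00:00Z',
            'name': 'Example CA',           # becomes a ca_meta entry
        })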
""" __tablename__ = 'certificate_authorities' plugin_name = sa.Column(sa.String(255), nullable=False) plugin_ca_id = sa.Column(sa.Text, nullable=False) expiration = sa.Column(sa.DateTime, default=None) creator_id = sa.Column(sa.String(255), nullable=True) project_id = sa.Column( sa.String(36), sa.ForeignKey('projects.id', name='cas_project_fk'), nullable=True) ca_meta = orm.relationship( 'CertificateAuthorityMetadatum', collection_class=col.attribute_mapped_collection('key'), backref="ca", cascade="all, delete-orphan" ) def __init__(self, parsed_ca_in): """Creates certificate authority entity.""" super(CertificateAuthority, self).__init__() msg = u._("Must supply Non-None {0} argument " "for CertificateAuthority entry.") parsed_ca = dict(parsed_ca_in) plugin_name = parsed_ca.pop('plugin_name', None) if plugin_name is None: raise exception.MissingArgumentError(msg.format("plugin_name")) self.plugin_name = plugin_name plugin_ca_id = parsed_ca.pop('plugin_ca_id', None) if plugin_ca_id is None: raise exception.MissingArgumentError(msg.format("plugin_ca_id")) self.plugin_ca_id = plugin_ca_id expiration = parsed_ca.pop('expiration', None) self.expiration = self._iso_to_datetime(expiration) creator_id = parsed_ca.pop('creator_id', None) if creator_id is not None: self.creator_id = creator_id project_id = parsed_ca.pop('project_id', None) if project_id is not None: self.project_id = project_id for key in parsed_ca: meta = CertificateAuthorityMetadatum(key, parsed_ca[key]) self.ca_meta[key] = meta self.status = States.ACTIVE def _do_delete_children(self, session): """Sub-class hook: delete children relationships.""" for k, v in self.ca_meta.items(): v.delete(session) def _do_extra_dict_fields(self): """Sub-class hook method: return dict of fields.""" if self.expiration: expiration = self.expiration.isoformat() else: expiration = None return { 'ca_id': self.id, 'plugin_name': self.plugin_name, 'plugin_ca_id': self.plugin_ca_id, 'expiration': expiration, 'meta': [ { meta['key']: meta['value'] } for key, meta in self.ca_meta.items() ] } class CertificateAuthorityMetadatum(BASE, ModelBase): """Represents CA metadatum for a single key-value pair.""" __tablename__ = "certificate_authority_metadata" key = sa.Column(sa.String(255), index=True, nullable=False) value = sa.Column(sa.Text, nullable=False) ca_id = sa.Column( sa.String(36), sa.ForeignKey('certificate_authorities.id'), index=True, nullable=False) __table_args__ = (sa.UniqueConstraint( 'ca_id', 'key', name='_certificate_authority_metadatum_uc'),) def __init__(self, key, value): super(CertificateAuthorityMetadatum, self).__init__() msg = u._("Must supply non-None {0} argument " "for CertificateAuthorityMetadatum entry.") if key is None: raise exception.MissingArgumentError(msg.format("key")) self.key = key if value is None: raise exception.MissingArgumentError(msg.format("value")) self.value = value def _do_extra_dict_fields(self): """Sub-class hook method: return dict of fields.""" return { 'key': self.key, 'value': self.value } class ProjectCertificateAuthority(BASE, ModelBase): """Stores CAs available for a project. Admins can define a set of CAs that are available for use in a particular project. There can be multiple entries for any given project. 
""" __tablename__ = 'project_certificate_authorities' project_id = sa.Column(sa.String(36), sa.ForeignKey('projects.id'), index=True, nullable=False) ca_id = sa.Column(sa.String(36), sa.ForeignKey('certificate_authorities.id'), index=True, nullable=False) ca = orm.relationship("CertificateAuthority", backref="project_cas") __table_args__ = (sa.UniqueConstraint( 'project_id', 'ca_id', name='_project_certificate_authority_uc'),) def __init__(self, project_id, ca_id): """Registers a Consumer to a Container.""" super(ProjectCertificateAuthority, self).__init__() msg = u._("Must supply non-None {0} argument " "for ProjectCertificateAuthority entry.") if project_id is None: raise exception.MissingArgumentError(msg.format("project_id")) self.project_id = project_id if ca_id is None: raise exception.MissingArgumentError(msg.format("ca_id")) self.ca_id = ca_id self.status = States.ACTIVE def _do_extra_dict_fields(self): """Sub-class hook method: return dict of fields.""" return {'project_id': self.project_id, 'ca_id': self.ca_id} class PreferredCertificateAuthority(BASE, ModelBase): """Stores preferred CAs for any project. Admins can define a set of CAs available for issuance requests for any project in the ProjectCertificateAuthority table.. """ __tablename__ = 'preferred_certificate_authorities' project_id = sa.Column(sa.String(36), sa.ForeignKey('projects.id'), index=True, unique=True, nullable=False) ca_id = sa.Column(sa.String(36), sa.ForeignKey( 'certificate_authorities.id', name='preferred_certificate_authorities_fk'), index=True, nullable=False) project = orm.relationship('Project', backref=orm.backref('preferred_ca'), uselist=False) ca = orm.relationship('CertificateAuthority', backref=orm.backref('preferred_ca')) def __init__(self, project_id, ca_id): """Registers a Consumer to a Container.""" super(PreferredCertificateAuthority, self).__init__() msg = u._("Must supply non-None {0} argument " "for PreferredCertificateAuthority entry.") if project_id is None: raise exception.MissingArgumentError(msg.format("project_id")) self.project_id = project_id if ca_id is None: raise exception.MissingArgumentError(msg.format("ca_id")) self.ca_id = ca_id self.status = States.ACTIVE def _do_extra_dict_fields(self): """Sub-class hook method: return dict of fields.""" return {'project_id': self.project_id, 'ca_id': self.ca_id} class SecretACL(BASE, ModelBase): """Stores Access Control List (ACL) for a secret. Class to define whitelist of user ids who are allowed specific operation on a secret. List of user ids is defined via SecretACLUser via acl_users association. Creator_only flag helps in making a secret private for non-admin project users who may have access otherwise. SecretACL deletes are not soft-deletes. 
""" __tablename__ = 'secret_acls' secret_id = sa.Column(sa.String(36), sa.ForeignKey('secrets.id'), index=True, nullable=False) operation = sa.Column(sa.String(255), nullable=False) project_access = sa.Column(sa.Boolean, nullable=False, default=True) secret = orm.relationship( 'Secret', backref=orm.backref('secret_acls', lazy=False)) acl_users = orm.relationship( 'SecretACLUser', backref=orm.backref('secret_acl', lazy=False), cascade="all, delete-orphan") __table_args__ = (sa.UniqueConstraint( 'secret_id', 'operation', name='_secret_acl_operation_uc'),) def __init__(self, secret_id, operation, project_access=None, user_ids=None): """Creates secret ACL entity.""" super(SecretACL, self).__init__() msg = u._("Must supply non-None {0} argument for SecretACL entry.") if secret_id is None: raise exception.MissingArgumentError(msg.format("secret_id")) self.secret_id = secret_id if operation is None: raise exception.MissingArgumentError(msg.format("operation")) self.operation = operation if project_access is not None: self.project_access = project_access self.status = States.ACTIVE if user_ids is not None and isinstance(user_ids, list): userids = set(user_ids) # remove duplicate if any for user_id in userids: acl_user = SecretACLUser(self.id, user_id) self.acl_users.append(acl_user) def _do_delete_children(self, session): """Sub-class hook: delete children relationships.""" for acl_user in self.acl_users: acl_user.delete(session) def _do_extra_dict_fields(self): """Sub-class hook method: return dict of fields. Adds non-deleted acl related users from relationship if there. """ users = [acl_user.user_id for acl_user in self.acl_users if not acl_user.deleted] fields = {'acl_id': self.id, 'secret_id': self.secret_id, 'operation': self.operation, 'project_access': self.project_access} if users: fields['users'] = users return fields class ContainerACL(BASE, ModelBase): """Stores Access Control List (ACL) for a container. Class to define whitelist of user ids who are allowed specific operation on a container. List of user ids is defined in ContainerACLUser via acl_users association. Creator_only flag helps in making a container private for non-admin project users who may have access otherwise. ContainerACL deletes are not soft-deletes. 
""" __tablename__ = 'container_acls' container_id = sa.Column(sa.String(36), sa.ForeignKey('containers.id'), index=True, nullable=False) operation = sa.Column(sa.String(255), nullable=False) project_access = sa.Column(sa.Boolean, nullable=False, default=True) container = orm.relationship( 'Container', backref=orm.backref('container_acls', lazy=False)) acl_users = orm.relationship( 'ContainerACLUser', backref=orm.backref('container_acl', lazy=False), cascade="all, delete-orphan") __table_args__ = (sa.UniqueConstraint( 'container_id', 'operation', name='_container_acl_operation_uc'),) def __init__(self, container_id, operation, project_access=None, user_ids=None): """Creates container ACL entity.""" super(ContainerACL, self).__init__() msg = u._("Must supply non-None {0} argument for ContainerACL entry.") if container_id is None: raise exception.MissingArgumentError(msg.format("container_id")) self.container_id = container_id if operation is None: raise exception.MissingArgumentError(msg.format("operation")) self.operation = operation if project_access is not None: self.project_access = project_access self.status = States.ACTIVE if user_ids is not None and isinstance(user_ids, list): userids = set(user_ids) # remove duplicate if any for user_id in userids: acl_user = ContainerACLUser(self.id, user_id) self.acl_users.append(acl_user) def _do_delete_children(self, session): """Sub-class hook: delete children relationships.""" for acl_user in self.acl_users: acl_user.delete(session) def _do_extra_dict_fields(self): """Sub-class hook method: return dict of fields. Adds non-deleted acl related users from relationship if there. """ users = [acl_user.user_id for acl_user in self.acl_users if not acl_user.deleted] fields = {'acl_id': self.id, 'container_id': self.container_id, 'operation': self.operation, 'project_access': self.project_access} if users: fields['users'] = users return fields class SecretACLUser(BASE, ModelBase): """Stores user id for a secret ACL. This class provides way to store list of users associated with a specific ACL operation. SecretACLUser deletes are not soft-deletes. """ __tablename__ = 'secret_acl_users' acl_id = sa.Column(sa.String(36), sa.ForeignKey('secret_acls.id'), index=True, nullable=False) user_id = sa.Column(sa.String(255), nullable=False) __table_args__ = (sa.UniqueConstraint( 'acl_id', 'user_id', name='_secret_acl_user_uc'),) def __init__(self, acl_id, user_id): """Creates secret ACL user entity.""" super(SecretACLUser, self).__init__() msg = u._("Must supply non-None {0} argument for SecretACLUser entry.") self.acl_id = acl_id if user_id is None: raise exception.MissingArgumentError(msg.format("user_id")) self.user_id = user_id self.status = States.ACTIVE def _do_extra_dict_fields(self): """Sub-class hook method: return dict of fields.""" return {'acl_id': self.acl_id, 'user_id': self.user_id} class ContainerACLUser(BASE, ModelBase): """Stores user id for a container ACL. This class provides way to store list of users associated with a specific ACL operation. ContainerACLUser deletes are not soft-deletes. 
""" __tablename__ = 'container_acl_users' acl_id = sa.Column(sa.String(36), sa.ForeignKey('container_acls.id'), index=True, nullable=False) user_id = sa.Column(sa.String(255), nullable=False) __table_args__ = (sa.UniqueConstraint( 'acl_id', 'user_id', name='_container_acl_user_uc'),) def __init__(self, acl_id, user_id): """Creates container ACL user entity.""" super(ContainerACLUser, self).__init__() msg = u._("Must supply non-None {0} argument for ContainerACLUser " "entry.") self.acl_id = acl_id if user_id is None: raise exception.MissingArgumentError(msg.format("user_id")) self.user_id = user_id self.status = States.ACTIVE def _do_extra_dict_fields(self): """Sub-class hook method: return dict of fields.""" return {'acl_id': self.acl_id, 'user_id': self.user_id} class ProjectQuotas(BASE, ModelBase): """Stores Project Quotas. Class to define project specific resource quotas. Project quota deletes are not soft-deletes. """ __tablename__ = 'project_quotas' project_id = sa.Column( sa.String(36), sa.ForeignKey('projects.id', name='project_quotas_fk'), index=True, nullable=False) secrets = sa.Column(sa.Integer, nullable=True) orders = sa.Column(sa.Integer, nullable=True) containers = sa.Column(sa.Integer, nullable=True) consumers = sa.Column(sa.Integer, nullable=True) cas = sa.Column(sa.Integer, nullable=True) def __init__(self, project_id=None, parsed_project_quotas=None): """Creates Project Quotas entity from a project and a dict. :param project_id: the internal id of the project with quotas :param parsed_project_quotas: a dict with the keys matching the resources for which quotas are to be set, and the values containing the quota value to be set for this project and that resource. :return: None """ super(ProjectQuotas, self).__init__() msg = u._("Must supply non-None {0} argument for ProjectQuotas entry.") if project_id is None: raise exception.MissingArgumentError(msg.format("project_id")) self.project_id = project_id if parsed_project_quotas is None: self.secrets = None self.orders = None self.containers = None self.consumers = None self.cas = None else: self.secrets = parsed_project_quotas.get('secrets') self.orders = parsed_project_quotas.get('orders') self.containers = parsed_project_quotas.get('containers') self.consumers = parsed_project_quotas.get('consumers') self.cas = parsed_project_quotas.get('cas') def _do_extra_dict_fields(self): """Sub-class hook method: return dict of fields.""" ret = { 'project_id': self.project_id, } if self.secrets: ret['secrets'] = self.secrets if self.orders: ret['orders'] = self.orders if self.containers: ret['containers'] = self.containers if self.consumers: ret['consumers'] = self.consumers if self.cas: ret['cas'] = self.cas return ret class SecretStores(BASE, ModelBase): """List of secret stores defined via service configuration. This class provides a list of secret stores entities with their respective secret store plugin and crypto plugin names. SecretStores deletes are NOT soft-deletes. 
""" __tablename__ = 'secret_stores' store_plugin = sa.Column(sa.String(255), nullable=False) crypto_plugin = sa.Column(sa.String(255), nullable=True) global_default = sa.Column(sa.Boolean, nullable=False, default=False) name = sa.Column(sa.String(255), nullable=False) __table_args__ = (sa.UniqueConstraint( 'store_plugin', 'crypto_plugin', name='_secret_stores_plugin_names_uc'), sa.UniqueConstraint('name', name='_secret_stores_name_uc'),) def __init__(self, name, store_plugin, crypto_plugin=None, global_default=None): """Creates secret store entity.""" super(SecretStores, self).__init__() msg = u._("Must supply non-Blank {0} argument for SecretStores entry.") if not name: raise exception.MissingArgumentError(msg.format("name")) if not store_plugin: raise exception.MissingArgumentError(msg.format("store_plugin")) self.store_plugin = store_plugin self.name = name self.crypto_plugin = crypto_plugin if global_default is not None: self.global_default = global_default self.status = States.ACTIVE def _do_extra_dict_fields(self): """Sub-class hook method: return dict of fields.""" return {'secret_store_id': self.id, 'store_plugin': self.store_plugin, 'crypto_plugin': self.crypto_plugin, 'global_default': self.global_default, 'name': self.name} class ProjectSecretStore(BASE, ModelBase): """Stores secret store to be used for new project secrets. This class maintains secret store and project mapping so that new project secret entries uses it as plugin backend. ProjectSecretStores deletes are NOT soft-deletes. """ __tablename__ = 'project_secret_store' secret_store_id = sa.Column(sa.String(36), sa.ForeignKey('secret_stores.id'), index=True, nullable=False) project_id = sa.Column(sa.String(36), sa.ForeignKey('projects.id'), index=True, nullable=False) secret_store = orm.relationship("SecretStores", backref="project_store") project = orm.relationship('Project', backref=orm.backref('preferred_secret_store')) __table_args__ = (sa.UniqueConstraint( 'project_id', name='_project_secret_store_project_uc'),) def __init__(self, project_id, secret_store_id): """Creates project secret store mapping entity.""" super(ProjectSecretStore, self).__init__() msg = u._("Must supply non-None {0} argument for ProjectSecretStore " " entry.") if not project_id: raise exception.MissingArgumentError(msg.format("project_id")) self.project_id = project_id if not secret_store_id: raise exception.MissingArgumentError(msg.format("secret_store_id")) self.secret_store_id = secret_store_id self.status = States.ACTIVE def _do_extra_dict_fields(self): """Sub-class hook method: return dict of fields.""" return {'secret_store_id': self.secret_store_id, 'project_id': self.project_id} barbican-6.0.0/barbican/model/sync.py0000666000175100017510000000404313245511001017464 0ustar zuulzuul00000000000000# Copyright (c) 2018 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from barbican.common import config from barbican.model import repositories as repo from oslo_log import log # Import and configure logging. 
CONF = config.CONF log.setup(CONF, 'barbican') LOG = log.getLogger(__name__) def sync_secret_stores(sql_url, verbose, log_file): """Command to sync secret stores table with config . :param sql_url: sql connection string to connect to a database :param verbose: If True, log and print more information :param log_file: If set, override the log_file configured """ if verbose: # The verbose flag prints out log events to the screen, otherwise # the log events will only go to the log file CONF.set_override('debug', True) if log_file: CONF.set_override('log_file', log_file) LOG.info("Syncing the secret_stores table with barbican.conf") log.setup(CONF, 'barbican') try: if sql_url: CONF.set_override('sql_connection', sql_url) repo.setup_database_engine_and_factory( initialize_secret_stores=True) repo.commit() except Exception as ex: LOG.exception('Failed to sync secret_stores table.') repo.rollback() raise ex finally: if verbose: CONF.clear_override('debug') if log_file: CONF.clear_override('log_file') repo.clear() if sql_url: CONF.clear_override('sql_connection') log.setup(CONF, 'barbican') # reset the overrides barbican-6.0.0/barbican/model/__init__.py0000666000175100017510000000000013245511001020234 0ustar zuulzuul00000000000000barbican-6.0.0/barbican/model/repositories.py0000666000175100017510000026343213245511001021250 0ustar zuulzuul00000000000000# Copyright (c) 2013-2014 Rackspace, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """ Defines interface for DB access that Resource controllers may reference TODO: The top part of this file was 'borrowed' from Glance, but seems quite intense for sqlalchemy, and maybe could be simplified. """ import logging import re import sys import time from oslo_db import exception as db_exc from oslo_db.sqlalchemy import session from oslo_utils import timeutils from oslo_utils import uuidutils import sqlalchemy from sqlalchemy import func as sa_func from sqlalchemy import or_ import sqlalchemy.orm as sa_orm from barbican.common import config from barbican.common import exception from barbican.common import utils from barbican import i18n as u from barbican.model.migration import commands from barbican.model import models LOG = utils.getLogger(__name__) _ENGINE = None _SESSION_FACTORY = None BASE = models.BASE sa_logger = None # Singleton repository references, instantiated via get_xxxx_repository() # functions below. Please keep this list in alphabetical order. 
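# For reference, sync_secret_stores() in sync.py above is intended to
# be driven through the 'barbican-manage' utility; a direct call would
# look roughly like this (illustrative values only):
#
#     from barbican.model import sync
#
#     sync.sync_secret_stores(
#         sql_url='mysql+pymysql://barbican:password@localhost/barbican',
#         verbose=True,
#         log_file='/var/log/barbican/sync.log')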
_CA_REPOSITORY = None _CONTAINER_ACL_REPOSITORY = None _CONTAINER_CONSUMER_REPOSITORY = None _CONTAINER_REPOSITORY = None _CONTAINER_SECRET_REPOSITORY = None _ENCRYPTED_DATUM_REPOSITORY = None _KEK_DATUM_REPOSITORY = None _ORDER_PLUGIN_META_REPOSITORY = None _ORDER_BARBICAN_META_REPOSITORY = None _ORDER_REPOSITORY = None _ORDER_RETRY_TASK_REPOSITORY = None _PREFERRED_CA_REPOSITORY = None _PROJECT_REPOSITORY = None _PROJECT_CA_REPOSITORY = None _PROJECT_QUOTAS_REPOSITORY = None _SECRET_ACL_REPOSITORY = None _SECRET_META_REPOSITORY = None _SECRET_USER_META_REPOSITORY = None _SECRET_REPOSITORY = None _TRANSPORT_KEY_REPOSITORY = None _SECRET_STORES_REPOSITORY = None _PROJECT_SECRET_STORE_REPOSITORY = None CONF = config.CONF def hard_reset(): """Performs a hard reset of database resources, used for unit testing.""" # TODO(jvrbanac): Remove this as soon as we improve our unit testing # to not require this. global _ENGINE, _SESSION_FACTORY if _ENGINE: _ENGINE.dispose() _ENGINE = None _SESSION_FACTORY = None # Make sure we reinitialize the engine and session factory setup_database_engine_and_factory() def setup_database_engine_and_factory(initialize_secret_stores=False): global sa_logger, _SESSION_FACTORY, _ENGINE LOG.info('Setting up database engine and session factory') if CONF.debug: sa_logger = logging.getLogger('sqlalchemy.engine') sa_logger.setLevel(logging.DEBUG) if CONF.sql_pool_logging: pool_logger = logging.getLogger('sqlalchemy.pool') pool_logger.setLevel(logging.DEBUG) _ENGINE = _get_engine(_ENGINE) # Utilize SQLAlchemy's scoped_session to ensure that we only have one # session instance per thread. session_maker = sa_orm.sessionmaker(bind=_ENGINE) _SESSION_FACTORY = sqlalchemy.orm.scoped_session(session_maker) if initialize_secret_stores: _initialize_secret_stores_data() def start(): """Start for read-write requests placeholder Typically performed at the start of a request cycle, say for POST or PUT requests. """ pass def start_read_only(): """Start for read-only requests placeholder Typically performed at the start of a request cycle, say for GET or HEAD requests. """ pass def commit(): """Commit session state so far to the database. Typically performed at the end of a request cycle. """ get_session().commit() def rollback(): """Rollback session state so far. Typically performed when the request cycle raises an Exception. """ get_session().rollback() def clear(): """Dispose of this session, releases db resources. Typically performed at the end of a request cycle, after a commit() or rollback(). """ if _SESSION_FACTORY: # not initialized in some unit test _SESSION_FACTORY.remove() def get_session(): """Helper method to grab session.""" return _SESSION_FACTORY() def _get_engine(engine): if not engine: connection = CONF.sql_connection if not connection: raise exception.BarbicanException( u._('No SQL connection configured')) # TODO(jfwood): # connection_dict = sqlalchemy.engine.url.make_url(_CONNECTION) engine_args = { 'idle_timeout': CONF.sql_idle_timeout} if CONF.sql_pool_size: engine_args['max_pool_size'] = CONF.sql_pool_size if CONF.sql_pool_max_overflow: engine_args['max_overflow'] = CONF.sql_pool_max_overflow db_connection = None try: engine = _create_engine(connection, **engine_args) db_connection = engine.connect() except Exception as err: msg = u._("Error configuring registry database with supplied " "sql_connection. 
Got error: {error}").format(error=err) LOG.exception(msg) raise exception.BarbicanException(msg) finally: if db_connection: db_connection.close() if CONF.db_auto_create: meta = sqlalchemy.MetaData() meta.reflect(bind=engine) tables = meta.tables _auto_generate_tables(engine, tables) else: LOG.info('Not auto-creating barbican registry DB') return engine def _initialize_secret_stores_data(): """Initializes secret stores data in database. This logic is executed only when database engine and factory is built. Secret store get_manager internally reads secret store plugin configuration from service configuration and saves it in secret_stores table in database. """ if utils.is_multiple_backends_enabled(): from barbican.plugin.interface import secret_store secret_store.get_manager() def is_db_connection_error(args): """Return True if error in connecting to db.""" # NOTE(adam_g): This is currently MySQL specific and needs to be extended # to support Postgres and others. conn_err_codes = ('2002', '2003', '2006') for err_code in conn_err_codes: if args.find(err_code) != -1: return True return False def _create_engine(connection, **engine_args): LOG.debug('Sql connection: please check "sql_connection" property in ' 'barbican configuration file; Args: %s', engine_args) engine = session.create_engine(connection, **engine_args) # TODO(jfwood): if 'mysql' in connection_dict.drivername: # TODO(jfwood): sqlalchemy.event.listen(_ENGINE, 'checkout', # TODO(jfwood): ping_listener) # Wrap the engine's connect method with a retry decorator. engine.connect = wrap_db_error(engine.connect) return engine def _auto_generate_tables(engine, tables): if tables and 'alembic_version' in tables: # Upgrade the database to the latest version. LOG.info('Updating schema to latest version') commands.upgrade() else: # Create database tables from our models. LOG.info('Auto-creating barbican registry DB') models.BASE.metadata.create_all(engine) # Sync the alembic version 'head' with current models. commands.stamp() def wrap_db_error(f): """Retry DB connection. Copied from nova and modified.""" def _wrap(*args, **kwargs): try: return f(*args, **kwargs) except sqlalchemy.exc.OperationalError as e: if not is_db_connection_error(e.args[0]): raise remaining_attempts = CONF.sql_max_retries while True: LOG.warning('SQL connection failed. %d attempts left.', remaining_attempts) remaining_attempts -= 1 time.sleep(CONF.sql_retry_interval) try: return f(*args, **kwargs) except sqlalchemy.exc.OperationalError as e: if (remaining_attempts <= 0 or not is_db_connection_error(e.args[0])): raise except sqlalchemy.exc.DBAPIError: raise except sqlalchemy.exc.DBAPIError: raise _wrap.__name__ = f.__name__ return _wrap def clean_paging_values(offset_arg=0, limit_arg=CONF.default_limit_paging): """Cleans and safely limits raw paging offset/limit values.""" offset_arg = offset_arg or 0 limit_arg = limit_arg or CONF.default_limit_paging try: offset = int(offset_arg) if offset < 0: offset = 0 if offset > sys.maxsize: offset = 0 except ValueError: offset = 0 try: limit = int(limit_arg) if limit < 1: limit = 1 if limit > CONF.max_limit_paging: limit = CONF.max_limit_paging except ValueError: limit = CONF.default_limit_paging LOG.debug("Clean paging values limit=%(limit)s, offset=%(offset)s" % {'limit': limit, 'offset': offset}) return offset, limit def delete_all_project_resources(project_id): """Logic to cleanup all project resources. 
This cleanup uses same alchemy session to perform all db operations as a transaction and will commit only when all db operations are performed without error. """ session = get_session() container_repo = get_container_repository() container_repo.delete_project_entities( project_id, suppress_exception=False, session=session) # secret children SecretStoreMetadatum, EncryptedDatum # and container_secrets are deleted as part of secret delete secret_repo = get_secret_repository() secret_repo.delete_project_entities( project_id, suppress_exception=False, session=session) kek_repo = get_kek_datum_repository() kek_repo.delete_project_entities( project_id, suppress_exception=False, session=session) project_repo = get_project_repository() project_repo.delete_project_entities( project_id, suppress_exception=False, session=session) class BaseRepo(object): """Base repository for the barbican entities. This class provides template methods that allow sub-classes to hook specific functionality as needed. Clients access instances of this class via singletons, therefore implementations should be stateless aside from configuration. """ def get_session(self, session=None): LOG.debug("Getting session...") return session or get_session() def get(self, entity_id, external_project_id=None, force_show_deleted=False, suppress_exception=False, session=None): """Get an entity or raise if it does not exist.""" session = self.get_session(session) try: query = self._do_build_get_query(entity_id, external_project_id, session) # filter out deleted entities if requested if not force_show_deleted: query = query.filter_by(deleted=False) entity = query.one() except sa_orm.exc.NoResultFound: LOG.exception("Not found for %s", entity_id) entity = None if not suppress_exception: _raise_entity_not_found(self._do_entity_name(), entity_id) return entity def create_from(self, entity, session=None): """Sub-class hook: create from entity.""" if not entity: msg = u._( "Must supply non-None {entity_name}." ).format(entity_name=self._do_entity_name()) raise exception.Invalid(msg) if entity.id: msg = u._( "Must supply {entity_name} with id=None (i.e. new entity)." ).format(entity_name=self._do_entity_name()) raise exception.Invalid(msg) LOG.debug("Begin create from...") session = self.get_session(session) start = time.time() # DEBUG # Validate the attributes before we go any further. From my # (unknown Glance developer) investigation, the @validates # decorator does not validate # on new records, only on existing records, which is, well, # idiotic. self._do_validate(entity.to_dict()) try: LOG.debug("Saving entity...") entity.save(session=session) except db_exc.DBDuplicateEntry as e: session.rollback() LOG.exception('Problem saving entity for create') error_msg = re.sub('[()]', '', str(e.args)) raise exception.ConstraintCheck(error=error_msg) LOG.debug('Elapsed repo ' 'create secret:%s', (time.time() - start)) # DEBUG return entity def save(self, entity): """Saves the state of the entity.""" entity.updated_at = timeutils.utcnow() # Validate the attributes before we go any further. From my # (unknown Glance developer) investigation, the @validates # decorator does not validate # on new records, only on existing records, which is, well, # idiotic. 
        self._do_validate(entity.to_dict())

        entity.save()

    def delete_entity_by_id(self, entity_id, external_project_id,
                            session=None):
        """Remove the entity by its ID."""
        session = self.get_session(session)

        entity = self.get(entity_id=entity_id,
                          external_project_id=external_project_id,
                          session=session)

        entity.delete(session=session)

    def _do_entity_name(self):
        """Sub-class hook: return entity name, such as for debugging."""
        return "Entity"

    def _do_build_get_query(self, entity_id, external_project_id, session):
        """Sub-class hook: build a retrieve query."""
        return None

    def _do_convert_values(self, values):
        """Sub-class hook: convert text-based values to target types

        This is specifically for database values.
        """
        pass

    def _do_validate(self, values):
        """Sub-class hook: validate values.

        Validates the incoming data and raises an Invalid exception if
        anything is out of order.

        :param values: Mapping of entity metadata to check
        """
        status = values.get('status', None)
        if not status:
            # TODO(jfwood): I18n this!
            msg = u._("{entity_name} status is required.").format(
                entity_name=self._do_entity_name())
            raise exception.Invalid(msg)

        if not models.States.is_valid(status):
            msg = u._("Invalid status '{status}' for {entity_name}.").format(
                status=status, entity_name=self._do_entity_name())
            raise exception.Invalid(msg)

        return values

    def _update_values(self, entity_ref, values):
        for k in values:
            if getattr(entity_ref, k) != values[k]:
                setattr(entity_ref, k, values[k])

    def _build_get_project_entities_query(self, project_id, session):
        """Sub-class hook: build a query to retrieve entities for a project.

        :param project_id: id of barbican project entity
        :param session: existing db session reference.
        :returns: A query object for getting all project related entities

        This will filter out deleted entities if present.
        """
        msg = u._(
            "{entity_name} is missing query build method for get "
            "project entities.").format(
                entity_name=self._do_entity_name())
        raise NotImplementedError(msg)

    def get_project_entities(self, project_id, session=None):
        """Gets entities associated with a given project.

        :param project_id: id of barbican project entity
        :param session: existing db session reference. If None, gets session.
        :returns: list of matching entities found, otherwise an empty list
                  if no entity exists for the given project.

        Sub-class should implement `_build_get_project_entities_query`
        function to fetch those related entities, otherwise
        NotImplementedError is raised on its usage.
        """
        session = self.get_session(session)
        query = self._build_get_project_entities_query(project_id, session)
        if query:
            return query.all()
        else:
            return []

    def get_count(self, project_id, session=None):
        """Gets count of entities associated with a given project

        :param project_id: id of barbican project entity
        :param session: existing db session reference. If None, gets session.
        :return: a number, 0 or greater

        Sub-class should implement `_build_get_project_entities_query`
        function to count those related entities, otherwise
        NotImplementedError is raised on its usage.
        """
        session = self.get_session(session)
        query = self._build_get_project_entities_query(project_id, session)
        if query:
            return query.count()
        else:
            return 0

    def delete_project_entities(self, project_id,
                                suppress_exception=False,
                                session=None):
        """Deletes entities for a given project.

        :param project_id: id of barbican project entity
        :param suppress_exception: Pass True to suppress the exception
        :param session: existing db session reference. If None, gets session.
Sub-class should implement `_build_get_project_entities_query` function to delete related entities otherwise it would raise NotImplementedError on its usage. """ session = self.get_session(session) query = self._build_get_project_entities_query(project_id, session=session) try: # query cannot be None as related repo class is expected to # implement it otherwise error is raised in build query call for entity in query: # Its a soft delete so its more like entity update entity.delete(session=session) except sqlalchemy.exc.SQLAlchemyError: LOG.exception('Problem finding project related entity to delete') if not suppress_exception: raise exception.BarbicanException(u._('Error deleting project ' 'entities for ' 'project_id=%s'), project_id) class ProjectRepo(BaseRepo): """Repository for the Project entity.""" def _do_entity_name(self): """Sub-class hook: return entity name, such as for debugging.""" return "Project" def _do_build_get_query(self, entity_id, external_project_id, session): """Sub-class hook: build a retrieve query.""" return session.query(models.Project).filter_by(id=entity_id) def _do_validate(self, values): """Sub-class hook: validate values.""" pass def find_by_external_project_id(self, external_project_id, suppress_exception=False, session=None): session = self.get_session(session) try: query = session.query(models.Project) query = query.filter_by(external_id=external_project_id) entity = query.one() except sa_orm.exc.NoResultFound: entity = None if not suppress_exception: LOG.exception("Problem getting Project %s", external_project_id) raise exception.NotFound(u._( "No {entity_name} found with keystone-ID {id}").format( entity_name=self._do_entity_name(), id=external_project_id)) return entity def _build_get_project_entities_query(self, project_id, session): """Builds query for retrieving project for given id.""" query = session.query(models.Project) return query.filter_by(id=project_id).filter_by(deleted=False) class SecretRepo(BaseRepo): """Repository for the Secret entity.""" def get_secret_list(self, external_project_id, offset_arg=None, limit_arg=None, name=None, alg=None, mode=None, bits=0, secret_type=None, suppress_exception=False, session=None, acl_only=None, user_id=None, created=None, updated=None, expiration=None, sort=None): """Returns a list of secrets The list is scoped to secrets that are associated with the external_project_id (e.g. Keystone Project ID), and filtered using any provided filters. 
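
        An illustrative call (editorial sketch; the ids and filter values
        below are placeholders)::

            secrets, offset, limit, total = secret_repo.get_secret_list(
                external_project_id='730df599deadbeef',
                name='my-secret%',
                created='gte:2018-01-01T00:00:00Z',
                sort='created:desc')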
""" offset, limit = clean_paging_values(offset_arg, limit_arg) session = self.get_session(session) utcnow = timeutils.utcnow() query = session.query(models.Secret) query = query.filter_by(deleted=False) query = query.filter(or_(models.Secret.expiration.is_(None), models.Secret.expiration > utcnow)) if name: query = query.filter(models.Secret.name.like(name)) if alg: query = query.filter(models.Secret.algorithm.like(alg)) if mode: query = query.filter(models.Secret.mode.like(mode)) if bits > 0: query = query.filter(models.Secret.bit_length == bits) if secret_type: query = query.filter(models.Secret.secret_type == secret_type) if created: query = self._build_date_filter_query(query, 'created_at', created) if updated: query = self._build_date_filter_query(query, 'updated_at', updated) if expiration: query = self._build_date_filter_query( query, 'expiration', expiration ) else: query = query.filter(or_(models.Secret.expiration.is_(None), models.Secret.expiration > utcnow)) if sort: query = self._build_sort_filter_query(query, sort) if acl_only and acl_only.lower() == 'true' and user_id: query = query.join(models.SecretACL) query = query.join(models.SecretACLUser) query = query.filter(models.SecretACLUser.user_id == user_id) else: query = query.join(models.Project) query = query.filter( models.Project.external_id == external_project_id) total = query.count() end_offset = offset + limit LOG.debug('Retrieving from %s to %s', offset, end_offset) query = query.limit(limit).offset(offset) entities = query.all() LOG.debug('Number entities retrieved: %s out of %s', len(entities), total) if total <= 0 and not suppress_exception: _raise_no_entities_found(self._do_entity_name()) return entities, offset, limit, total def _do_entity_name(self): """Sub-class hook: return entity name, such as for debugging.""" return "Secret" def _do_build_get_query(self, entity_id, external_project_id, session): """Sub-class hook: build a retrieve query.""" utcnow = timeutils.utcnow() expiration_filter = or_(models.Secret.expiration.is_(None), models.Secret.expiration > utcnow) query = session.query(models.Secret) query = query.filter_by(id=entity_id, deleted=False) query = query.filter(expiration_filter) query = query.join(models.Project) query = query.filter(models.Project.external_id == external_project_id) return query def _do_validate(self, values): """Sub-class hook: validate values.""" pass def _build_get_project_entities_query(self, project_id, session): """Builds query for retrieving Secrets associated with a given project :param project_id: id of barbican project entity :param session: existing db session reference. 
""" utcnow = timeutils.utcnow() expiration_filter = or_(models.Secret.expiration.is_(None), models.Secret.expiration > utcnow) query = session.query(models.Secret).filter_by(deleted=False) query = query.filter(models.Secret.project_id == project_id) query = query.filter(expiration_filter) return query def _build_date_filter_query(self, query, attribute, date_filters): """Parses date_filters to apply each filter to the given query :param query: query object to apply filters to :param attribute: name of the model attribute to be filtered :param date_filters: comma separated string of date filters to apply """ parse = timeutils.parse_isotime for filter in date_filters.split(','): if filter.startswith('lte:'): isotime = filter[4:] query = query.filter(or_( getattr(models.Secret, attribute) < parse(isotime), getattr(models.Secret, attribute) == parse(isotime)) ) elif filter.startswith('lt:'): isotime = filter[3:] query = query.filter( getattr(models.Secret, attribute) < parse(isotime) ) elif filter.startswith('gte:'): isotime = filter[4:] query = query.filter(or_( getattr(models.Secret, attribute) > parse(isotime), getattr(models.Secret, attribute) == parse(isotime)) ) elif filter.startswith('gt:'): isotime = filter[3:] query = query.filter( getattr(models.Secret, attribute) > parse(isotime) ) else: query = query.filter( getattr(models.Secret, attribute) == parse(filter) ) return query def _build_sort_filter_query(self, query, sort_filters): """Parses sort_filters to order the query""" key_to_column_map = { 'created': 'created_at', 'updated': 'updated_at' } ordering = list() for sort in sort_filters.split(','): if ':' in sort: key, direction = sort.split(':') else: key, direction = sort, 'asc' ordering.append( getattr( getattr(models.Secret, key_to_column_map.get(key, key)), direction )() ) return query.order_by(*ordering) def get_secret_by_id(self, entity_id, suppress_exception=False, session=None): """Gets secret by its entity id without project id check.""" session = self.get_session(session) try: utcnow = timeutils.utcnow() expiration_filter = or_(models.Secret.expiration.is_(None), models.Secret.expiration > utcnow) query = session.query(models.Secret) query = query.filter_by(id=entity_id, deleted=False) query = query.filter(expiration_filter) entity = query.one() except sa_orm.exc.NoResultFound: entity = None if not suppress_exception: LOG.exception("Problem getting secret %s", entity_id) raise exception.NotFound(u._( "No secret found with secret-ID {id}").format( entity_name=self._do_entity_name(), id=entity_id)) return entity class EncryptedDatumRepo(BaseRepo): """Repository for the EncryptedDatum entity Stores encrypted information on behalf of a Secret. """ def _do_entity_name(self): """Sub-class hook: return entity name, such as for debugging.""" return "EncryptedDatum" def _do_build_get_query(self, entity_id, external_project_id, session): """Sub-class hook: build a retrieve query.""" return session.query(models.EncryptedDatum).filter_by(id=entity_id) def _do_validate(self, values): """Sub-class hook: validate values.""" pass class SecretStoreMetadatumRepo(BaseRepo): """Repository for the SecretStoreMetadatum entity Stores key/value information on behalf of a Secret. """ def save(self, metadata, secret_model): """Saves the specified metadata for the secret. :raises NotFound if entity does not exist. 
""" now = timeutils.utcnow() for k, v in metadata.items(): meta_model = models.SecretStoreMetadatum(k, v) meta_model.updated_at = now meta_model.secret = secret_model meta_model.save() def get_metadata_for_secret(self, secret_id): """Returns a dict of SecretStoreMetadatum instances.""" session = get_session() query = session.query(models.SecretStoreMetadatum) query = query.filter_by(deleted=False) query = query.filter( models.SecretStoreMetadatum.secret_id == secret_id) metadata = query.all() return {m.key: m.value for m in metadata} def _do_entity_name(self): """Sub-class hook: return entity name, such as for debugging.""" return "SecretStoreMetadatum" def _do_build_get_query(self, entity_id, external_project_id, session): """Sub-class hook: build a retrieve query.""" query = session.query(models.SecretStoreMetadatum) return query.filter_by(id=entity_id) def _do_validate(self, values): """Sub-class hook: validate values.""" pass class SecretUserMetadatumRepo(BaseRepo): """Repository for the SecretUserMetadatum entity Stores key/value information on behalf of a Secret. """ def create_replace_user_metadata(self, secret_id, metadata): """Creates or replaces the specified metadata for the secret.""" now = timeutils.utcnow() session = get_session() query = session.query(models.SecretUserMetadatum) query = query.filter_by(secret_id=secret_id) query.delete() for k, v in metadata.items(): meta_model = models.SecretUserMetadatum(k, v) meta_model.secret_id = secret_id meta_model.updated_at = now meta_model.save(session=session) def get_metadata_for_secret(self, secret_id): """Returns a dict of SecretUserMetadatum instances.""" session = get_session() query = session.query(models.SecretUserMetadatum) query = query.filter_by(deleted=False) query = query.filter( models.SecretUserMetadatum.secret_id == secret_id) metadata = query.all() return {m.key: m.value for m in metadata} def create_replace_user_metadatum(self, secret_id, key, value): now = timeutils.utcnow() session = get_session() query = session.query(models.SecretUserMetadatum) query = query.filter_by(secret_id=secret_id) query = query.filter_by(key=key) query.delete() meta_model = models.SecretUserMetadatum(key, value) meta_model.secret_id = secret_id meta_model.updated_at = now meta_model.save(session=session) def delete_metadatum(self, secret_id, key): """Removes a key from a SecretUserMetadatum instances.""" session = get_session() query = session.query(models.SecretUserMetadatum) query = query.filter_by(secret_id=secret_id) query = query.filter_by(key=key) query.delete() def _do_entity_name(self): """Sub-class hook: return entity name, such as for debugging.""" return "SecretUserMetadatum" def _do_build_get_query(self, entity_id, external_project_id, session): """Sub-class hook: build a retrieve query.""" query = session.query(models.SecretUserMetadatum) return query.filter_by(id=entity_id) def _do_validate(self, values): """Sub-class hook: validate values.""" pass class KEKDatumRepo(BaseRepo): """Repository for the KEKDatum entity Stores key encryption key (KEK) metadata used by crypto plugins to encrypt/decrypt secrets. """ def find_or_create_kek_datum(self, project, plugin_name, suppress_exception=False, session=None): """Find or create a KEK datum instance.""" if not plugin_name: raise exception.BarbicanException( u._('Tried to register crypto plugin with null or empty ' 'name.')) kek_datum = None session = self.get_session(session) # TODO(jfwood): Reverse this...attempt insert first, then get on fail. 
try: query = session.query(models.KEKDatum) query = query.filter_by(project_id=project.id, plugin_name=plugin_name, active=True, deleted=False) kek_datum = query.one() except sa_orm.exc.NoResultFound: kek_datum = models.KEKDatum() kek_datum.kek_label = "project-{0}-key-{1}".format( project.external_id, uuidutils.generate_uuid()) kek_datum.project_id = project.id kek_datum.plugin_name = plugin_name kek_datum.status = models.States.ACTIVE self.save(kek_datum) return kek_datum def _do_entity_name(self): """Sub-class hook: return entity name, such as for debugging.""" return "KEKDatum" def _do_build_get_query(self, entity_id, external_project_id, session): """Sub-class hook: build a retrieve query.""" return session.query(models.KEKDatum).filter_by(id=entity_id) def _do_validate(self, values): """Sub-class hook: validate values.""" pass def _build_get_project_entities_query(self, project_id, session): """Builds query for retrieving KEK Datum instance(s). The returned KEK Datum instance(s) are related to a given project. :param project_id: id of barbican project entity :param session: existing db session reference. """ return session.query(models.KEKDatum).filter_by( project_id=project_id).filter_by(deleted=False) class OrderRepo(BaseRepo): """Repository for the Order entity.""" def get_by_create_date(self, external_project_id, offset_arg=None, limit_arg=None, meta_arg=None, suppress_exception=False, session=None): """Returns a list of orders The list is ordered by the date they were created at and paged based on the offset and limit fields. :param external_project_id: The keystone id for the project. :param offset_arg: The entity number where the query result should start. :param limit_arg: The maximum amount of entities in the result set. :param meta_arg: Optional meta field used to filter results. :param suppress_exception: Whether NoResultFound exceptions should be suppressed. :param session: SQLAlchemy session object. :returns: Tuple consisting of (list_of_entities, offset, limit, total). """ offset, limit = clean_paging_values(offset_arg, limit_arg) session = self.get_session(session) query = session.query(models.Order) query = query.order_by(models.Order.created_at) query = query.filter_by(deleted=False) if meta_arg: query = query.filter(models.Order.meta.contains(meta_arg)) query = query.join(models.Project, models.Order.project) query = query.filter(models.Project.external_id == external_project_id) start = offset end = offset + limit LOG.debug('Retrieving from %s to %s', start, end) total = query.count() entities = query.offset(start).limit(limit).all() LOG.debug('Number entities retrieved: %s out of %s', len(entities), total ) if total <= 0 and not suppress_exception: _raise_no_entities_found(self._do_entity_name()) return entities, offset, limit, total def _do_entity_name(self): """Sub-class hook: return entity name, such as for debugging.""" return "Order" def _do_build_get_query(self, entity_id, external_project_id, session): """Sub-class hook: build a retrieve query.""" query = session.query(models.Order) query = query.filter_by(id=entity_id, deleted=False) query = query.join(models.Project, models.Order.project) query = query.filter(models.Project.external_id == external_project_id) return query def _do_validate(self, values): """Sub-class hook: validate values.""" pass def _build_get_project_entities_query(self, project_id, session): """Builds query for retrieving orders related to given project. 
:param project_id: id of barbican project entity :param session: existing db session reference. """ return session.query(models.Order).filter_by( project_id=project_id).filter_by(deleted=False) class OrderPluginMetadatumRepo(BaseRepo): """Repository for the OrderPluginMetadatum entity Stores key/value plugin information on behalf of an Order. """ def save(self, metadata, order_model): """Saves the specified metadata for the order. :raises NotFound if entity does not exist. """ now = timeutils.utcnow() session = get_session() for k, v in metadata.items(): meta_model = models.OrderPluginMetadatum(k, v) meta_model.updated_at = now meta_model.order = order_model meta_model.save(session=session) def get_metadata_for_order(self, order_id): """Returns a dict of OrderPluginMetadatum instances.""" session = get_session() try: query = session.query(models.OrderPluginMetadatum) query = query.filter_by(deleted=False) query = query.filter( models.OrderPluginMetadatum.order_id == order_id) metadata = query.all() except sa_orm.exc.NoResultFound: metadata = {} return {m.key: m.value for m in metadata} def _do_entity_name(self): """Sub-class hook: return entity name, such as for debugging.""" return "OrderPluginMetadatum" def _do_build_get_query(self, entity_id, external_project_id, session): """Sub-class hook: build a retrieve query.""" query = session.query(models.OrderPluginMetadatum) return query.filter_by(id=entity_id) def _do_validate(self, values): """Sub-class hook: validate values.""" pass class OrderBarbicanMetadatumRepo(BaseRepo): """Repository for the OrderBarbicanMetadatum entity Stores key/value plugin information on behalf of a Order. """ def save(self, metadata, order_model): """Saves the specified metadata for the order. :raises NotFound if entity does not exist. """ now = timeutils.utcnow() session = get_session() for k, v in metadata.items(): meta_model = models.OrderBarbicanMetadatum(k, v) meta_model.updated_at = now meta_model.order = order_model meta_model.save(session=session) def get_metadata_for_order(self, order_id): """Returns a dict of OrderBarbicanMetadatum instances.""" session = get_session() try: query = session.query(models.OrderBarbicanMetadatum) query = query.filter_by(deleted=False) query = query.filter( models.OrderBarbicanMetadatum.order_id == order_id) metadata = query.all() except sa_orm.exc.NoResultFound: metadata = {} return {m.key: m.value for m in metadata} def _do_entity_name(self): """Sub-class hook: return entity name, such as for debugging.""" return "OrderBarbicanMetadatum" def _do_build_get_query(self, entity_id, external_project_id, session): """Sub-class hook: build a retrieve query.""" query = session.query(models.OrderBarbicanMetadatum) return query.filter_by(id=entity_id) def _do_validate(self, values): """Sub-class hook: validate values.""" pass class OrderRetryTaskRepo(BaseRepo): """Repository for the OrderRetryTask entity.""" def get_by_create_date( self, only_at_or_before_this_date=None, offset_arg=None, limit_arg=None, suppress_exception=False, session=None): """Returns a list of order retry task entities The list is ordered by the date they were created at and paged based on the offset and limit fields. :param only_at_or_before_this_date: If specified, only entities at or before this date are returned. :param offset_arg: The entity number where the query result should start. :param limit_arg: The maximum amount of entities in the result set. :param suppress_exception: Whether NoResultFound exceptions should be suppressed. 
:param session: SQLAlchemy session object. :returns: Tuple consisting of (list_of_entities, offset, limit, total). """ offset, limit = clean_paging_values(offset_arg, limit_arg) session = self.get_session(session) query = session.query(models.OrderRetryTask) query = query.order_by(models.OrderRetryTask.created_at) query = query.filter_by(deleted=False) if only_at_or_before_this_date: query = query.filter( models.OrderRetryTask.retry_at <= only_at_or_before_this_date) start = offset end = offset + limit LOG.debug('Retrieving from %s to %s', start, end) total = query.count() entities = query.offset(start).limit(limit).all() LOG.debug('Number entities retrieved: %s out of %s', len(entities), total ) if total <= 0 and not suppress_exception: _raise_no_entities_found(self._do_entity_name()) return entities, offset, limit, total def _do_entity_name(self): """Sub-class hook: return entity name, such as for debugging.""" return "OrderRetryTask" def _do_build_get_query(self, entity_id, external_project_id, session): """Sub-class hook: build a retrieve query.""" query = session.query(models.OrderRetryTask) query = query.filter_by(id=entity_id, deleted=False) return query def _do_validate(self, values): """Sub-class hook: validate values.""" pass class ContainerRepo(BaseRepo): """Repository for the Container entity.""" def get_by_create_date(self, external_project_id, offset_arg=None, limit_arg=None, name_arg=None, suppress_exception=False, session=None): """Returns a list of containers The list is ordered by the date they were created at and paged based on the offset and limit fields. The external_project_id is external-to-Barbican value assigned to the project by Keystone. """ offset, limit = clean_paging_values(offset_arg, limit_arg) session = self.get_session(session) query = session.query(models.Container) query = query.order_by(models.Container.created_at) query = query.filter_by(deleted=False) if name_arg: query = query.filter(models.Container.name.like(name_arg)) query = query.join(models.Project, models.Container.project) query = query.filter(models.Project.external_id == external_project_id) start = offset end = offset + limit LOG.debug('Retrieving from %s to %s', start, end) total = query.count() entities = query.offset(start).limit(limit).all() LOG.debug('Number entities retrieved: %s out of %s', len(entities), total ) if total <= 0 and not suppress_exception: _raise_no_entities_found(self._do_entity_name()) return entities, offset, limit, total def _do_entity_name(self): """Sub-class hook: return entity name, such as for debugging.""" return "Container" def _do_build_get_query(self, entity_id, external_project_id, session): """Sub-class hook: build a retrieve query.""" query = session.query(models.Container) query = query.filter_by(id=entity_id, deleted=False) query = query.join(models.Project, models.Container.project) query = query.filter(models.Project.external_id == external_project_id) return query def _do_validate(self, values): """Sub-class hook: validate values.""" pass def _build_get_project_entities_query(self, project_id, session): """Builds query for retrieving container related to given project. :param project_id: id of barbican project entity :param session: existing db session reference. 
""" return session.query(models.Container).filter_by( deleted=False).filter_by(project_id=project_id) def get_container_by_id(self, entity_id, suppress_exception=False, session=None): """Gets container by its entity id without project id check.""" session = self.get_session(session) try: query = session.query(models.Container) query = query.filter_by(id=entity_id, deleted=False) entity = query.one() except sa_orm.exc.NoResultFound: entity = None if not suppress_exception: LOG.exception("Problem getting container %s", entity_id) raise exception.NotFound(u._( "No container found with container-ID {id}").format( entity_name=self._do_entity_name(), id=entity_id)) return entity class ContainerSecretRepo(BaseRepo): """Repository for the ContainerSecret entity.""" def _do_entity_name(self): """Sub-class hook: return entity name, such as for debugging.""" return "ContainerSecret" def _do_build_get_query(self, entity_id, external_project_id, session): """Sub-class hook: build a retrieve query.""" return session.query(models.ContainerSecret ).filter_by(id=entity_id) def _do_validate(self, values): """Sub-class hook: validate values.""" pass class ContainerConsumerRepo(BaseRepo): """Repository for the Service entity.""" def get_by_container_id(self, container_id, offset_arg=None, limit_arg=None, suppress_exception=False, session=None): """Returns a list of Consumers The list is ordered by the date they were created at and paged based on the offset and limit fields. """ offset, limit = clean_paging_values(offset_arg, limit_arg) session = self.get_session(session) query = session.query(models.ContainerConsumerMetadatum) query = query.order_by(models.ContainerConsumerMetadatum.name) query = query.filter_by(deleted=False) query = query.filter( models.ContainerConsumerMetadatum.container_id == container_id ) start = offset end = offset + limit LOG.debug('Retrieving from %s to %s', start, end) total = query.count() entities = query.offset(start).limit(limit).all() LOG.debug('Number entities retrieved: %s out of %s', len(entities), total ) if total <= 0 and not suppress_exception: _raise_no_entities_found(self._do_entity_name()) return entities, offset, limit, total def get_by_values(self, container_id, name, URL, suppress_exception=False, show_deleted=False, session=None): session = self.get_session(session) try: query = session.query(models.ContainerConsumerMetadatum) query = query.filter_by( container_id=container_id, name=name, URL=URL) if not show_deleted: query.filter_by(deleted=False) consumer = query.one() except sa_orm.exc.NoResultFound: consumer = None if not suppress_exception: raise exception.NotFound( u._("Could not find {entity_name}").format( entity_name=self._do_entity_name())) return consumer def create_or_update_from(self, new_consumer, container, session=None): session = self.get_session(session) try: container.updated_at = timeutils.utcnow() container.consumers.append(new_consumer) container.save(session=session) except db_exc.DBDuplicateEntry: session.rollback() # We know consumer already exists. 
# This operation is idempotent, so log this and move on LOG.debug("Consumer %s with URL %s already exists for " "container %s, continuing...", new_consumer.name, new_consumer.URL, new_consumer.container_id) # Get the existing entry and reuse it by clearing the deleted flags existing_consumer = self.get_by_values( new_consumer.container_id, new_consumer.name, new_consumer.URL, show_deleted=True) existing_consumer.deleted = False existing_consumer.deleted_at = None # We are not concerned about timing here -- set only, no reads existing_consumer.save() def _do_entity_name(self): """Sub-class hook: return entity name, such as for debugging.""" return "ContainerConsumer" def _do_build_get_query(self, entity_id, external_project_id, session): """Sub-class hook: build a retrieve query.""" query = session.query(models.ContainerConsumerMetadatum) return query.filter_by(id=entity_id, deleted=False) def _do_validate(self, values): """Sub-class hook: validate values.""" pass def _build_get_project_entities_query(self, project_id, session): """Builds query for retrieving consumers associated with given project :param project_id: id of barbican project entity :param session: existing db session reference. """ query = session.query( models.ContainerConsumerMetadatum).filter_by(deleted=False) query = query.filter( models.ContainerConsumerMetadatum.project_id == project_id) return query class TransportKeyRepo(BaseRepo): """Repository for the TransportKey entity Stores transport keys for wrapping the secret data to/from a barbican client. """ def _do_entity_name(self): """Sub-class hook: return entity name, such as for debugging.""" return "TransportKey" def get_by_create_date(self, plugin_name=None, offset_arg=None, limit_arg=None, suppress_exception=False, session=None): """Returns a list of transport keys The list is ordered from latest created first. The search accepts plugin_id as an optional parameter for the search. """ offset, limit = clean_paging_values(offset_arg, limit_arg) session = self.get_session(session) query = session.query(models.TransportKey) query = query.order_by(models.TransportKey.created_at) if plugin_name is not None: query = session.query(models.TransportKey) query = query.filter_by(deleted=False, plugin_name=plugin_name) else: query = query.filter_by(deleted=False) start = offset end = offset + limit LOG.debug('Retrieving from %s to %s', start, end) total = query.count() entities = query.offset(start).limit(limit).all() LOG.debug('Number of entities retrieved: %s out of %s', len(entities), total) if total <= 0 and not suppress_exception: _raise_no_entities_found(self._do_entity_name()) return entities, offset, limit, total def get_latest_transport_key(self, plugin_name, suppress_exception=False, session=None): """Returns the latest transport key for a given plugin.""" entity, offset, limit, total = self.get_by_create_date( plugin_name, offset_arg=0, limit_arg=1, suppress_exception=suppress_exception, session=session) return entity def _do_build_get_query(self, entity_id, external_project_id, session): """Sub-class hook: build a retrieve query.""" return session.query(models.TransportKey).filter_by(id=entity_id) def _do_validate(self, values): """Sub-class hook: validate values.""" pass class CertificateAuthorityRepo(BaseRepo): """Repository for the CertificateAuthority entity. CertificateAuthority entries are not soft delete. So there is no need to have deleted=False filter in queries. 
""" def get_by_create_date(self, offset_arg=None, limit_arg=None, plugin_name=None, plugin_ca_id=None, suppress_exception=False, session=None, show_expired=False, project_id=None, restrict_to_project_cas=False): """Returns a list of certificate authorities The returned certificate authorities are ordered by the date they were created and paged based on the offset and limit fields. """ offset, limit = clean_paging_values(offset_arg, limit_arg) session = self.get_session(session) if restrict_to_project_cas: # get both subCAs which have been defined for your project # (cas for which the ca.project_id == project_id) AND # project_cas which are defined for your project # (pca.project_id = project_id) query1 = session.query(models.CertificateAuthority) query1 = query1.filter( models.CertificateAuthority.project_id == project_id) query2 = session.query(models.CertificateAuthority) query2 = query2.join(models.ProjectCertificateAuthority) query2 = query2.filter( models.ProjectCertificateAuthority.project_id == project_id) query = query1.union(query2) else: # get both subcas that have been defined for your project # (cas for which ca.project_id == project_id) AND # all top-level CAs (ca.project_id == None) query = session.query(models.CertificateAuthority) query = query.filter(or_( models.CertificateAuthority.project_id == project_id, models.CertificateAuthority.project_id.is_(None) )) query = query.order_by(models.CertificateAuthority.created_at) query = query.filter_by(deleted=False) if not show_expired: utcnow = timeutils.utcnow() query = query.filter(or_( models.CertificateAuthority.expiration.is_(None), models.CertificateAuthority.expiration > utcnow)) if plugin_name: query = query.filter( models.CertificateAuthority.plugin_name.like(plugin_name)) if plugin_ca_id: query = query.filter( models.CertificateAuthority.plugin_ca_id.like(plugin_ca_id)) start = offset end = offset + limit LOG.debug('Retrieving from %s to %s', start, end) total = query.count() entities = query.offset(start).limit(limit).all() LOG.debug('Number entities retrieved: %s out of %s', len(entities), total ) if total <= 0 and not suppress_exception: _raise_no_entities_found(self._do_entity_name()) return entities, offset, limit, total def update_entity(self, old_ca, parsed_ca_in, session=None): """Updates CA entry and its sub-entries.""" parsed_ca = dict(parsed_ca_in) # these fields cannot be modified parsed_ca.pop('plugin_name', None) parsed_ca.pop('plugin_ca_id', None) expiration = parsed_ca.pop('expiration', None) expiration_iso = timeutils.parse_isotime(expiration.strip()) new_expiration = timeutils.normalize_time(expiration_iso) session = self.get_session(session) query = session.query(models.CertificateAuthority).filter_by( id=old_ca.id, deleted=False) entity = query.one() entity.expiration = new_expiration for k, v in entity.ca_meta.items(): if k not in parsed_ca.keys(): v.delete(session) for key in parsed_ca: if key not in entity.ca_meta.keys(): meta = models.CertificateAuthorityMetadatum( key, parsed_ca[key]) entity.ca_meta[key] = meta else: entity.ca_meta[key].value = parsed_ca[key] entity.save() return entity def _do_entity_name(self): """Sub-class hook: return entity name, such as for debugging.""" return "CertificateAuthority" def _do_build_get_query(self, entity_id, external_project_id, session): """Sub-class hook: build a retrieve query.""" utcnow = timeutils.utcnow() # TODO(jfwood): Performance? Is the many-to-many join needed? 
        expiration_filter = or_(
            models.CertificateAuthority.expiration.is_(None),
            models.CertificateAuthority.expiration > utcnow)

        query = session.query(models.CertificateAuthority)
        query = query.filter_by(id=entity_id, deleted=False)
        query = query.filter(expiration_filter)

        return query

    def _do_validate(self, values):
        """Sub-class hook: validate values."""
        pass

    def _build_get_project_entities_query(self, project_id, session):
        """Builds query for retrieving CA related to given project.

        :param project_id: id of barbican project entity
        :param session: existing db session reference.
        """
        return session.query(models.CertificateAuthority).filter_by(
            project_id=project_id).filter_by(deleted=False)


class CertificateAuthorityMetadatumRepo(BaseRepo):
    """Repository for the CertificateAuthorityMetadatum entity

    Stores key/value information on behalf of a CA.
    """

    def save(self, metadata, ca_model):
        """Saves the specified metadata for the CA.

        :raises NotFound if entity does not exist.
        """
        now = timeutils.utcnow()
        session = get_session()

        for k, v in metadata.items():
            meta_model = models.CertificateAuthorityMetadatum(k, v)
            meta_model.updated_at = now
            meta_model.ca = ca_model
            meta_model.save(session=session)

    def get_metadata_for_certificate_authority(self, ca_id):
        """Returns a dict of CertificateAuthorityMetadatum instances."""
        session = get_session()
        try:
            query = session.query(models.CertificateAuthorityMetadatum)
            query = query.filter_by(deleted=False)
            query = query.filter(
                models.CertificateAuthorityMetadatum.ca_id == ca_id)
            metadata = query.all()

        except sa_orm.exc.NoResultFound:
            metadata = dict()

        return {m.key: m.value for m in metadata}

    def _do_entity_name(self):
        """Sub-class hook: return entity name, such as for debugging."""
        return "CertificateAuthorityMetadatum"

    def _do_build_get_query(self, entity_id, external_project_id, session):
        """Sub-class hook: build a retrieve query."""
        query = session.query(models.CertificateAuthorityMetadatum)
        return query.filter_by(id=entity_id)

    def _do_validate(self, values):
        """Sub-class hook: validate values."""
        pass


class ProjectCertificateAuthorityRepo(BaseRepo):
    """Repository for the ProjectCertificateAuthority entity.

    ProjectCertificateAuthority entries are not soft delete. So there is no
    need to have deleted=False filter in queries.
    """

    def get_by_create_date(self, offset_arg=None, limit_arg=None,
                           project_id=None, ca_id=None,
                           suppress_exception=False, session=None):
        """Returns a list of project CAs

        The returned project CAs are ordered by the date they were created
        and paged based on the offset and limit fields.
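
        An illustrative call (editorial sketch; the id is a placeholder)::

            pcas, offset, limit, total = project_ca_repo.get_by_create_date(
                project_id='123', suppress_exception=True)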
""" offset, limit = clean_paging_values(offset_arg, limit_arg) session = self.get_session(session) query = session.query(models.ProjectCertificateAuthority) query = query.order_by(models.ProjectCertificateAuthority.created_at) query = query.filter_by(deleted=False) if project_id: query = query.filter( models.ProjectCertificateAuthority.project_id.like(project_id)) if ca_id: query = query.filter( models.ProjectCertificateAuthority.ca_id.like(ca_id)) start = offset end = offset + limit LOG.debug('Retrieving from %s to %s', start, end) total = query.count() entities = query.offset(start).limit(limit).all() LOG.debug('Number entities retrieved: %s out of %s', len(entities), total ) if total <= 0 and not suppress_exception: _raise_no_entities_found(self._do_entity_name()) return entities, offset, limit, total def _do_entity_name(self): """Sub-class hook: return entity name, such as for debugging.""" return "ProjectCertificateAuthority" def _do_build_get_query(self, entity_id, external_project_id, session): """Sub-class hook: build a retrieve query.""" return session.query(models.ProjectCertificateAuthority).filter_by( id=entity_id) def _do_validate(self, values): """Sub-class hook: validate values.""" pass def _build_get_project_entities_query(self, project_id, session): """Builds query for retrieving CA related to given project. :param project_id: id of barbican project entity :param session: existing db session reference. """ return session.query(models.ProjectCertificateAuthority).filter_by( project_id=project_id) class PreferredCertificateAuthorityRepo(BaseRepo): """Repository for the PreferredCertificateAuthority entity. PreferredCertificateAuthority entries are not soft delete. So there is no need to have deleted=False filter in queries. """ def get_by_create_date(self, offset_arg=None, limit_arg=None, project_id=None, ca_id=None, suppress_exception=False, session=None): """Returns a list of preferred CAs The returned CAs are ordered by the date they were created and paged based on the offset and limit fields. """ offset, limit = clean_paging_values(offset_arg, limit_arg) session = self.get_session(session) query = session.query(models.PreferredCertificateAuthority) query = query.order_by(models.PreferredCertificateAuthority.created_at) if project_id: query = query.filter( models.PreferredCertificateAuthority.project_id.like( project_id)) if ca_id: query = query.filter( models.PreferredCertificateAuthority.ca_id.like(ca_id)) start = offset end = offset + limit LOG.debug('Retrieving from %s to %s', start, end) total = query.count() entities = query.offset(start).limit(limit).all() LOG.debug('Number entities retrieved: %s out of %s', len(entities), total ) if total <= 0 and not suppress_exception: _raise_no_entities_found(self._do_entity_name()) return entities, offset, limit, total def create_or_update_by_project_id(self, project_id, ca_id, session=None): """Create or update preferred CA for a project by project_id. :param project_id: ID of project whose preferred CA will be saved :param ca_id: ID of preferred CA :param session: SQLAlchemy session object. 
:return: None """ session = self.get_session(session) query = session.query(models.PreferredCertificateAuthority) query = query.filter_by(project_id=project_id) try: entity = query.one() except sa_orm.exc.NoResultFound: self.create_from( models.PreferredCertificateAuthority(project_id, ca_id), session=session) else: entity.ca_id = ca_id entity.save(session) def _do_entity_name(self): """Sub-class hook: return entity name, such as for debugging.""" return "PreferredCertificateAuthority" def _do_build_get_query(self, entity_id, external_project_id, session): """Sub-class hook: build a retrieve query.""" return session.query(models.PreferredCertificateAuthority).filter_by( id=entity_id) def _do_validate(self, values): """Sub-class hook: validate values.""" pass def _build_get_project_entities_query(self, project_id, session): """Builds query for retrieving preferred CA related to given project. :param project_id: id of barbican project entity :param session: existing db session reference. """ return session.query(models.PreferredCertificateAuthority).filter_by( project_id=project_id) class SecretACLRepo(BaseRepo): """Repository for the SecretACL entity. There is no need for SecretACLUserRepo as none of logic access SecretACLUser (ACL user data) directly. Its always derived from SecretACL relationship. SecretACL and SecretACLUser data is not soft delete. So there is no need to have deleted=False filter in queries. """ def _do_entity_name(self): """Sub-class hook: return entity name, such as for debugging.""" return "SecretACL" def _do_build_get_query(self, entity_id, external_project_id, session): """Sub-class hook: build a retrieve query.""" query = session.query(models.SecretACL) query = query.filter_by(id=entity_id) return query def _do_validate(self, values): """Sub-class hook: validate values.""" pass def get_by_secret_id(self, secret_id, session=None): """Return list of secret ACLs by secret id.""" session = self.get_session(session) query = session.query(models.SecretACL) query = query.filter_by(secret_id=secret_id) return query.all() def create_or_replace_from(self, secret, secret_acl, user_ids=None, session=None): session = self.get_session(session) secret.updated_at = timeutils.utcnow() secret_acl.updated_at = timeutils.utcnow() secret.secret_acls.append(secret_acl) secret.save(session=session) self._create_or_replace_acl_users(secret_acl, user_ids, session=session) def _create_or_replace_acl_users(self, secret_acl, user_ids, session=None): """Creates or updates secret acl user based on input user_ids list. user_ids is expected to be list of ids (enforced by schema validation). Input user ids should have complete list of acl users. It does not apply partial update of user ids. If user_ids is None, no change is made in acl user data. If user_ids list is not None, then following change is made. For existing acl users, just update timestamp if user_id is present in input user ids list. Otherwise, remove existing acl user entries. Then add the remaining input user ids as new acl user db entries. 
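
        For example (editorial sketch): if the ACL currently has acl_users
        for ['u1', 'u2'], then::

            self._create_or_replace_acl_users(secret_acl, ['u2', 'u3'])

        refreshes the timestamp for 'u2', deletes the entry for 'u1', and
        appends a new entry for 'u3'.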
""" if user_ids is None: return user_ids = set(user_ids) now = timeutils.utcnow() session = self.get_session(session) secret_acl.updated_at = now for acl_user in secret_acl.acl_users: if acl_user.user_id in user_ids: # input user_id already exists acl_user.updated_at = now user_ids.remove(acl_user.user_id) else: acl_user.delete(session) for user_id in user_ids: acl_user = models.SecretACLUser(secret_acl.id, user_id) secret_acl.acl_users.append(acl_user) secret_acl.save(session=session) def get_count(self, secret_id, session=None): """Gets count of existing secret ACL(s) for a given secret.""" session = self.get_session(session) query = session.query(sa_func.count(models.SecretACL.id)) query = query.filter(models.SecretACL.secret_id == secret_id) return query.scalar() def delete_acls_for_secret(self, secret, session=None): session = self.get_session(session) for entity in secret.secret_acls: entity.delete(session=session) class ContainerACLRepo(BaseRepo): """Repository for the ContainerACL entity. There is no need for ContainerACLUserRepo as none of logic access ContainerACLUser (ACL user data) directly. Its always derived from ContainerACL relationship. ContainerACL and ContainerACLUser data is not soft delete. So there is no need to have deleted=False filter in queries. """ def _do_entity_name(self): """Sub-class hook: return entity name, such as for debugging.""" return "ContainerACL" def _do_build_get_query(self, entity_id, external_project_id, session): """Sub-class hook: build a retrieve query.""" query = session.query(models.ContainerACL) query = query.filter_by(id=entity_id) return query def _do_validate(self, values): """Sub-class hook: validate values.""" pass def get_by_container_id(self, container_id, session=None): """Return list of container ACLs by container id.""" session = self.get_session(session) query = session.query(models.ContainerACL) query = query.filter_by(container_id=container_id) return query.all() def create_or_replace_from(self, container, container_acl, user_ids=None, session=None): session = self.get_session(session) container.updated_at = timeutils.utcnow() container_acl.updated_at = timeutils.utcnow() container.container_acls.append(container_acl) container.save(session=session) self._create_or_replace_acl_users(container_acl, user_ids, session) def _create_or_replace_acl_users(self, container_acl, user_ids, session=None): """Creates or updates container acl user based on input user_ids list. user_ids is expected to be list of ids (enforced by schema validation). Input user ids should have complete list of acl users. It does not apply partial update of user ids. If user_ids is None, no change is made in acl user data. If user_ids list is not None, then following change is made. For existing acl users, just update timestamp if user_id is present in input user ids list. Otherwise, remove existing acl user entries. Then add the remaining input user ids as new acl user db entries. 
""" if user_ids is None: return user_ids = set(user_ids) now = timeutils.utcnow() session = self.get_session(session) container_acl.updated_at = now for acl_user in container_acl.acl_users: if acl_user.user_id in user_ids: # input user_id already exists acl_user.updated_at = now user_ids.remove(acl_user.user_id) else: acl_user.delete(session) for user_id in user_ids: acl_user = models.ContainerACLUser(container_acl.id, user_id) container_acl.acl_users.append(acl_user) container_acl.save(session=session) def get_count(self, container_id, session=None): """Gets count of existing container ACL(s) for a given container.""" session = self.get_session(session) query = session.query(sa_func.count(models.ContainerACL.id)) query = query.filter(models.ContainerACL.container_id == container_id) return query.scalar() def delete_acls_for_container(self, container, session=None): session = self.get_session(session) for entity in container.container_acls: entity.delete(session=session) class ProjectQuotasRepo(BaseRepo): """Repository for the ProjectQuotas entity.""" def _do_entity_name(self): """Sub-class hook: return entity name, such as for debugging.""" return "ProjectQuotas" def _do_build_get_query(self, entity_id, external_project_id, session): """Sub-class hook: build a retrieve query.""" return session.query(models.ProjectQuotas).filter_by(id=entity_id) def _do_validate(self, values): """Sub-class hook: validate values.""" pass def get_by_create_date(self, offset_arg=None, limit_arg=None, suppress_exception=False, session=None): """Returns a list of ProjectQuotas The list is ordered by the date they were created at and paged based on the offset and limit fields. :param offset_arg: The entity number where the query result should start. :param limit_arg: The maximum amount of entities in the result set. :param suppress_exception: Whether NoResultFound exceptions should be suppressed. :param session: SQLAlchemy session object. :raises NotFound: if no quota config is found for the project :returns: Tuple consisting of (list_of_entities, offset, limit, total). """ offset, limit = clean_paging_values(offset_arg, limit_arg) session = self.get_session(session) query = session.query(models.ProjectQuotas) query = query.order_by(models.ProjectQuotas.created_at) query = query.join(models.Project, models.ProjectQuotas.project) start = offset end = offset + limit LOG.debug('Retrieving from %s to %s', start, end) total = query.count() entities = query.offset(start).limit(limit).all() LOG.debug('Number entities retrieved: %s out of %s', len(entities), total) if total <= 0 and not suppress_exception: _raise_no_entities_found(self._do_entity_name()) return entities, offset, limit, total def create_or_update_by_project_id(self, project_id, parsed_project_quotas, session=None): """Create or update Project Quotas config for a project by project_id. :param project_id: ID of project whose quota config will be saved :param parsed_project_quotas: Python dict with quota definition :param session: SQLAlchemy session object. 
class ProjectQuotasRepo(BaseRepo):
    """Repository for the ProjectQuotas entity."""
    def _do_entity_name(self):
        """Sub-class hook: return entity name, such as for debugging."""
        return "ProjectQuotas"

    def _do_build_get_query(self, entity_id, external_project_id, session):
        """Sub-class hook: build a retrieve query."""
        return session.query(models.ProjectQuotas).filter_by(id=entity_id)

    def _do_validate(self, values):
        """Sub-class hook: validate values."""
        pass

    def get_by_create_date(self, offset_arg=None, limit_arg=None,
                           suppress_exception=False, session=None):
        """Returns a list of ProjectQuotas.

        The list is ordered by the date the entities were created at and
        paged based on the offset and limit fields.

        :param offset_arg: The entity number where the query result should
                           start.
        :param limit_arg: The maximum amount of entities in the result set.
        :param suppress_exception: Whether NoResultFound exceptions should be
                                   suppressed.
        :param session: SQLAlchemy session object.
        :raises NotFound: if no project quota configs are found
        :returns: Tuple consisting of (list_of_entities, offset, limit,
                  total).
        """
        offset, limit = clean_paging_values(offset_arg, limit_arg)

        session = self.get_session(session)

        query = session.query(models.ProjectQuotas)
        query = query.order_by(models.ProjectQuotas.created_at)
        query = query.join(models.Project, models.ProjectQuotas.project)

        start = offset
        end = offset + limit
        LOG.debug('Retrieving from %s to %s', start, end)
        total = query.count()
        entities = query.offset(start).limit(limit).all()
        LOG.debug('Number entities retrieved: %s out of %s',
                  len(entities), total)

        if total <= 0 and not suppress_exception:
            _raise_no_entities_found(self._do_entity_name())

        return entities, offset, limit, total

    def create_or_update_by_project_id(self, project_id,
                                       parsed_project_quotas,
                                       session=None):
        """Create or update Project Quotas config for a project by project_id.

        :param project_id: ID of project whose quota config will be saved
        :param parsed_project_quotas: Python dict with quota definition
        :param session: SQLAlchemy session object.
        :return: None
        """
        session = self.get_session(session)
        query = session.query(models.ProjectQuotas)
        query = query.filter_by(project_id=project_id)
        try:
            entity = query.one()
        except sa_orm.exc.NoResultFound:
            self.create_from(
                models.ProjectQuotas(project_id, parsed_project_quotas),
                session=session)
        else:
            self._update_values(entity, parsed_project_quotas)
            entity.save(session)

    def get_by_external_project_id(self, external_project_id,
                                   suppress_exception=False, session=None):
        """Return configured Project Quotas for a project by project_id.

        :param external_project_id: external ID of project to get quotas for
        :param suppress_exception: when True, NotFound is not raised
        :param session: SQLAlchemy session object.
        :raises NotFound: if no quota config is found for the project
        :return: None or the ProjectQuotas entity for the project
        """
        session = self.get_session(session)
        query = session.query(models.ProjectQuotas)
        query = query.join(models.Project, models.ProjectQuotas.project)
        query = query.filter(
            models.Project.external_id == external_project_id)
        try:
            entity = query.one()
        except sa_orm.exc.NoResultFound:
            if suppress_exception:
                return None
            else:
                _raise_no_entities_found(self._do_entity_name())
        return entity

    def delete_by_external_project_id(self, external_project_id,
                                      suppress_exception=False,
                                      session=None):
        """Remove configured Project Quotas for a project by project_id.

        :param external_project_id: external ID of project to delete quotas
        :param suppress_exception: when True, NotFound is not raised
        :param session: SQLAlchemy session object.
        :raises NotFound: if no quota config is found for the project
        :return: None
        """
        session = self.get_session(session)
        query = session.query(models.ProjectQuotas)
        query = query.join(models.Project, models.ProjectQuotas.project)
        query = query.filter(
            models.Project.external_id == external_project_id)
        try:
            entity = query.one()
        except sa_orm.exc.NoResultFound:
            if suppress_exception:
                return
            else:
                _raise_no_entities_found(self._do_entity_name())
        entity.delete(session=session)
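
# Editor's note: a hedged usage sketch (not part of barbican) of the
# create-or-update flow above; the repo would come from
# get_project_quotas_repository() and the quota keys shown are illustrative.
def _set_project_quotas_example(repo, project_id):
    # The first call inserts a row; the second updates it in place, because
    # create_or_update_by_project_id looks the row up by project_id before
    # deciding between create_from() and _update_values().
    repo.create_or_update_by_project_id(
        project_id, {'secrets': 500, 'containers': 100})
    repo.create_or_update_by_project_id(
        project_id, {'secrets': 1000, 'containers': 100})
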
class SecretStoresRepo(BaseRepo):
    """Repository for the SecretStores entity.

    SecretStores entries are not soft deleted, so there is no need for a
    deleted=False filter in queries.
    """

    def get_all(self, session=None):
        """Get list of available secret stores.

        The status value is not used when getting the complete list, as only
        ACTIVE entries are maintained; no other state is used or needed here.

        :param session: SQLAlchemy session object.
        :return: list of secret store entities
        """
        session = self.get_session(session)
        query = session.query(models.SecretStores)
        query = query.order_by(models.SecretStores.created_at.asc())
        return query.all()

    def _do_entity_name(self):
        """Sub-class hook: return entity name, such as for debugging."""
        return "SecretStores"

    def _do_build_get_query(self, entity_id, external_project_id, session):
        """Sub-class hook: build a retrieve query."""
        return session.query(models.SecretStores).filter_by(
            id=entity_id)

    def _do_validate(self, values):
        """Sub-class hook: validate values."""
        pass


class ProjectSecretStoreRepo(BaseRepo):
    """Repository for the ProjectSecretStore entity.

    ProjectSecretStore entries are not soft deleted, so there is no need for
    a deleted=False filter in queries.
    """

    def get_secret_store_for_project(self, project_id, external_project_id,
                                     suppress_exception=False, session=None):
        """Returns the preferred secret store for a project, if set.

        :param project_id: ID of project whose preferred secret store is set
        :param external_project_id: external ID of project whose preferred
               secret store is set
        :param suppress_exception: when True, NotFound is not raised
        :param session: SQLAlchemy session object.

        Looks up the preferred secret store by external project id when one
        is provided; otherwise the barbican project identifier is used for
        the lookup. Raises NotFound when no preferred secret store is defined
        and suppress_exception=False; when suppress_exception is True,
        returns None instead.
        """
        session = self.get_session(session)
        if external_project_id is None:
            query = session.query(models.ProjectSecretStore).filter_by(
                project_id=project_id)
        else:
            query = session.query(models.ProjectSecretStore)
            query = query.join(models.Project,
                               models.ProjectSecretStore.project)
            query = query.filter(
                models.Project.external_id == external_project_id)
        try:
            entity = query.one()
        except sa_orm.exc.NoResultFound:
            LOG.info("No preferred secret store found for project = %s",
                     project_id)
            entity = None
            if not suppress_exception:
                _raise_entity_not_found(self._do_entity_name(), project_id)
        return entity

    def create_or_update_for_project(self, project_id, secret_store_id,
                                     session=None):
        """Create or update the preferred secret store for a project.

        :param project_id: ID of project whose preferred secret store is set
        :param secret_store_id: ID of secret store
        :param session: SQLAlchemy session object.
        :return: the preferred secret store entity

        If no preferred secret store is set for the given project, a new
        preferred secret store setting is created for that project. If a
        setting already exists, it is updated with the given secret store id.
        """
        session = self.get_session(session)
        try:
            entity = self.get_secret_store_for_project(project_id, None,
                                                       session=session)
        except exception.NotFound:
            entity = self.create_from(
                models.ProjectSecretStore(project_id, secret_store_id),
                session=session)
        else:
            entity.secret_store_id = secret_store_id
            entity.save(session)
        return entity

    def get_count_by_secret_store(self, secret_store_id, session=None):
        """Gets count of projects mapped to a given secret store.

        :param secret_store_id: id of secret stores entity
        :param session: existing db session reference. If None, gets session.
        :return: a number, 0 or greater

        Provides the count of projects that are currently set to use the
        input secret store as their preferred store. This is used when an
        existing secret store configuration is removed, to validate that no
        projects are still using it as their preferred secret store.
        """
        session = self.get_session(session)
        query = session.query(models.ProjectSecretStore).filter_by(
            secret_store_id=secret_store_id)
        return query.count()

    def _do_entity_name(self):
        """Sub-class hook: return entity name, such as for debugging."""
        return "ProjectSecretStore"

    def _do_build_get_query(self, entity_id, external_project_id, session):
        """Sub-class hook: build a retrieve query."""
        return session.query(models.ProjectSecretStore).filter_by(
            id=entity_id)

    def _do_validate(self, values):
        """Sub-class hook: validate values."""
        pass

    def _build_get_project_entities_query(self, project_id, session):
        """Builds query for getting preferred secret stores for a project.

        :param project_id: id of barbican project entity
        :param session: existing db session reference.
        """
        return session.query(models.ProjectSecretStore).filter_by(
            project_id=project_id)
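
# Editor's note: a hedged usage sketch (not part of barbican) tying the two
# preferred-store methods above together; argument names are illustrative.
def _pin_project_to_store_example(repo, project_id, secret_store_id):
    # Setting a preference is idempotent at the project level: a second call
    # updates the existing row rather than inserting a duplicate, since
    # project_id is unique in the project_secret_store table.
    pref = repo.create_or_update_for_project(project_id, secret_store_id)
    # suppress_exception=True turns "no preference set" into a None result
    # instead of a NotFound exception.
    current = repo.get_secret_store_for_project(
        project_id, None, suppress_exception=True)
    return pref, current
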
""" return session.query(models.ProjectSecretStore).filter_by( project_id=project_id) def get_ca_repository(): """Returns a singleton Secret repository instance.""" global _CA_REPOSITORY return _get_repository(_CA_REPOSITORY, CertificateAuthorityRepo) def get_container_acl_repository(): """Returns a singleton Container ACL repository instance.""" global _CONTAINER_ACL_REPOSITORY return _get_repository(_CONTAINER_ACL_REPOSITORY, ContainerACLRepo) def get_container_consumer_repository(): """Returns a singleton Container Consumer repository instance.""" global _CONTAINER_CONSUMER_REPOSITORY return _get_repository(_CONTAINER_CONSUMER_REPOSITORY, ContainerConsumerRepo) def get_container_repository(): """Returns a singleton Container repository instance.""" global _CONTAINER_REPOSITORY return _get_repository(_CONTAINER_REPOSITORY, ContainerRepo) def get_container_secret_repository(): """Returns a singleton Container-Secret repository instance.""" global _CONTAINER_SECRET_REPOSITORY return _get_repository(_CONTAINER_SECRET_REPOSITORY, ContainerSecretRepo) def get_encrypted_datum_repository(): """Returns a singleton Encrypted Datum repository instance.""" global _ENCRYPTED_DATUM_REPOSITORY return _get_repository(_ENCRYPTED_DATUM_REPOSITORY, EncryptedDatumRepo) def get_kek_datum_repository(): """Returns a singleton KEK Datum repository instance.""" global _KEK_DATUM_REPOSITORY return _get_repository(_KEK_DATUM_REPOSITORY, KEKDatumRepo) def get_order_plugin_meta_repository(): """Returns a singleton Order-Plugin meta repository instance.""" global _ORDER_PLUGIN_META_REPOSITORY return _get_repository(_ORDER_PLUGIN_META_REPOSITORY, OrderPluginMetadatumRepo) def get_order_barbican_meta_repository(): """Returns a singleton Order-Barbican meta repository instance.""" global _ORDER_BARBICAN_META_REPOSITORY return _get_repository(_ORDER_BARBICAN_META_REPOSITORY, OrderBarbicanMetadatumRepo) def get_order_repository(): """Returns a singleton Order repository instance.""" global _ORDER_REPOSITORY return _get_repository(_ORDER_REPOSITORY, OrderRepo) def get_order_retry_tasks_repository(): """Returns a singleton OrderRetryTask repository instance.""" global _ORDER_RETRY_TASK_REPOSITORY return _get_repository(_ORDER_RETRY_TASK_REPOSITORY, OrderRetryTaskRepo) def get_preferred_ca_repository(): """Returns a singleton Secret repository instance.""" global _PREFERRED_CA_REPOSITORY return _get_repository(_PREFERRED_CA_REPOSITORY, PreferredCertificateAuthorityRepo) def get_project_repository(): """Returns a singleton Project repository instance.""" global _PROJECT_REPOSITORY return _get_repository(_PROJECT_REPOSITORY, ProjectRepo) def get_project_ca_repository(): """Returns a singleton Secret repository instance.""" global _PROJECT_CA_REPOSITORY return _get_repository(_PROJECT_CA_REPOSITORY, ProjectCertificateAuthorityRepo) def get_project_quotas_repository(): """Returns a singleton Project Quotas repository instance.""" global _PROJECT_QUOTAS_REPOSITORY return _get_repository(_PROJECT_QUOTAS_REPOSITORY, ProjectQuotasRepo) def get_secret_acl_repository(): """Returns a singleton Secret ACL repository instance.""" global _SECRET_ACL_REPOSITORY return _get_repository(_SECRET_ACL_REPOSITORY, SecretACLRepo) def get_secret_meta_repository(): """Returns a singleton Secret meta repository instance.""" global _SECRET_META_REPOSITORY return _get_repository(_SECRET_META_REPOSITORY, SecretStoreMetadatumRepo) def get_secret_user_meta_repository(): """Returns a singleton Secret user meta repository instance.""" global 
_SECRET_USER_META_REPOSITORY return _get_repository(_SECRET_USER_META_REPOSITORY, SecretUserMetadatumRepo) def get_secret_repository(): """Returns a singleton Secret repository instance.""" global _SECRET_REPOSITORY return _get_repository(_SECRET_REPOSITORY, SecretRepo) def get_transport_key_repository(): """Returns a singleton Transport Key repository instance.""" global _TRANSPORT_KEY_REPOSITORY return _get_repository(_TRANSPORT_KEY_REPOSITORY, TransportKeyRepo) def get_secret_stores_repository(): """Returns a singleton Secret Stores repository instance.""" global _SECRET_STORES_REPOSITORY return _get_repository(_SECRET_STORES_REPOSITORY, SecretStoresRepo) def get_project_secret_store_repository(): """Returns a singleton Project Secret Store repository instance.""" global _PROJECT_SECRET_STORE_REPOSITORY return _get_repository(_PROJECT_SECRET_STORE_REPOSITORY, ProjectSecretStoreRepo) def _get_repository(global_ref, repo_class): if not global_ref: global_ref = repo_class() return global_ref def _raise_entity_not_found(entity_name, entity_id): raise exception.NotFound(u._("No {entity} found with ID {id}").format( entity=entity_name, id=entity_id)) def _raise_entity_id_not_found(entity_id): raise exception.NotFound(u._("Entity ID {entity_id} not " "found").format(entity_id=entity_id)) def _raise_no_entities_found(entity_name): raise exception.NotFound( u._("No entities of type {entity_name} found").format( entity_name=entity_name)) barbican-6.0.0/barbican/model/migration/0000775000175100017510000000000013245511177020142 5ustar zuulzuul00000000000000barbican-6.0.0/barbican/model/migration/alembic_migrations/0000775000175100017510000000000013245511177023772 5ustar zuulzuul00000000000000barbican-6.0.0/barbican/model/migration/alembic_migrations/versions/0000775000175100017510000000000013245511177025642 5ustar zuulzuul00000000000000././@LongLink0000000000000000000000000000017400000000000011217 Lustar 00000000000000barbican-6.0.0/barbican/model/migration/alembic_migrations/versions/3041b53b95d7_remove_size_limits_on_meta_table_values.pybarbican-6.0.0/barbican/model/migration/alembic_migrations/versions/3041b53b95d7_remove_size_limits_0000666000175100017510000000204713245511001033357 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Remove size limits on meta table values Revision ID: 3041b53b95d7 Revises: 1a7cf79559e3 Create Date: 2015-04-08 15:43:32.852529 """ # revision identifiers, used by Alembic. 
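
# Editor's note: a hedged usage sketch (not part of barbican) of the accessor
# functions above. Repository methods accept an optional session argument, so
# callers typically fetch a repo here and pass their own session through.
from barbican.model import repositories

def _count_secret_acls_example(secret_id):
    repo = repositories.get_secret_acl_repository()
    return repo.get_count(secret_id)
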
# ---- barbican/model/migration/alembic_migrations/versions/3041b53b95d7_remove_size_limits_on_meta_table_values.py ----
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Remove size limits on meta table values

Revision ID: 3041b53b95d7
Revises: 1a7cf79559e3
Create Date: 2015-04-08 15:43:32.852529

"""

# revision identifiers, used by Alembic.
revision = '3041b53b95d7'
down_revision = '1a7cf79559e3'

from alembic import op
import sqlalchemy as sa


def upgrade():
    op.alter_column(
        'order_barbican_metadata',
        'value',
        type_=sa.Text()
    )
    op.alter_column(
        'certificate_authority_metadata',
        'value',
        type_=sa.Text()
    )


# ---- barbican/model/migration/alembic_migrations/versions/30dba269cc64_update_order_retry_tasks_table.py ----
# (Apache License 2.0 header, as above)

"""Update order_retry_tasks table

Revision ID: 30dba269cc64
Revises: 3041b53b95d7
Create Date: 2015-04-01 17:53:25.447919

"""

# revision identifiers, used by Alembic.
revision = '30dba269cc64'
down_revision = '3041b53b95d7'

from oslo_utils import timeutils

from alembic import op
from barbican.model import models as m
import sqlalchemy as sa


def upgrade():
    op.add_column(
        'order_retry_tasks',
        sa.Column('created_at', sa.DateTime(), nullable=False,
                  server_default=str(timeutils.utcnow())))
    op.add_column(
        'order_retry_tasks',
        sa.Column('deleted', sa.Boolean(), nullable=False,
                  server_default='0'))
    op.add_column(
        'order_retry_tasks',
        sa.Column('deleted_at', sa.DateTime(), nullable=True))
    op.add_column(
        'order_retry_tasks',
        sa.Column('status', sa.String(length=20), nullable=False,
                  server_default=m.States.PENDING))
    op.add_column(
        'order_retry_tasks',
        sa.Column('updated_at', sa.DateTime(), nullable=False,
                  server_default=str(timeutils.utcnow())))
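
# Editor's note: a hedged sketch (not part of barbican) of applying this
# migration chain programmatically with alembic's command API; the ini path
# is illustrative. Deployments would normally use the 'barbican-manage'
# utility from the release notes rather than calling alembic directly.
from alembic import command
from alembic.config import Config

def _upgrade_to_head(ini_path='alembic.ini'):
    cfg = Config(ini_path)
    command.upgrade(cfg, 'head')  # follows each revision's down_revision link
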
# ---- barbican/model/migration/alembic_migrations/versions/4070806f6972_add_orders_plugin_metadata_table_and_.py ----
# (Apache License 2.0 header, as above)

"""Add orders plugin metadata table and relationships

Revision ID: 4070806f6972
Revises: 47b69e523451
Create Date: 2014-08-21 14:06:48.237701

"""

# revision identifiers, used by Alembic.
revision = '4070806f6972'
down_revision = '47b69e523451'

from alembic import op
import sqlalchemy as sa


def upgrade():
    ctx = op.get_context()
    con = op.get_bind()
    table_exists = ctx.dialect.has_table(con.engine, 'order_plugin_metadata')
    if not table_exists:
        op.create_table(
            'order_plugin_metadata',
            sa.Column('id', sa.String(length=36), nullable=False),
            sa.Column('created_at', sa.DateTime(), nullable=False),
            sa.Column('updated_at', sa.DateTime(), nullable=False),
            sa.Column('deleted_at', sa.DateTime(), nullable=True),
            sa.Column('deleted', sa.Boolean(), nullable=False),
            sa.Column('status', sa.String(length=20), nullable=False),
            sa.Column('order_id', sa.String(length=36), nullable=False),
            sa.Column('key', sa.String(length=255), nullable=False),
            sa.Column('value', sa.String(length=255), nullable=False),
            sa.ForeignKeyConstraint(['order_id'], ['orders.id'],),
            sa.PrimaryKeyConstraint('id'),
        )


# ---- barbican/model/migration/alembic_migrations/versions/443d6f4a69ac_added_secret_type_column_to_secrets_.py ----
# (Apache License 2.0 header, as above)

"""added secret type column to secrets table

Revision ID: 443d6f4a69ac
Revises: aa2cf96a1d5
Create Date: 2015-02-16 12:35:12.876413

"""

# revision identifiers, used by Alembic.
revision = '443d6f4a69ac'
down_revision = 'aa2cf96a1d5'

from alembic import op
import sqlalchemy as sa


def upgrade():
    op.add_column('secrets', sa.Column('secret_type',
                                       sa.String(length=255),
                                       nullable=False,
                                       server_default="opaque"))
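
# Editor's note: the has_table() guard used in the migration above (and in
# several later ones) makes create_table idempotent, so re-running against a
# database that already has the table is a no-op. A minimal standalone sketch
# of the same pattern, with an illustrative table name:
from alembic import op
import sqlalchemy as sa

def _create_if_missing():
    ctx = op.get_context()
    con = op.get_bind()
    if not ctx.dialect.has_table(con.engine, 'example_table'):
        op.create_table(
            'example_table',
            sa.Column('id', sa.String(length=36), primary_key=True))
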
# ---- barbican/model/migration/alembic_migrations/versions/1a7cf79559e3_new_secret_and_container_acl_tables.py ----
# (Apache License 2.0 header, as above)

"""New secret and container ACL tables

Revision ID: 1a7cf79559e3
Revises: 1c0f328bfce0
Create Date: 2015-04-01 13:31:04.292754

"""

# revision identifiers, used by Alembic.
revision = '1a7cf79559e3'
down_revision = '1c0f328bfce0'

from alembic import op
import sqlalchemy as sa


def upgrade():
    ctx = op.get_context()
    con = op.get_bind()
    table_exists = ctx.dialect.has_table(con.engine, 'secret_acls')
    if not table_exists:
        op.create_table(
            'secret_acls',
            sa.Column('id', sa.String(length=36), nullable=False),
            sa.Column('created_at', sa.DateTime(), nullable=False),
            sa.Column('updated_at', sa.DateTime(), nullable=False),
            sa.Column('deleted_at', sa.DateTime(), nullable=True),
            sa.Column('deleted', sa.Boolean(), nullable=False),
            sa.Column('status', sa.String(length=20), nullable=False),
            sa.Column('secret_id', sa.String(length=36), nullable=False),
            sa.Column('operation', sa.String(length=255), nullable=False),
            sa.Column('creator_only', sa.Boolean(), nullable=False),
            sa.ForeignKeyConstraint(['secret_id'], ['secrets.id'],),
            sa.PrimaryKeyConstraint('id'),
            sa.UniqueConstraint('secret_id', 'operation',
                                name='_secret_acl_operation_uc')
        )
        op.create_index(op.f('ix_secret_acls_secret_id'), 'secret_acls',
                        ['secret_id'], unique=False)

    table_exists = ctx.dialect.has_table(con.engine, 'container_acls')
    if not table_exists:
        op.create_table(
            'container_acls',
            sa.Column('id', sa.String(length=36), nullable=False),
            sa.Column('created_at', sa.DateTime(), nullable=False),
            sa.Column('updated_at', sa.DateTime(), nullable=False),
            sa.Column('deleted_at', sa.DateTime(), nullable=True),
            sa.Column('deleted', sa.Boolean(), nullable=False),
            sa.Column('status', sa.String(length=20), nullable=False),
            sa.Column('container_id', sa.String(length=36), nullable=False),
            sa.Column('operation', sa.String(length=255), nullable=False),
            sa.Column('creator_only', sa.Boolean(), nullable=False),
            sa.ForeignKeyConstraint(['container_id'], ['containers.id'],),
            sa.PrimaryKeyConstraint('id'),
            sa.UniqueConstraint('container_id', 'operation',
                                name='_container_acl_operation_uc')
        )
        op.create_index(op.f('ix_container_acls_container_id'),
                        'container_acls', ['container_id'], unique=False)

    table_exists = ctx.dialect.has_table(con.engine, 'secret_acl_users')
    if not table_exists:
        op.create_table(
            'secret_acl_users',
            sa.Column('id', sa.String(length=36), nullable=False),
            sa.Column('created_at', sa.DateTime(), nullable=False),
            sa.Column('updated_at', sa.DateTime(), nullable=False),
            sa.Column('deleted_at', sa.DateTime(), nullable=True),
            sa.Column('deleted', sa.Boolean(), nullable=False),
            sa.Column('status', sa.String(length=20), nullable=False),
            sa.Column('acl_id', sa.String(length=36), nullable=False),
            sa.Column('user_id', sa.String(length=255), nullable=False),
            sa.ForeignKeyConstraint(['acl_id'], ['secret_acls.id'],),
            sa.PrimaryKeyConstraint('id'),
            sa.UniqueConstraint('acl_id', 'user_id',
                                name='_secret_acl_user_uc')
        )
        op.create_index(op.f('ix_secret_acl_users_acl_id'),
                        'secret_acl_users', ['acl_id'], unique=False)

    table_exists = ctx.dialect.has_table(con.engine, 'container_acl_users')
    if not table_exists:
        op.create_table(
            'container_acl_users',
            sa.Column('id', sa.String(length=36), nullable=False),
            sa.Column('created_at', sa.DateTime(), nullable=False),
            sa.Column('updated_at', sa.DateTime(), nullable=False),
            sa.Column('deleted_at', sa.DateTime(), nullable=True),
            sa.Column('deleted', sa.Boolean(), nullable=False),
            sa.Column('status', sa.String(length=20), nullable=False),
            sa.Column('acl_id', sa.String(length=36), nullable=False),
            sa.Column('user_id', sa.String(length=255), nullable=False),
            sa.ForeignKeyConstraint(['acl_id'], ['container_acls.id'],),
            sa.PrimaryKeyConstraint('id'),
            sa.UniqueConstraint('acl_id', 'user_id',
                                name='_container_acl_user_uc')
        )
        op.create_index(op.f('ix_container_acl_users_acl_id'),
                        'container_acl_users', ['acl_id'], unique=False)

    op.add_column(u'containers',
                  sa.Column('creator_id', sa.String(length=255),
                            nullable=True))
    op.add_column(u'orders',
                  sa.Column('creator_id', sa.String(length=255),
                            nullable=True))
    op.add_column(u'secrets',
                  sa.Column('creator_id', sa.String(length=255),
                            nullable=True))


# ---- barbican/model/migration/alembic_migrations/versions/1e86c18af2dd_add_new_columns_type_meta_containerid.py ----
# (Apache License 2.0 header, as above)

"""add new columns type meta containerId

Revision ID: 1e86c18af2dd
Revises: 13d127569afa
Create Date: 2014-06-04 09:53:27.116054

"""

# revision identifiers, used by Alembic.
revision = '1e86c18af2dd'
down_revision = '13d127569afa'

from alembic import op
import sqlalchemy as sa


def upgrade():
    op.add_column('orders', sa.Column('container_id', sa.String(length=36),
                                      nullable=True))
    op.add_column('orders', sa.Column('meta', sa.Text, nullable=True))
    op.add_column('orders', sa.Column('type', sa.String(length=255),
                                      nullable=True))
# ---- barbican/model/migration/alembic_migrations/versions/39cf2e645cba_model_for_multiple_backend_support.py ----
# (Apache License 2.0 header, as above)

"""Model for multiple backend support

Revision ID: 39cf2e645cba
Revises: d2780d5aa510
Create Date: 2016-07-29 16:45:22.953811

"""

# revision identifiers, used by Alembic.
revision = '39cf2e645cba'
down_revision = 'd2780d5aa510'

from alembic import op
import sqlalchemy as sa


def upgrade():
    ctx = op.get_context()
    con = op.get_bind()
    table_exists = ctx.dialect.has_table(con.engine, 'secret_stores')
    if not table_exists:
        op.create_table(
            'secret_stores',
            sa.Column('id', sa.String(length=36), nullable=False),
            sa.Column('created_at', sa.DateTime(), nullable=False),
            sa.Column('updated_at', sa.DateTime(), nullable=False),
            sa.Column('deleted_at', sa.DateTime(), nullable=True),
            sa.Column('deleted', sa.Boolean(), nullable=False),
            sa.Column('status', sa.String(length=20), nullable=False),
            sa.Column('store_plugin', sa.String(length=255), nullable=False),
            sa.Column('crypto_plugin', sa.String(length=255), nullable=True),
            sa.Column('global_default', sa.Boolean(), nullable=False,
                      default=False),
            sa.Column('name', sa.String(length=255), nullable=False),
            sa.PrimaryKeyConstraint('id'),
            sa.UniqueConstraint('store_plugin', 'crypto_plugin',
                                name='_secret_stores_plugin_names_uc'),
            sa.UniqueConstraint('name', name='_secret_stores_name_uc')
        )

    table_exists = ctx.dialect.has_table(con.engine, 'project_secret_store')
    if not table_exists:
        op.create_table(
            'project_secret_store',
            sa.Column('id', sa.String(length=36), nullable=False),
            sa.Column('created_at', sa.DateTime(), nullable=False),
            sa.Column('updated_at', sa.DateTime(), nullable=False),
            sa.Column('deleted_at', sa.DateTime(), nullable=True),
            sa.Column('deleted', sa.Boolean(), nullable=False),
            sa.Column('status', sa.String(length=20), nullable=False),
            sa.Column('project_id', sa.String(length=36), nullable=False),
            sa.Column('secret_store_id', sa.String(length=36),
                      nullable=False),
            sa.ForeignKeyConstraint(['project_id'], ['projects.id'],),
            sa.ForeignKeyConstraint(
                ['secret_store_id'], ['secret_stores.id'],),
            sa.PrimaryKeyConstraint('id'),
            sa.UniqueConstraint('project_id',
                                name='_project_secret_store_project_uc')
        )
        op.create_index(op.f('ix_project_secret_store_project_id'),
                        'project_secret_store', ['project_id'], unique=True)
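
# Editor's note: a hedged post-migration check (not part of barbican) showing
# one way to confirm that the tables created by the multiple-backend
# migration above exist, using SQLAlchemy's inspector; 'engine' is assumed to
# be an already-connected Engine.
import sqlalchemy as sa

def _verify_backend_tables(engine):
    inspector = sa.inspect(engine)
    tables = inspector.get_table_names()
    return ('secret_stores' in tables and
            'project_secret_store' in tables)
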
# ---- barbican/model/migration/alembic_migrations/versions/256da65e0c5f_change_keystone_id_for_external_id_in_.py ----
# (Apache License 2.0 header, as above)

"""Change keystone_id for external_id in Project model

Revision ID: 256da65e0c5f
Revises: 795737bb3c3
Create Date: 2014-12-22 03:55:29.072375

"""

# revision identifiers, used by Alembic.
revision = '256da65e0c5f'
down_revision = '795737bb3c3'

from alembic import op
import sqlalchemy as sa


def upgrade():
    op.alter_column('projects', 'keystone_id',
                    type_=sa.String(36),
                    new_column_name='external_id')


# ---- barbican/model/migration/alembic_migrations/versions/d2780d5aa510_change_url_length.py ----
# (Apache License 2.0 header, as above)

"""change_url_length

Revision ID: d2780d5aa510
Revises: dce488646127
Create Date: 2016-03-11 09:39:32.593231

"""

# revision identifiers, used by Alembic.
revision = 'd2780d5aa510'
down_revision = 'dce488646127'

from alembic import op
import sqlalchemy as sa


def upgrade():
    op.alter_column(
        'container_consumer_metadata',
        'URL',
        type_=sa.String(length=255)
    )


# ---- barbican/model/migration/alembic_migrations/versions/1bece815014f_remove_projectsecret_table.py ----
# (Apache License 2.0 header, as above)

"""remove ProjectSecret table

Revision ID: 1bece815014f
Revises: 161f8aceb687
Create Date: 2015-06-23 16:17:50.805295

"""

# revision identifiers, used by Alembic.
revision = '1bece815014f'
down_revision = '161f8aceb687'

from alembic import op


def upgrade():
    op.drop_table('project_secret')


# ---- barbican/model/migration/alembic_migrations/versions/4ecde3a3a72a_add_cas_column_to_project_quotas_table.py ----
# (Apache License 2.0 header, as above)

"""Add cas column to project quotas table

Revision ID: 4ecde3a3a72a
Revises: 10220ccbe7fa
Create Date: 2015-09-09 09:40:08.540064

"""

# revision identifiers, used by Alembic.
revision = '4ecde3a3a72a'
down_revision = '10220ccbe7fa'

from alembic import op
import sqlalchemy as sa


def upgrade():
    op.add_column(
        'project_quotas',
        sa.Column('cas', sa.Integer(), nullable=True))


# ---- barbican/model/migration/alembic_migrations/versions/dce488646127_add_secret_user_metadata.py ----
# Copyright (c) 2015 IBM
# All Rights Reserved.
# (Apache License 2.0 header, as above)

"""add-secret-user-metadata

Revision ID: dce488646127
Revises: 39a96e67e990
Create Date: 2016-02-09 04:52:03.975486

"""

# revision identifiers, used by Alembic.
revision = 'dce488646127'
down_revision = '39a96e67e990'

from alembic import op
import sqlalchemy as sa


def upgrade():
    ctx = op.get_context()
    con = op.get_bind()
    table_exists = ctx.dialect.has_table(con.engine, 'secret_user_metadata')
    if not table_exists:
        op.create_table(
            'secret_user_metadata',
            sa.Column('id', sa.String(length=36), nullable=False),
            sa.Column('created_at', sa.DateTime(), nullable=False),
            sa.Column('updated_at', sa.DateTime(), nullable=False),
            sa.Column('deleted_at', sa.DateTime(), nullable=True),
            sa.Column('deleted', sa.Boolean(), nullable=False),
            sa.Column('status', sa.String(length=20), nullable=False),
            sa.Column('key', sa.String(length=255), nullable=False),
            sa.Column('value', sa.String(length=255), nullable=False),
            sa.Column('secret_id', sa.String(length=36), nullable=False),
            sa.ForeignKeyConstraint(['secret_id'], ['secrets.id'],),
            sa.PrimaryKeyConstraint('id'),
            sa.UniqueConstraint('secret_id', 'key', name='_secret_key_uc')
        )
"""add owning project and creator to CAs Revision ID: 3c3b04040bfe Revises: 156cd9933643 Create Date: 2015-09-04 12:22:22.745824 """ # revision identifiers, used by Alembic. revision = '3c3b04040bfe' down_revision = '156cd9933643' from alembic import op import sqlalchemy as sa def upgrade(): op.add_column('certificate_authorities', sa.Column('creator_id', sa.String(length=255), nullable=True)) op.add_column('certificate_authorities', sa.Column('project_id', sa.String(length=36), nullable=True)) op.create_foreign_key('cas_project_fk', 'certificate_authorities', 'projects', ['project_id'], ['id']) ././@LongLink0000000000000000000000000000017000000000000011213 Lustar 00000000000000barbican-6.0.0/barbican/model/migration/alembic_migrations/versions/47b69e523451_made_plugin_names_in_kek_datum_non_.pybarbican-6.0.0/barbican/model/migration/alembic_migrations/versions/47b69e523451_made_plugin_names_i0000666000175100017510000000167213245511001033222 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Made plugin names in kek datum non nullable Revision ID: 47b69e523451 Revises: cd4106a1a0 Create Date: 2014-06-16 14:05:45.428226 """ # revision identifiers, used by Alembic. revision = '47b69e523451' down_revision = 'cd4106a1a0' from alembic import op import sqlalchemy as sa def upgrade(): op.alter_column('kek_data', 'plugin_name', type_=sa.String(255), nullable=False) ././@LongLink0000000000000000000000000000015400000000000011215 Lustar 00000000000000barbican-6.0.0/barbican/model/migration/alembic_migrations/versions/46b98cde536_add_project_quotas_table.pybarbican-6.0.0/barbican/model/migration/alembic_migrations/versions/46b98cde536_add_project_quotas_t0000666000175100017510000000427513245511001033524 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add project quotas table Revision ID: 46b98cde536 Revises: 1bece815014f Create Date: 2015-08-28 17:42:35.057103 """ # revision identifiers, used by Alembic. 
revision = '46b98cde536'
down_revision = 'kilo'

from alembic import op
import sqlalchemy as sa


def upgrade():
    ctx = op.get_context()
    con = op.get_bind()
    table_exists = ctx.dialect.has_table(con.engine, 'project_quotas')
    if not table_exists:
        op.create_table(
            'project_quotas',
            sa.Column('id', sa.String(length=36), nullable=False),
            sa.Column('created_at', sa.DateTime(), nullable=False),
            sa.Column('updated_at', sa.DateTime(), nullable=False),
            sa.Column('deleted_at', sa.DateTime(), nullable=True),
            sa.Column('deleted', sa.Boolean(), nullable=False),
            sa.Column('status', sa.String(length=20), nullable=False),
            sa.Column('project_id', sa.String(length=36), nullable=False),
            sa.Column('secrets', sa.Integer(), nullable=True),
            sa.Column('orders', sa.Integer(), nullable=True),
            sa.Column('containers', sa.Integer(), nullable=True),
            sa.Column('transport_keys', sa.Integer(), nullable=True),
            sa.Column('consumers', sa.Integer(), nullable=True),
            sa.ForeignKeyConstraint(['project_id'], ['projects.id'],
                                    name='project_quotas_fk'),
            sa.PrimaryKeyConstraint('id'),
            mysql_engine='InnoDB')
        op.create_index(
            op.f('ix_project_quotas_project_id'),
            'project_quotas',
            ['project_id'],
            unique=False)


# ---- barbican/model/migration/alembic_migrations/versions/161f8aceb687_fill_project_id_to_secrets_where_missing.py ----
# (Apache License 2.0 header, as above)

"""fill project_id to secrets where missing

Revision ID: 161f8aceb687
Revises: 1bc885808c76
Create Date: 2015-06-22 15:58:03.131256

"""

# revision identifiers, used by Alembic.
revision = '161f8aceb687'
down_revision = '1bc885808c76'

from alembic import op
import sqlalchemy as sa


def _get_database_metadata():
    con = op.get_bind()
    metadata = sa.MetaData(bind=con)
    metadata.reflect()
    return metadata


def _drop_constraint(ctx, name, table):
    if ctx.dialect.name == 'mysql':
        # MySQL won't allow some operations with constraints in place
        op.drop_constraint(name, table, type_='foreignkey')


def _create_constraint(ctx, name, tableone, tabletwo, columnone, columntwo):
    if ctx.dialect.name == 'mysql':
        # Recreate foreign key constraint
        op.create_foreign_key(name, tableone, tabletwo, columnone, columntwo)


def upgrade():
    metadata = _get_database_metadata()

    # Get relevant tables
    secrets = metadata.tables['secrets']
    project_secret = metadata.tables['project_secret']

    # Add project_id to the secrets
    op.execute(secrets.update().
               values({'project_id': project_secret.c.project_id}).
               where(secrets.c.id == project_secret.c.secret_id).
               where(secrets.c.project_id == None)  # noqa
               )

    # Need to drop foreign key constraint before mysql will allow changes
    ctx = op.get_context()
    _drop_constraint(ctx, 'secrets_project_fk', 'secrets')

    # make project_id no longer nullable
    op.alter_column('secrets', 'project_id',
                    type_=sa.String(36), nullable=False)

    # Create foreign key constraint again
    _create_constraint(ctx, 'secrets_project_fk', 'secrets', 'projects',
                       ['project_id'], ['id'])


# ---- barbican/model/migration/alembic_migrations/versions/254495565185_removing_redundant_fields_from_order.py ----
# (Apache License 2.0 header, as above)

"""removing redundant fields from order

Revision ID: 254495565185
Revises: 2843d6469f25
Create Date: 2014-09-16 12:09:23.716390

"""

# revision identifiers, used by Alembic.
revision = '254495565185'
down_revision = '2843d6469f25'

from alembic import op


def upgrade():
    op.drop_column('orders', 'secret_mode')
    op.drop_column('orders', 'secret_algorithm')
    op.drop_column('orders', 'secret_bit_length')
    op.drop_column('orders', 'secret_expiration')
    op.drop_column('orders', 'secret_payload_content_type')
    op.drop_column('orders', 'secret_name')
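
# Editor's note: for readers unfamiliar with SQLAlchemy core's multi-table
# update, the backfill in 161f8aceb687 above compiles, on MySQL, to SQL along
# these lines (a hedged illustration, not captured generated output):
BACKFILL_SQL_SKETCH = """
UPDATE secrets, project_secret
SET secrets.project_id = project_secret.project_id
WHERE secrets.id = project_secret.secret_id
  AND secrets.project_id IS NULL
"""
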
# ---- barbican/model/migration/alembic_migrations/versions/2d21598e7e70_added_ca_related_tables.py ----
# (Apache License 2.0 header, as above)

"""Added CA related tables

Revision ID: 2d21598e7e70
Revises: 3d36a26b88af
Create Date: 2015-03-11 15:47:32.292944

"""

# revision identifiers, used by Alembic.
revision = '2d21598e7e70'
down_revision = '3d36a26b88af'

from alembic import op
import sqlalchemy as sa


def upgrade():
    ctx = op.get_context()
    con = op.get_bind()
    table_exists = ctx.dialect.has_table(con.engine,
                                         'certificate_authorities')
    if not table_exists:
        op.create_table(
            'certificate_authorities',
            sa.Column('id', sa.String(length=36), nullable=False),
            sa.Column('created_at', sa.DateTime(), nullable=False),
            sa.Column('updated_at', sa.DateTime(), nullable=False),
            sa.Column('deleted_at', sa.DateTime(), nullable=True),
            sa.Column('deleted', sa.Boolean(), nullable=False),
            sa.Column('status', sa.String(length=20), nullable=False),
            sa.Column('plugin_name', sa.String(length=255), nullable=False),
            sa.Column('plugin_ca_id', sa.Text(), nullable=False),
            sa.Column('expiration', sa.DateTime(), nullable=True),
            sa.PrimaryKeyConstraint('id')
        )

    table_exists = ctx.dialect.has_table(
        con.engine, 'project_certificate_authorities')
    if not table_exists:
        op.create_table(
            'project_certificate_authorities',
            sa.Column('id', sa.String(length=36), nullable=False),
            sa.Column('created_at', sa.DateTime(), nullable=False),
            sa.Column('updated_at', sa.DateTime(), nullable=False),
            sa.Column('deleted_at', sa.DateTime(), nullable=True),
            sa.Column('deleted', sa.Boolean(), nullable=False),
            sa.Column('status', sa.String(length=20), nullable=False),
            sa.Column('project_id', sa.String(length=36), nullable=False),
            sa.Column('ca_id', sa.String(length=36), nullable=False),
            sa.ForeignKeyConstraint(['ca_id'],
                                    ['certificate_authorities.id'],),
            sa.ForeignKeyConstraint(['project_id'], ['projects.id'],),
            sa.PrimaryKeyConstraint('id', 'project_id', 'ca_id'),
            sa.UniqueConstraint('project_id', 'ca_id',
                                name='_project_certificate_authority_uc')
        )

    table_exists = ctx.dialect.has_table(
        con.engine, 'certificate_authority_metadata')
    if not table_exists:
        op.create_table(
            'certificate_authority_metadata',
            sa.Column('id', sa.String(length=36), nullable=False),
            sa.Column('created_at', sa.DateTime(), nullable=False),
            sa.Column('updated_at', sa.DateTime(), nullable=False),
            sa.Column('deleted_at', sa.DateTime(), nullable=True),
            sa.Column('deleted', sa.Boolean(), nullable=False),
            sa.Column('status', sa.String(length=20), nullable=False),
            sa.Column('key', sa.String(length=255), nullable=False),
            sa.Column('value', sa.String(length=255), nullable=False),
            sa.Column('ca_id', sa.String(length=36), nullable=False),
            sa.ForeignKeyConstraint(['ca_id'],
                                    ['certificate_authorities.id'],),
            sa.PrimaryKeyConstraint('id', 'key', 'ca_id'),
            sa.UniqueConstraint('ca_id', 'key',
                                name='_certificate_authority_metadatum_uc')
        )

    table_exists = ctx.dialect.has_table(
        con.engine, 'preferred_certificate_authorities')
    if not table_exists:
        op.create_table(
            'preferred_certificate_authorities',
            sa.Column('id', sa.String(length=36), nullable=False),
            sa.Column('created_at', sa.DateTime(), nullable=False),
            sa.Column('updated_at', sa.DateTime(), nullable=False),
            sa.Column('deleted_at', sa.DateTime(), nullable=True),
            sa.Column('deleted', sa.Boolean(), nullable=False),
            sa.Column('status', sa.String(length=20), nullable=False),
            sa.Column('project_id', sa.String(length=36), nullable=False),
            sa.Column('ca_id', sa.String(length=36), nullable=True),
            sa.ForeignKeyConstraint(['ca_id'],
                                    ['certificate_authorities.id'],),
            sa.ForeignKeyConstraint(['project_id'], ['projects.id'],),
            sa.PrimaryKeyConstraint('id', 'project_id'),
            sa.UniqueConstraint('project_id')
        )


# ---- barbican/model/migration/alembic_migrations/versions/juno_initial.py ----
# Copyright 2015 OpenStack Foundation
# (Apache License 2.0 header, as above)

"""juno_initial

Revision ID: juno
Revises: 1a0c2cdafb38

"""

# revision identifiers, used by Alembic.
revision = 'juno'
down_revision = '1a0c2cdafb38'

from barbican.model.migration.alembic_migrations import container_init_ops
from barbican.model.migration.alembic_migrations import encrypted_init_ops
from barbican.model.migration.alembic_migrations import kek_init_ops
from barbican.model.migration.alembic_migrations import order_ops
from barbican.model.migration.alembic_migrations import projects_init_ops
from barbican.model.migration.alembic_migrations import secrets_init_ops
from barbican.model.migration.alembic_migrations import transport_keys_init_ops


def upgrade():
    projects_init_ops.upgrade()
    secrets_init_ops.upgrade()
    container_init_ops.upgrade()
    kek_init_ops.upgrade()
    encrypted_init_ops.upgrade()
    order_ops.upgrade()
    transport_keys_init_ops.upgrade()


# ---- barbican/model/migration/alembic_migrations/versions/10220ccbe7fa_remove_transport_keys_column_from_.py ----
# (Apache License 2.0 header, as above)

"""Remove transport keys column from project quotas table

Revision ID: 10220ccbe7fa
Revises: 3c3b04040bfe
Create Date: 2015-09-09 09:10:23.812681

"""

# revision identifiers, used by Alembic.
revision = '10220ccbe7fa'
down_revision = '3c3b04040bfe'

from alembic import op


def upgrade():
    op.drop_column('project_quotas', 'transport_keys')
# ---- barbican/model/migration/alembic_migrations/versions/cd4106a1a0_add_cert_to_container_type.py ----
# (Apache License 2.0 header, as above)

"""add-cert-to-container-type

Revision ID: cd4106a1a0
Revises: 1e86c18af2dd
Create Date: 2014-06-10 15:07:25.084173

"""

# revision identifiers, used by Alembic.
revision = 'cd4106a1a0'
down_revision = '1e86c18af2dd'

from alembic import op
import sqlalchemy as sa


def upgrade():
    enum_type = sa.Enum(
        'generic', 'rsa', 'dsa', 'certificate', name='container_types')
    op.alter_column('containers', 'type', type_=enum_type)


# ---- barbican/model/migration/alembic_migrations/versions/6a4457517a3_rename_acl_creator_only_to_project_.py ----
# (Apache License 2.0 header, as above)

"""rename ACL creator_only to project_access

Revision ID: 6a4457517a3
Revises: 30dba269cc64
Create Date: 2015-06-03 11:54:55.187875

"""

# revision identifiers, used by Alembic.
revision = '6a4457517a3'
down_revision = '30dba269cc64'

from alembic import op
import sqlalchemy as sa


def upgrade():
    op.alter_column('secret_acls', 'creator_only',
                    existing_type=sa.BOOLEAN(),
                    new_column_name='project_access')
    # reverse existing flag value as project_access is negation of
    # creator_only
    op.execute('UPDATE secret_acls SET project_access = NOT project_access',
               execution_options={'autocommit': True})

    op.alter_column('container_acls', 'creator_only',
                    existing_type=sa.BOOLEAN(),
                    new_column_name='project_access')
    # reverse existing flag value as project_access is negation of
    # creator_only
    op.execute('UPDATE container_acls SET project_access = NOT project_access',
               execution_options={'autocommit': True})
# ---- barbican/model/migration/alembic_migrations/versions/2843d6469f25_add_sub_status_info_for_orders.py ----
# (Apache License 2.0 header, as above)

"""add sub status info for orders

Revision ID: 2843d6469f25
Revises: 2ab3f5371bde
Create Date: 2014-09-16 12:31:15.181380

"""

# revision identifiers, used by Alembic.
revision = '2843d6469f25'
down_revision = '2ab3f5371bde'

from alembic import op
import sqlalchemy as sa


def upgrade():
    op.add_column('orders', sa.Column('sub_status', sa.String(length=36),
                                      nullable=True))
    op.add_column('orders', sa.Column('sub_status_message',
                                      sa.String(length=255),
                                      nullable=True))


# ---- barbican/model/migration/alembic_migrations/versions/1a0c2cdafb38_initial_version.py ----
# (Apache License 2.0 header, as above)

"""create test table

Revision ID: 1a0c2cdafb38
Revises: None
Create Date: 2013-06-17 16:42:13.634746

"""

# revision identifiers, used by Alembic.
revision = '1a0c2cdafb38'
down_revision = None


def upgrade():
    pass
revision = '3d36a26b88af' down_revision = '443d6f4a69ac' from alembic import op import sqlalchemy as sa def upgrade(): ctx = op.get_context() con = op.get_bind() table_exists = ctx.dialect.has_table(con.engine, 'order_barbican_metadata') if not table_exists: op.create_table( 'order_barbican_metadata', sa.Column('id', sa.String(length=36), nullable=False), sa.Column('created_at', sa.DateTime(), nullable=False), sa.Column('updated_at', sa.DateTime(), nullable=False), sa.Column('deleted_at', sa.DateTime(), nullable=True), sa.Column('deleted', sa.Boolean(), nullable=False), sa.Column('status', sa.String(length=20), nullable=False), sa.Column('order_id', sa.String(length=36), nullable=False), sa.Column('key', sa.String(length=255), nullable=False), sa.Column('value', sa.String(length=255), nullable=False), sa.ForeignKeyConstraint(['order_id'], ['orders.id'], ), sa.PrimaryKeyConstraint('id') ) ././@LongLink0000000000000000000000000000015600000000000011217 Lustar 00000000000000barbican-6.0.0/barbican/model/migration/alembic_migrations/versions/1bc885808c76_add_project_id_to_secrets.pybarbican-6.0.0/barbican/model/migration/alembic_migrations/versions/1bc885808c76_add_project_id_to_s0000666000175100017510000000222713245511001033304 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add project id to Secrets Revision ID: 1bc885808c76 Revises: 6a4457517a3 Create Date: 2015-04-24 13:53:29.926426 """ # revision identifiers, used by Alembic. revision = '1bc885808c76' down_revision = '6a4457517a3' from alembic import op import sqlalchemy as sa def upgrade(): op.add_column('secrets', sa.Column('project_id', sa.String(length=36), nullable=True)) op.create_index(op.f('ix_secrets_project_id'), 'secrets', ['project_id'], unique=False) op.create_foreign_key('secrets_project_fk', 'secrets', 'projects', ['project_id'], ['id']) barbican-6.0.0/barbican/model/migration/alembic_migrations/versions/kilo_release.py0000666000175100017510000000156413245511001030644 0ustar zuulzuul00000000000000# Copyright 2015 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # """kilo Revision ID: kilo Revises: 1bece815014f Create Date: 2015-08-26 00:00:00.000000 """ # revision identifiers, used by Alembic. 
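The has_table guard in 3d36a26b88af (and again in 13d127569afa further down) makes table creation idempotent when the table may already exist. The ctx.dialect.has_table(con.engine, ...) call shown is the older SQLAlchemy idiom; on SQLAlchemy 1.4+ the same check is usually written through the inspector. A sketch, assuming the newer API is available:

    from alembic import op
    import sqlalchemy as sa

    def upgrade():
        # Skip creation when the table already exists (idempotent migration).
        inspector = sa.inspect(op.get_bind())
        if not inspector.has_table('order_barbican_metadata'):
            pass  # op.create_table('order_barbican_metadata', ...) as above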
revision = 'kilo' down_revision = '1bece815014f' def upgrade(): """A no-op migration for marking the Kilo release.""" pass ././@LongLink0000000000000000000000000000014600000000000011216 Lustar 00000000000000barbican-6.0.0/barbican/model/migration/alembic_migrations/versions/aa2cf96a1d5_add_orderretrytask.pybarbican-6.0.0/barbican/model/migration/alembic_migrations/versions/aa2cf96a1d5_add_orderretrytask.p0000666000175100017510000000265513245511001033601 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add OrderRetryTask Revision ID: aa2cf96a1d5 Revises: 256da65e0c5f Create Date: 2015-01-19 10:27:19.179196 """ # revision identifiers, used by Alembic. revision = "aa2cf96a1d5" down_revision = "256da65e0c5f" from alembic import op import sqlalchemy as sa def upgrade(): op.create_table( "order_retry_tasks", sa.Column("id", sa.String(length=36), nullable=False), sa.Column("order_id", sa.String(length=36), nullable=False), sa.Column("retry_task", sa.Text(), nullable=False), sa.Column("retry_at", sa.DateTime(), nullable=False), sa.Column("retry_args", sa.Text(), nullable=False), sa.Column("retry_kwargs", sa.Text(), nullable=False), sa.Column("retry_count", sa.Integer(), nullable=False), sa.ForeignKeyConstraint(["order_id"], ["orders.id"]), sa.PrimaryKeyConstraint("id"), mysql_engine="InnoDB" ) ././@LongLink0000000000000000000000000000016700000000000011221 Lustar 00000000000000barbican-6.0.0/barbican/model/migration/alembic_migrations/versions/13d127569afa_create_secret_store_metadata_table.pybarbican-6.0.0/barbican/model/migration/alembic_migrations/versions/13d127569afa_create_secret_store0000666000175100017510000000357013245511001033417 0ustar zuulzuul00000000000000# Copyright (c) 2014 The Johns Hopkins University/Applied Physics Laboratory # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """create_secret_store_metadata_table Revision ID: 13d127569afa Revises: juno Create Date: 2014-04-24 13:15:41.858266 """ # revision identifiers, used by Alembic. 
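kilo_release.py above is a pure marker: it records a release boundary in the revision graph without touching the schema. An existing database can be tagged at such a marker without running any migrations by stamping it; a sketch using Alembic's command API directly (the ini path and database URL are illustrative):

    from alembic import command
    from alembic.config import Config

    cfg = Config('barbican/model/migration/alembic.ini')
    cfg.set_main_option('sqlalchemy.url', 'sqlite:///barbican.sqlite')
    # Record revision 'kilo' in alembic_version without running migrations.
    command.stamp(cfg, 'kilo')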
revision = '13d127569afa' down_revision = 'juno' from alembic import op import sqlalchemy as sa def upgrade(): ctx = op.get_context() con = op.get_bind() table_exists = ctx.dialect.has_table(con.engine, 'secret_store_metadata') if not table_exists: op.create_table( 'secret_store_metadata', sa.Column('id', sa.String(length=36), nullable=False), sa.Column('created_at', sa.DateTime(), nullable=False), sa.Column('updated_at', sa.DateTime(), nullable=False), sa.Column('deleted_at', sa.DateTime(), nullable=True), sa.Column('deleted', sa.Boolean(), nullable=False), sa.Column('status', sa.String(length=20), nullable=False), sa.Column('secret_id', sa.String(length=36), nullable=False), sa.Column('key', sa.String(length=255), nullable=False), sa.Column('value', sa.String(length=255), nullable=False), sa.ForeignKeyConstraint(['secret_id'], ['secrets.id'],), sa.PrimaryKeyConstraint('id'), ) ././@LongLink0000000000000000000000000000015400000000000011215 Lustar 00000000000000barbican-6.0.0/barbican/model/migration/alembic_migrations/versions/39a96e67e990_add_missing_constraints.pybarbican-6.0.0/barbican/model/migration/alembic_migrations/versions/39a96e67e990_add_missing_constra0000666000175100017510000000317213245511001033352 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add missing constraints Revision ID: 39a96e67e990 Revises: 4ecde3a3a72a Create Date: 2016-01-26 13:18:06.113621 """ # revision identifiers, used by Alembic. revision = '39a96e67e990' down_revision = '4ecde3a3a72a' from alembic import op import sqlalchemy as sa def upgrade(): # Add missing projects table keystone_id uniqueness constraint. op.create_unique_constraint('uc_projects_external_ids', 'projects', ['external_id']) # Add missing default for secret_acls' project_access. op.alter_column('secret_acls', 'project_access', server_default=sa.sql.expression.true(), existing_type=sa.Boolean, existing_server_default=None, existing_nullable=False) # Add missing default for container_acls' project_access. op.alter_column('container_acls', 'project_access', server_default=sa.sql.expression.true(), existing_type=sa.Boolean, existing_server_default=None, existing_nullable=False) ././@LongLink0000000000000000000000000000015600000000000011217 Lustar 00000000000000barbican-6.0.0/barbican/model/migration/alembic_migrations/versions/795737bb3c3_change_tenants_to_projects.pybarbican-6.0.0/barbican/model/migration/alembic_migrations/versions/795737bb3c3_change_tenants_to_pr0000666000175100017510000000612213245511001033420 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
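A repair migration like 39a96e67e990 above is easy to verify from the database side once it has run. A small check using the SQLAlchemy inspector (the engine URL is illustrative, and this is verification code, not part of any migration):

    import sqlalchemy as sa

    engine = sa.create_engine('sqlite:///barbican.sqlite')
    inspector = sa.inspect(engine)
    constraint_names = [uc['name']
                        for uc in inspector.get_unique_constraints('projects')]
    assert 'uc_projects_external_ids' in constraint_names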
See the # License for the specific language governing permissions and limitations # under the License. """Change tenants to projects Revision ID: 795737bb3c3 Revises: 254495565185 Create Date: 2014-12-09 15:58:35.535032 """ # revision identifiers, used by Alembic. revision = '795737bb3c3' down_revision = '254495565185' from alembic import op import sqlalchemy as sa def _drop_constraint(ctx, con, table, fk_name_to_try): if ctx.dialect.name == 'mysql': # MySQL creates different default names for foreign key constraints op.drop_constraint(fk_name_to_try, table, type_='foreignkey') def _change_fk_to_project(ctx, con, table, fk_old, fk_new): _drop_constraint(ctx, con, table, fk_old) op.alter_column(table, 'tenant_id', type_=sa.String(36), new_column_name='project_id') op.create_foreign_key(fk_new, table, 'projects', ['project_id'], ['id']) def upgrade(): # project_secret table ctx = op.get_context() con = op.get_bind() # ---- Update tenant_secret table to project_secret: _drop_constraint(ctx, con, 'tenant_secret', 'tenant_secret_ibfk_1') _drop_constraint(ctx, con, 'tenant_secret', 'tenant_secret_ibfk_2') op.drop_constraint('_tenant_secret_uc', 'tenant_secret', type_='unique') op.rename_table('tenant_secret', 'project_secret') op.alter_column('project_secret', 'tenant_id', type_=sa.String(36), new_column_name='project_id') op.create_unique_constraint('_project_secret_uc', 'project_secret', ['project_id', 'secret_id']) # ---- Update tenants table to projects: op.rename_table('tenants', 'projects') # re-create the foreign key constraints with explicit names. op.create_foreign_key('project_secret_project_fk', 'project_secret', 'projects', ['project_id'], ['id']) op.create_foreign_key('project_secret_secret_fk', 'project_secret', 'secrets', ['secret_id'], ['id']) # ---- Update containers table: _change_fk_to_project( ctx, con, 'containers', 'containers_ibfk_1', 'containers_project_fk') # ---- Update kek_data table: _change_fk_to_project( ctx, con, 'kek_data', 'kek_data_ibfk_1', 'kek_data_project_fk') # ---- Update orders table: _change_fk_to_project( ctx, con, 'orders', 'orders_ibfk_2', 'orders_project_fk') op.create_foreign_key('orders_ibfk_2', 'orders', 'containers', ['container_id'], ['id']) ././@LongLink0000000000000000000000000000016700000000000011221 Lustar 00000000000000barbican-6.0.0/barbican/model/migration/alembic_migrations/versions/1c0f328bfce0_fixing_composite_primary_keys_and_.pybarbican-6.0.0/barbican/model/migration/alembic_migrations/versions/1c0f328bfce0_fixing_composite_pr0000666000175100017510000001100713245511001033565 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Fixing composite primary keys and adding indexes to foreign key Revision ID: 1c0f328bfce0 Revises: 2d21598e7e70 Create Date: 2015-03-04 17:09:41.479708 """ # revision identifiers, used by Alembic.
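The _drop_constraint helper in 795737bb3c3 above is a dialect guard: only MySQL auto-generated the '<table>_ibfk_N' foreign-key names being dropped, so other backends must skip the drop. The same pattern, condensed into a reusable sketch:

    from alembic import op

    def _drop_constraint_if_mysql(name, table):
        # Only MySQL created these implicit FK constraint names, so the
        # drop is a no-op everywhere else.
        if op.get_context().dialect.name == 'mysql':
            op.drop_constraint(name, table, type_='foreignkey')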
revision = '1c0f328bfce0' down_revision = '2d21598e7e70' from alembic import op import sqlalchemy as sa def _drop_constraint(ctx, name, table): if ctx.dialect.name == 'mysql': # MySQL won't allow some operations with constraints in place op.drop_constraint(name, table, type_='foreignkey') def upgrade(): op.create_index(op.f('ix_certificate_authority_metadata_ca_id'), 'certificate_authority_metadata', ['ca_id'], unique=False) op.create_index(op.f('ix_certificate_authority_metadata_key'), 'certificate_authority_metadata', ['key'], unique=False) op.create_index(op.f('ix_container_consumer_metadata_container_id'), 'container_consumer_metadata', ['container_id'], unique=False) op.create_index(op.f('ix_container_secret_container_id'), 'container_secret', ['container_id'], unique=False) op.create_index(op.f('ix_container_secret_secret_id'), 'container_secret', ['secret_id'], unique=False) op.create_index(op.f('ix_containers_project_id'), 'containers', ['project_id'], unique=False) op.create_index(op.f('ix_encrypted_data_kek_id'), 'encrypted_data', ['kek_id'], unique=False) op.create_index(op.f('ix_encrypted_data_secret_id'), 'encrypted_data', ['secret_id'], unique=False) op.create_index(op.f('ix_kek_data_project_id'), 'kek_data', ['project_id'], unique=False) op.create_index(op.f('ix_order_barbican_metadata_order_id'), 'order_barbican_metadata', ['order_id'], unique=False) op.create_index(op.f('ix_order_plugin_metadata_order_id'), 'order_plugin_metadata', ['order_id'], unique=False) op.create_index(op.f('ix_order_retry_tasks_order_id'), 'order_retry_tasks', ['order_id'], unique=False) op.create_index(op.f('ix_orders_container_id'), 'orders', ['container_id'], unique=False) op.create_index(op.f('ix_orders_project_id'), 'orders', ['project_id'], unique=False) op.create_index(op.f('ix_orders_secret_id'), 'orders', ['secret_id'], unique=False) ctx = op.get_context() _drop_constraint(ctx, 'preferred_certificate_authorities_ibfk_1', 'preferred_certificate_authorities') op.alter_column('preferred_certificate_authorities', 'ca_id', existing_type=sa.VARCHAR(length=36), nullable=False) op.create_foreign_key('preferred_certificate_authorities_fk', 'preferred_certificate_authorities', 'certificate_authorities', ['ca_id'], ['id']) op.create_index(op.f('ix_preferred_certificate_authorities_ca_id'), 'preferred_certificate_authorities', ['ca_id'], unique=False) op.create_index(op.f('ix_preferred_certificate_authorities_project_id'), 'preferred_certificate_authorities', ['project_id'], unique=True) op.create_index(op.f('ix_project_certificate_authorities_ca_id'), 'project_certificate_authorities', ['ca_id'], unique=False) op.create_index(op.f('ix_project_certificate_authorities_project_id'), 'project_certificate_authorities', ['project_id'], unique=False) op.create_index(op.f('ix_project_secret_project_id'), 'project_secret', ['project_id'], unique=False) op.create_index(op.f('ix_project_secret_secret_id'), 'project_secret', ['secret_id'], unique=False) op.create_index(op.f('ix_secret_store_metadata_secret_id'), 'secret_store_metadata', ['secret_id'], unique=False) ././@LongLink0000000000000000000000000000016700000000000011221 Lustar 00000000000000barbican-6.0.0/barbican/model/migration/alembic_migrations/versions/2ab3f5371bde_dsa_in_container_type_modelbase_to.pybarbican-6.0.0/barbican/model/migration/alembic_migrations/versions/2ab3f5371bde_dsa_in_container_ty0000666000175100017510000000331513245511001033536 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may 
# not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """dsa in container type modelbase_to Revision ID: 2ab3f5371bde Revises: 4070806f6972 Create Date: 2014-09-02 12:11:43.524247 """ # revision identifiers, used by Alembic. revision = '2ab3f5371bde' down_revision = '4070806f6972' from alembic import op import sqlalchemy as sa def upgrade(): op.add_column('container_secret', sa.Column('created_at', sa.DateTime(), nullable=False)) op.add_column('container_secret', sa.Column('deleted', sa.Boolean(), nullable=False)) op.add_column('container_secret', sa.Column('deleted_at', sa.DateTime(), nullable=True)) op.add_column('container_secret', sa.Column('id', sa.String(length=36), nullable=False)) op.add_column('container_secret', sa.Column('status', sa.String(length=20), nullable=False)) op.add_column('container_secret', sa.Column('updated_at', sa.DateTime(), nullable=False)) op.create_primary_key('pk_container_secret', 'container_secret', ['id']) op.create_unique_constraint( '_container_secret_name_uc', 'container_secret', ['container_id', 'secret_id', 'name'] ) ././@LongLink0000000000000000000000000000017100000000000011214 Lustar 00000000000000barbican-6.0.0/barbican/model/migration/alembic_migrations/versions/156cd9933643_add_project_column_to_consumer_table.pybarbican-6.0.0/barbican/model/migration/alembic_migrations/versions/156cd9933643_add_project_column_0000666000175100017510000000244113245511001033233 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add project column to consumer table Revision ID: 156cd9933643 Revises: 46b98cde536 Create Date: 2015-08-28 20:53:23.205128 """ # revision identifiers, used by Alembic. revision = '156cd9933643' down_revision = '46b98cde536' from alembic import op import sqlalchemy as sa def upgrade(): op.add_column( 'container_consumer_metadata', sa.Column('project_id', sa.String(length=36), nullable=True)) op.create_index( op.f('ix_container_consumer_metadata_project_id'), 'container_consumer_metadata', ['project_id'], unique=False) op.create_foreign_key( None, 'container_consumer_metadata', 'projects', ['project_id'], ['id']) barbican-6.0.0/barbican/model/migration/alembic_migrations/container_init_ops.py0000666000175100017510000000577013245511001030227 0ustar zuulzuul00000000000000# Copyright 2015 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # # Initial operations for container management # This module creates the 'containers', 'container_consumer_metadata' and # 'container_secret' tables from alembic import op import sqlalchemy as sa def upgrade(): op.create_table( 'containers', sa.Column('id', sa.String(length=36), nullable=False), sa.Column('created_at', sa.DateTime(), nullable=False), sa.Column('updated_at', sa.DateTime(), nullable=False), sa.Column('deleted_at', sa.DateTime(), nullable=True), sa.Column('deleted', sa.Boolean(), nullable=False), sa.Column('status', sa.String(length=20), nullable=False), sa.Column('name', sa.String(length=255), nullable=True), sa.Column('type', sa.Enum('generic', 'rsa', 'dsa', 'certificate', name='container_types'), nullable=True), sa.Column('tenant_id', sa.String(length=36), nullable=False), sa.ForeignKeyConstraint(['tenant_id'], ['tenants.id'],), sa.PrimaryKeyConstraint('id') ) op.create_table( 'container_consumer_metadata', sa.Column('id', sa.String(length=36), nullable=False), sa.Column('created_at', sa.DateTime(), nullable=False), sa.Column('updated_at', sa.DateTime(), nullable=False), sa.Column('deleted_at', sa.DateTime(), nullable=True), sa.Column('deleted', sa.Boolean(), nullable=False), sa.Column('status', sa.String(length=20), nullable=False), sa.Column('name', sa.String(length=255), nullable=True), sa.Column('container_id', sa.String(length=36), nullable=False), sa.Column('URL', sa.String(length=255), nullable=True), sa.Column('data_hash', sa.CHAR(64), nullable=True), sa.ForeignKeyConstraint(['container_id'], ['containers.id'],), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('data_hash', name='_consumer_hashed_container_name_url_uc'), sa.Index('values_index', 'container_id', 'name', 'URL') ) op.create_table( 'container_secret', sa.Column('name', sa.String(length=255), nullable=True), sa.Column('container_id', sa.String(length=36), nullable=False), sa.Column('secret_id', sa.String(length=36), nullable=False), sa.ForeignKeyConstraint(['container_id'], ['containers.id'],), sa.ForeignKeyConstraint(['secret_id'], ['secrets.id'],) ) barbican-6.0.0/barbican/model/migration/alembic_migrations/env.py0000666000175100017510000000553513245511001025130 0ustar zuulzuul00000000000000# Copyright (c) 2013-2014 Rackspace, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from __future__ import with_statement from alembic import context from oslo_db.sqlalchemy import session from barbican.model import models # this is the Alembic Config object, which provides # access to the values within the .ini file in use. # Note that the 'config' instance is not available for unit testing.
try: config = context.config except Exception: config = None # WARNING! The following was autogenerated by Alembic as part of setting up # the initial environment. Unfortunately it also **clobbers** the logging # for the rest of this application, so please do not use it! # Interpret the config file for Python logging. # This line sets up loggers basically. # fileConfig(config.config_file_name) # add your model's MetaData object here # for 'autogenerate' support # from myapp import mymodel # target_metadata = mymodel.Base.metadata target_metadata = models.BASE.metadata # other values from the config, defined by the needs of env.py, # can be acquired: # my_important_option = config.get_main_option("my_important_option") # ... etc. def get_sqlalchemy_url(): return (config.barbican_sqlalchemy_url or config.get_main_option("sqlalchemy.url")) def run_migrations_offline(): """Run migrations in 'offline' mode. This configures the context with just a URL and not an Engine, though an Engine is acceptable here as well. By skipping the Engine creation we don't even need a DBAPI to be available. Calls to context.execute() here emit the given string to the script output. """ context.configure(url=get_sqlalchemy_url()) with context.begin_transaction(): context.run_migrations() def run_migrations_online(): """Run migrations in 'online' mode. In this scenario we need to create an Engine and associate a connection with the context. """ engine = session.create_engine( get_sqlalchemy_url()) connection = engine.connect() context.configure( connection=connection, target_metadata=target_metadata ) try: with context.begin_transaction(): context.run_migrations() finally: connection.close() if config: if context.is_offline_mode(): run_migrations_offline() else: run_migrations_online() barbican-6.0.0/barbican/model/migration/alembic_migrations/order_ops.py0000666000175100017510000000424713245511001026333 0ustar zuulzuul00000000000000# Copyright 2015 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # # Initial operations for agent management extension # This module only manages the 'agents' table. 
Binding tables are created # in the modules for relevant resources from alembic import op import sqlalchemy as sa def upgrade(): op.create_table( 'orders', sa.Column('id', sa.String(length=36), nullable=False), sa.Column('created_at', sa.DateTime(), nullable=False), sa.Column('updated_at', sa.DateTime(), nullable=False), sa.Column('deleted_at', sa.DateTime(), nullable=True), sa.Column('deleted', sa.Boolean(), nullable=False), sa.Column('status', sa.String(length=20), nullable=False), sa.Column('tenant_id', sa.String(length=36), nullable=False), sa.Column('error_status_code', sa.String(length=16), nullable=True), sa.Column('error_reason', sa.String(length=255), nullable=True), sa.Column('secret_id', sa.String(length=36), nullable=True), sa.Column('secret_mode', sa.String(length=255), nullable=True), sa.Column('secret_algorithm', sa.String(length=255), nullable=True), sa.Column('secret_bit_length', sa.String(length=255), nullable=True), sa.Column('secret_expiration', sa.String(length=255), nullable=True), sa.Column('secret_payload_content_type', sa.String(length=255), nullable=True), sa.Column('secret_name', sa.String(length=255), nullable=True), sa.ForeignKeyConstraint(['secret_id'], ['secrets.id'], ), sa.ForeignKeyConstraint(['tenant_id'], ['tenants.id'], ), sa.PrimaryKeyConstraint('id') ) barbican-6.0.0/barbican/model/migration/alembic_migrations/secrets_init_ops.py0000666000175100017510000000402413245511001027704 0ustar zuulzuul00000000000000# Copyright 2015 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # # Initial operations for agent management extension # This module only manages the 'agents' table. 
Binding tables are created # in the modules for relevant resources from alembic import op import sqlalchemy as sa def upgrade(): op.create_table( 'secrets', sa.Column('id', sa.String(length=36), nullable=False), sa.Column('created_at', sa.DateTime(), nullable=False), sa.Column('updated_at', sa.DateTime(), nullable=False), sa.Column('deleted_at', sa.DateTime(), nullable=True), sa.Column('deleted', sa.Boolean(), nullable=False), sa.Column('status', sa.String(length=20), nullable=False), sa.Column('name', sa.String(length=255), nullable=True), sa.Column('expiration', sa.DateTime(), nullable=True), sa.Column('algorithm', sa.String(length=255), nullable=True), sa.Column('bit_length', sa.Integer(), nullable=True), sa.Column('mode', sa.String(length=255), nullable=True), sa.PrimaryKeyConstraint('id') ) op.create_table( 'tenant_secret', sa.Column('tenant_id', sa.String(length=36), nullable=False), sa.Column('secret_id', sa.String(length=36), nullable=False), sa.ForeignKeyConstraint(['tenant_id'], ['tenants.id'],), sa.ForeignKeyConstraint(['secret_id'], ['secrets.id'],), sa.UniqueConstraint('tenant_id', 'secret_id', name='_tenant_secret_uc') ) barbican-6.0.0/barbican/model/migration/alembic_migrations/encrypted_init_ops.py0000666000175100017510000000336713245511001030242 0ustar zuulzuul00000000000000# Copyright 2015 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # # Initial operations for agent management extension # This module only manages the 'agents' table. Binding tables are created # in the modules for relevant resources from alembic import op import sqlalchemy as sa def upgrade(): op.create_table( 'encrypted_data', sa.Column('id', sa.String(length=36), nullable=False), sa.Column('created_at', sa.DateTime(), nullable=False), sa.Column('updated_at', sa.DateTime(), nullable=False), sa.Column('deleted_at', sa.DateTime(), nullable=True), sa.Column('deleted', sa.Boolean(), nullable=False), sa.Column('status', sa.String(length=20), nullable=False), sa.Column('content_type', sa.String(length=255), nullable=True), sa.Column('secret_id', sa.String(length=36), nullable=False), sa.Column('kek_id', sa.String(length=36), nullable=False), sa.Column('cypher_text', sa.Text(), nullable=True), sa.Column('kek_meta_extended', sa.Text(), nullable=True), sa.ForeignKeyConstraint(['secret_id'], ['secrets.id'],), sa.ForeignKeyConstraint(['kek_id'], ['kek_data.id'],), sa.PrimaryKeyConstraint('id') ) barbican-6.0.0/barbican/model/migration/alembic_migrations/transport_keys_init_ops.py0000666000175100017510000000265613245511001031334 0ustar zuulzuul00000000000000# Copyright 2015 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # # Initial operations for agent management extension # This module only manages the 'agents' table. Binding tables are created # in the modules for relevant resources from alembic import op import sqlalchemy as sa def upgrade(): op.create_table( 'transport_keys', sa.Column('id', sa.String(length=36), nullable=False), sa.Column('created_at', sa.DateTime(), nullable=False), sa.Column('updated_at', sa.DateTime(), nullable=False), sa.Column('deleted_at', sa.DateTime(), nullable=True), sa.Column('deleted', sa.Boolean(), nullable=False), sa.Column('status', sa.String(length=20), nullable=False), sa.Column('plugin_name', sa.String(length=255), nullable=False), sa.Column('transport_key', sa.Text(), nullable=True), sa.PrimaryKeyConstraint('id') ) barbican-6.0.0/barbican/model/migration/alembic_migrations/README0000666000175100017510000000004713245511001024637 0ustar zuulzuul00000000000000Generic single-database configuration. barbican-6.0.0/barbican/model/migration/alembic_migrations/__init__.py0000666000175100017510000000000013245511001026055 0ustar zuulzuul00000000000000barbican-6.0.0/barbican/model/migration/alembic_migrations/kek_init_ops.py0000666000175100017510000000366613245511001027021 0ustar zuulzuul00000000000000# Copyright 2015 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # # Initial operations for agent management extension # This module only manages the 'agents' table. 
Binding tables are created # in the modules for relevant resources from alembic import op import sqlalchemy as sa def upgrade(): op.create_table( 'kek_data', sa.Column('id', sa.String(length=36), nullable=False), sa.Column('created_at', sa.DateTime(), nullable=False), sa.Column('updated_at', sa.DateTime(), nullable=False), sa.Column('deleted_at', sa.DateTime(), nullable=True), sa.Column('deleted', sa.Boolean(), nullable=False), sa.Column('status', sa.String(length=20), nullable=False), sa.Column('plugin_name', sa.String(length=255), nullable=False), sa.Column('kek_label', sa.String(length=255), nullable=True), sa.Column('tenant_id', sa.String(length=36), nullable=False), sa.Column('active', sa.Boolean(), nullable=False), sa.Column('bind_completed', sa.Boolean(), nullable=False), sa.Column('algorithm', sa.String(length=255), nullable=True), sa.Column('bit_length', sa.Integer(), nullable=True), sa.Column('mode', sa.String(length=255), nullable=True), sa.Column('plugin_meta', sa.Text(), nullable=True), sa.ForeignKeyConstraint(['tenant_id'], ['tenants.id'],), sa.PrimaryKeyConstraint('id') ) barbican-6.0.0/barbican/model/migration/alembic_migrations/script.py.mako0000666000175100017510000000172313245511001026565 0ustar zuulzuul00000000000000# Copyright ${create_date.year} OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # """${message} Revision ID: ${up_revision} Revises: ${down_revision} Create Date: ${create_date} """ # revision identifiers, used by Alembic. revision = ${repr(up_revision)} down_revision = ${repr(down_revision)} from alembic import op import sqlalchemy as sa ${imports if imports else ""} def upgrade(): ${upgrades if upgrades else "pass"} barbican-6.0.0/barbican/model/migration/alembic_migrations/projects_init_ops.py0000666000175100017510000000255013245511001030067 0ustar zuulzuul00000000000000# Copyright 2015 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # # Initial operations for agent management extension # This module only manages the 'agents' table. 
Binding tables are created # in the modules for relevant resources from alembic import op import sqlalchemy as sa def upgrade(): op.create_table( 'tenants', sa.Column('id', sa.String(length=36), nullable=False), sa.Column('created_at', sa.DateTime(), nullable=False), sa.Column('updated_at', sa.DateTime(), nullable=False), sa.Column('deleted_at', sa.DateTime(), nullable=True), sa.Column('deleted', sa.Boolean(), nullable=False), sa.Column('status', sa.String(length=20), nullable=False), sa.Column('keystone_id', sa.String(length=255), nullable=True), sa.PrimaryKeyConstraint('id') ) barbican-6.0.0/barbican/model/migration/alembic.ini0000666000175100017510000000241113245511001022221 0ustar zuulzuul00000000000000# A generic, single database configuration [alembic] # path to migration scripts script_location = %(here)s/alembic_migrations # template used to generate migration files # file_template = %%(rev)s_%%(slug)s # set to 'true' to run the environment during # the 'revision' command, regardless of autogenerate # revision_environment = false # default to an empty string because the Barbican migration process will # extract the correct value and set it programmatically before alembic is fully # invoked. sqlalchemy.url = #sqlalchemy.url = driver://user:pass@localhost/dbname #sqlalchemy.url = sqlite:///barbican.sqlite #sqlalchemy.url = sqlite:////var/lib/barbican/barbican.sqlite #sqlalchemy.url = postgresql+psycopg2://postgres:postgres@localhost:5432/barbican_api # Logging configuration [loggers] keys = alembic #keys = root,sqlalchemy,alembic [handlers] keys = console [formatters] keys = generic [logger_root] level = DEBUG handlers = console qualname = [logger_sqlalchemy] level = DEBUG handlers = qualname = sqlalchemy.engine [logger_alembic] level = INFO handlers = qualname = alembic [handler_console] class = StreamHandler args = (sys.stderr,) level = NOTSET formatter = generic [formatter_generic] format = %(levelname)-5.5s [%(name)s] %(message)s datefmt = %H:%M:%S barbican-6.0.0/barbican/model/migration/__init__.py0000666000175100017510000000000013245511001022225 0ustar zuulzuul00000000000000barbican-6.0.0/barbican/model/migration/commands.py0000666000175100017510000000547513245511001022314 0ustar zuulzuul00000000000000# Copyright (c) 2013-2014 Rackspace, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """ Interface to the Alembic migration process and environment. Concepts in this file are based on Quantum's Alembic approach. 
Available Alembic commands are detailed here: https://alembic.readthedocs.org/en/latest/api.html#module-alembic.command """ import os from alembic import command as alembic_command from alembic import config as alembic_config from barbican.common import config from barbican.common import utils LOG = utils.getLogger(__name__) CONF = config.CONF def init_config(sql_url=None): """Initialize and return the Alembic configuration.""" sqlalchemy_url = sql_url or CONF.sql_connection if not sqlalchemy_url: raise RuntimeError("Please specify a SQLAlchemy-friendly URL to " "connect to the proper database, either through " "the CLI or the configuration file.") if sqlalchemy_url and 'sqlite' in sqlalchemy_url: LOG.warning('!!! Limited support for migration commands using' ' sqlite databases; This operation may not succeed.') config = alembic_config.Config( os.path.join(os.path.dirname(__file__), 'alembic.ini') ) config.barbican_sqlalchemy_url = sqlalchemy_url config.set_main_option('script_location', 'barbican.model.migration:alembic_migrations') return config def upgrade(to_version='head', sql_url=None): """Upgrade to the specified version.""" alembic_cfg = init_config(sql_url) alembic_command.upgrade(alembic_cfg, to_version) def history(verbose, sql_url=None): alembic_cfg = init_config(sql_url) alembic_command.history(alembic_cfg, verbose=verbose) def current(verbose, sql_url=None): alembic_cfg = init_config(sql_url) alembic_command.current(alembic_cfg, verbose=verbose) def stamp(to_version='head', sql_url=None): """Stamp the specified version, with no migration performed.""" alembic_cfg = init_config(sql_url) alembic_command.stamp(alembic_cfg, to_version) def generate(autogenerate=True, message='generate changes', sql_url=None): """Generate a version file.""" alembic_cfg = init_config(sql_url) alembic_command.revision(alembic_cfg, message=message, autogenerate=autogenerate) barbican-6.0.0/barbican/version.py0000666000175100017510000000135613245511001017101 0ustar zuulzuul00000000000000# Copyright 2010-2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import pbr.version version_info = pbr.version.VersionInfo('barbican') __version__ = version_info.release_string() barbican-6.0.0/barbican/common/0000775000175100017510000000000013245511177016341 5ustar zuulzuul00000000000000barbican-6.0.0/barbican/common/config.py0000666000175100017510000003624713245511001020160 0ustar zuulzuul00000000000000# Copyright (c) 2013-2014 Rackspace, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
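The wrappers in commands.py above are what barbican's own CLI tooling drives; they can also be called directly. A short usage sketch, with an illustrative SQLite URL:

    from barbican.model.migration import commands

    # Upgrade the schema to the newest revision, then report where we are.
    commands.upgrade(to_version='head', sql_url='sqlite:///barbican.sqlite')
    commands.current(verbose=True, sql_url='sqlite:///barbican.sqlite')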
""" Configuration setup for Barbican. """ import logging import os from oslo_config import cfg from oslo_log import log from oslo_middleware import cors from oslo_service import _options from barbican import i18n as u import barbican.version MAX_BYTES_REQUEST_INPUT_ACCEPTED = 15000 DEFAULT_MAX_SECRET_BYTES = 10000 KS_NOTIFICATIONS_GRP_NAME = 'keystone_notifications' context_opts = [ cfg.StrOpt('admin_role', default='admin', help=u._('Role used to identify an authenticated user as ' 'administrator.')), cfg.BoolOpt('allow_anonymous_access', default=False, help=u._('Allow unauthenticated users to access the API with ' 'read-only privileges. This only applies when using ' 'ContextMiddleware.')), ] common_opts = [ cfg.IntOpt('max_allowed_request_size_in_bytes', default=MAX_BYTES_REQUEST_INPUT_ACCEPTED, help=u._("Maximum allowed http request size against the " "barbican-api.")), cfg.IntOpt('max_allowed_secret_in_bytes', default=DEFAULT_MAX_SECRET_BYTES, help=u._("Maximum allowed secret size in bytes.")), ] host_opts = [ cfg.StrOpt('host_href', default='http://localhost:9311', help=u._("Host name, for use in HATEOAS-style references Note: " "Typically this would be the load balanced endpoint " "that clients would use to communicate back with this " "service. If a deployment wants to derive host from " "wsgi request instead then make this blank. Blank is " "needed to override default config value which is " "'http://localhost:9311'")), ] db_opts = [ cfg.StrOpt('sql_connection', default="sqlite:///barbican.sqlite", secret=True, help=u._("SQLAlchemy connection string for the reference " "implementation registry server. Any valid " "SQLAlchemy connection string is fine. See: " "http://www.sqlalchemy.org/docs/05/reference/" "sqlalchemy/connections.html#sqlalchemy." "create_engine. Note: For absolute addresses, use " "'////' slashes after 'sqlite:'.")), cfg.IntOpt('sql_idle_timeout', default=3600, help=u._("Period in seconds after which SQLAlchemy should " "reestablish its connection to the database. MySQL " "uses a default `wait_timeout` of 8 hours, after " "which it will drop idle connections. This can result " "in 'MySQL Gone Away' exceptions. If you notice this, " "you can lower this value to ensure that SQLAlchemy " "reconnects before MySQL can drop the connection.")), cfg.IntOpt('sql_max_retries', default=60, help=u._("Maximum number of database connection retries " "during startup. Set to -1 to specify an infinite " "retry count.")), cfg.IntOpt('sql_retry_interval', default=1, help=u._("Interval between retries of opening a SQL " "connection.")), cfg.BoolOpt('db_auto_create', default=True, help=u._("Create the Barbican database on service startup.")), cfg.IntOpt('max_limit_paging', default=100, help=u._("Maximum page size for the 'limit' paging URL " "parameter.")), cfg.IntOpt('default_limit_paging', default=10, help=u._("Default page size for the 'limit' paging URL " "parameter.")), cfg.StrOpt('sql_pool_class', default="QueuePool", help=u._("Accepts a class imported from the sqlalchemy.pool " "module, and handles the details of building the " "pool for you. If commented out, SQLAlchemy will " "select based on the database dialect. Other options " "are QueuePool (for SQLAlchemy-managed connections) " "and NullPool (to disabled SQLAlchemy management of " "connections). 
See http://docs.sqlalchemy.org/en/" "latest/core/pooling.html for more details")), cfg.BoolOpt('sql_pool_logging', default=False, help=u._("Show SQLAlchemy pool-related debugging output in " "logs (sets DEBUG log level output) if specified.")), cfg.IntOpt('sql_pool_size', default=5, help=u._("Size of pool used by SQLAlchemy. This is the largest " "number of connections that will be kept persistently " "in the pool. Can be set to 0 to indicate no size " "limit. To disable pooling, use a NullPool with " "sql_pool_class instead. Comment out to allow " "SQLAlchemy to select the default.")), cfg.IntOpt('sql_pool_max_overflow', default=10, help=u._("The maximum overflow size of the pool used by " "SQLAlchemy. When the number of checked-out " "connections reaches the size set in sql_pool_size, " "additional connections will be returned up to this " "limit. It follows that the total number of " "simultaneous connections the pool will allow is " "sql_pool_size + sql_pool_max_overflow. Can be set " "to -1 to indicate no overflow limit, so no limit " "will be placed on the total number of concurrent " "connections. Comment out to allow SQLAlchemy to " "select the default.")), ] retry_opt_group = cfg.OptGroup(name='retry_scheduler', title='Retry/Scheduler Options') retry_opts = [ cfg.FloatOpt( 'initial_delay_seconds', default=10.0, help=u._('Seconds (float) to wait before starting retry scheduler')), cfg.FloatOpt( 'periodic_interval_max_seconds', default=10.0, help=u._('Seconds (float) to wait between periodic schedule events')), ] queue_opt_group = cfg.OptGroup(name='queue', title='Queue Application Options') queue_opts = [ cfg.BoolOpt('enable', default=False, help=u._('True enables queuing, False invokes ' 'workers synchronously')), cfg.StrOpt('namespace', default='barbican', help=u._('Queue namespace')), cfg.StrOpt('topic', default='barbican.workers', help=u._('Queue topic name')), cfg.StrOpt('version', default='1.1', help=u._('Version of tasks invoked via queue')), cfg.StrOpt('server_name', default='barbican.queue', help=u._('Server name for RPC task processing server')), cfg.IntOpt('asynchronous_workers', default=1, help=u._('Number of asynchronous worker processes')), ] ks_queue_opt_group = cfg.OptGroup(name=KS_NOTIFICATIONS_GRP_NAME, title='Keystone Notification Options') ks_queue_opts = [ cfg.BoolOpt('enable', default=False, help=u._('True enables keystone notification listener ' 'functionality.')), cfg.StrOpt('control_exchange', default='openstack', help=u._('The default exchange under which topics are scoped. ' 'May be overridden by an exchange name specified in ' 'the transport_url option.')), cfg.StrOpt('topic', default='notifications', help=u._("Keystone notification queue topic name. This name " "needs to match one of values mentioned in Keystone " "deployment's 'notification_topics' configuration " "e.g." " notification_topics=notifications, " "barbican_notifications. " "Multiple servers may listen on a topic and messages " "will be dispatched to one of the servers in a " "round-robin fashion. That's why Barbican service " "should have its own dedicated notification queue so " "that it receives all of Keystone notifications.")), cfg.BoolOpt('allow_requeue', default=False, help=u._('True enables requeue feature in case of notification' ' processing error.
Enable this only when underlying ' 'transport supports this feature.')), cfg.StrOpt('version', default='1.0', help=u._('Version of tasks invoked via notifications')), cfg.IntOpt('thread_pool_size', default=10, help=u._('Define the number of max threads to be used for ' 'notification server processing functionality.')), ] quota_opt_group = cfg.OptGroup(name='quotas', title='Quota Options') quota_opts = [ cfg.IntOpt('quota_secrets', default=-1, help=u._('Number of secrets allowed per project')), cfg.IntOpt('quota_orders', default=-1, help=u._('Number of orders allowed per project')), cfg.IntOpt('quota_containers', default=-1, help=u._('Number of containers allowed per project')), cfg.IntOpt('quota_consumers', default=-1, help=u._('Number of consumers allowed per project')), cfg.IntOpt('quota_cas', default=-1, help=u._('Number of CAs allowed per project')) ] def list_opts(): yield None, context_opts yield None, common_opts yield None, host_opts yield None, db_opts yield None, _options.eventlet_backdoor_opts yield retry_opt_group, retry_opts yield queue_opt_group, queue_opts yield ks_queue_opt_group, ks_queue_opts yield quota_opt_group, quota_opts # Flag to indicate barbican configuration is already parsed once or not _CONFIG_PARSED_ONCE = False def parse_args(conf, args=None, usage=None, default_config_files=None): global _CONFIG_PARSED_ONCE conf(args=args if args else [], project='barbican', prog='barbican', version=barbican.version.__version__, usage=usage, default_config_files=default_config_files) conf.pydev_debug_host = os.environ.get('PYDEV_DEBUG_HOST') conf.pydev_debug_port = os.environ.get('PYDEV_DEBUG_PORT') # Assign cfg.CONF handle to parsed barbican configuration once at startup # only. No need to keep re-assigning it with separate plugin conf usage if not _CONFIG_PARSED_ONCE: cfg.CONF = conf _CONFIG_PARSED_ONCE = True def new_config(): conf = cfg.ConfigOpts() log.register_options(conf) conf.register_opts(context_opts) conf.register_opts(common_opts) conf.register_opts(host_opts) conf.register_opts(db_opts) conf.register_opts(_options.eventlet_backdoor_opts) conf.register_opts(_options.periodic_opts) conf.register_opts(_options.ssl_opts, "ssl") conf.register_group(retry_opt_group) conf.register_opts(retry_opts, group=retry_opt_group) conf.register_group(queue_opt_group) conf.register_opts(queue_opts, group=queue_opt_group) conf.register_group(ks_queue_opt_group) conf.register_opts(ks_queue_opts, group=ks_queue_opt_group) conf.register_group(quota_opt_group) conf.register_opts(quota_opts, group=quota_opt_group) # Update default values from libraries that carry their own oslo.config # initialization and configuration. 
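# set_middleware_defaults() below narrows oslo.middleware's CORS defaults # (the allowed/exposed headers and the HTTP methods) so that every # ConfigOpts instance built here starts from the same CORS baseline.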
set_middleware_defaults() return conf def setup_remote_pydev_debug(): """Required setup for remote debugging.""" if CONF.pydev_debug_host and CONF.pydev_debug_port: try: try: from pydev import pydevd except ImportError: import pydevd pydevd.settrace(CONF.pydev_debug_host, port=int(CONF.pydev_debug_port), stdoutToServer=True, stderrToServer=True) except Exception: LOG.exception('Unable to join debugger, please ' 'make sure that the debugger processes is ' 'listening on debug-host \'%(debug-host)s\' ' 'debug-port \'%(debug-port)s\'.', {'debug-host': CONF.pydev_debug_host, 'debug-port': CONF.pydev_debug_port}) raise def set_middleware_defaults(): """Update default configuration options for oslo.middleware.""" cors.set_defaults( allow_headers=['X-Auth-Token', 'X-Openstack-Request-Id', 'X-Project-Id', 'X-Identity-Status', 'X-User-Id', 'X-Storage-Token', 'X-Domain-Id', 'X-User-Domain-Id', 'X-Project-Domain-Id', 'X-Roles'], expose_headers=['X-Auth-Token', 'X-Openstack-Request-Id', 'X-Project-Id', 'X-Identity-Status', 'X-User-Id', 'X-Storage-Token', 'X-Domain-Id', 'X-User-Domain-Id', 'X-Project-Domain-Id', 'X-Roles'], allow_methods=['GET', 'PUT', 'POST', 'DELETE', 'PATCH'] ) CONF = new_config() LOG = logging.getLogger(__name__) parse_args(CONF) # Adding global scope dict for all different configs created in various # modules. In barbican, each plugin module creates its own *new* config # instance so its error prone to share/access config values across modules # as these module imports introduce a cyclic dependency. To avoid this, each # plugin can set this dict after its own config instance is created and parsed. _CONFIGS = {} def set_module_config(name, module_conf): """Each plugin can set its own conf instance with its group name.""" _CONFIGS[name] = module_conf def get_module_config(name): """Get handle to plugin specific config instance by its group name.""" return _CONFIGS[name] barbican-6.0.0/barbican/common/policy.py0000666000175100017510000000376613245511001020212 0ustar zuulzuul00000000000000# Copyright 2011-2012 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from oslo_policy import policy from barbican.common import config from barbican.common import policies CONF = config.CONF ENFORCER = None # oslo_policy will read the policy configuration file again when the file # is changed in runtime so the old policy rules will be saved to # saved_file_rules and used to compare with new rules to determine the # rules whether were updated. saved_file_rules = [] def reset(): global ENFORCER if ENFORCER: ENFORCER.clear() ENFORCER = None def init(): global ENFORCER global saved_file_rules if not ENFORCER: ENFORCER = policy.Enforcer(CONF) register_rules(ENFORCER) ENFORCER.load_rules() # Only the rules which are loaded from file may be changed. 
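# ENFORCER.file_rules contains only the rules read from the policy file, # not the in-code defaults registered above, so comparing serialized # snapshots across calls detects runtime edits to that file.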
current_file_rules = ENFORCER.file_rules current_file_rules = _serialize_rules(current_file_rules) # Checks whether the rules are updated in the runtime if saved_file_rules != current_file_rules: saved_file_rules = copy.deepcopy(current_file_rules) def _serialize_rules(rules): """Serialize all the Rule object as string.""" result = [(rule_name, str(rule)) for rule_name, rule in rules.items()] return sorted(result, key=lambda rule: rule[0]) def register_rules(enforcer): enforcer.register_defaults(policies.list_rules()) def get_enforcer(): init() return ENFORCER barbican-6.0.0/barbican/common/resources.py0000666000175100017510000000367513245511001020724 0ustar zuulzuul00000000000000# Copyright (c) 2013-2014 Rackspace, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """ Shared business logic. """ from barbican.common import exception from barbican.common import utils from barbican.model import models from barbican.model import repositories LOG = utils.getLogger(__name__) GLOBAL_PREFERRED_PROJECT_ID = "GLOBAL_PREFERRED" def get_or_create_global_preferred_project(): return get_or_create_project(GLOBAL_PREFERRED_PROJECT_ID) def get_or_create_project(project_id): """Returns project with matching project_id. Creates it if it does not exist. :param project_id: The external-to-Barbican ID for this project. :param project_repo: Project repository. :return: Project model instance """ project_repo = repositories.get_project_repository() project = project_repo.find_by_external_project_id(project_id, suppress_exception=True) if not project: LOG.debug('Creating project for %s', project_id) project = models.Project() project.external_id = project_id project.status = models.States.ACTIVE try: project_repo.create_from(project) except exception.ConstraintCheck: # catch race condition for when another thread just created one project = project_repo.find_by_external_project_id( project_id, suppress_exception=False) return project barbican-6.0.0/barbican/common/validators.py0000666000175100017510000011251413245511001021053 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ API JSON validators. 
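Each validator checks a parsed JSON request body against a jsonschema document and normalizes or defaults fields before the data reaches the model layer.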
""" import abc import base64 import re import jsonschema as schema from ldap3.core import exceptions as ldap_exceptions from ldap3.utils.dn import parse_dn from OpenSSL import crypto from oslo_utils import timeutils import six from barbican.api import controllers from barbican.common import config from barbican.common import exception from barbican.common import hrefs from barbican.common import utils from barbican import i18n as u from barbican.model import models from barbican.model import repositories as repo from barbican.plugin.interface import secret_store from barbican.plugin.util import mime_types DEFAULT_MAX_SECRET_BYTES = config.DEFAULT_MAX_SECRET_BYTES LOG = utils.getLogger(__name__) CONF = config.CONF MYSQL_SMALL_INT_MAX = 32767 ACL_OPERATIONS = ['read', 'write', 'delete', 'list'] def secret_too_big(data): if isinstance(data, six.text_type): return len(data.encode('UTF-8')) > CONF.max_allowed_secret_in_bytes else: return len(data) > CONF.max_allowed_secret_in_bytes def get_invalid_property(validation_error): # we are interested in the second item which is the failed propertyName. if validation_error.schema_path and len(validation_error.schema_path) > 1: return validation_error.schema_path[1] def validate_ca_id(project_id, order_meta): ca_id = order_meta.get('ca_id') if not ca_id: return ca_repo = repo.get_ca_repository() ca = ca_repo.get(ca_id, suppress_exception=True) if not ca: raise exception.InvalidCAID(ca_id=ca_id) if ca.project_id and ca.project_id != project_id: raise exception.UnauthorizedSubCA() project_ca_repo = repo.get_project_ca_repository() project_cas, offset, limit, total = project_ca_repo.get_by_create_date( project_id=project_id, suppress_exception=True ) if total < 1: return for project_ca in project_cas: if ca.id == project_ca.ca_id: return raise exception.CANotDefinedForProject( ca_id=ca_id, project_id=project_id) def validate_stored_key_rsa_container(project_id, container_ref, req): try: container_id = hrefs.get_container_id_from_ref(container_ref) except Exception: reason = u._("Bad Container Reference {ref}").format( ref=container_ref ) raise exception.InvalidContainer(reason=reason) container_repo = repo.get_container_repository() container = container_repo.get_container_by_id(entity_id=container_id, suppress_exception=True) if not container: reason = u._("Container Not Found") raise exception.InvalidContainer(reason=reason) if container.type != 'rsa': reason = u._("Container Wrong Type") raise exception.InvalidContainer(reason=reason) ctxt = controllers._get_barbican_context(req) inst = controllers.containers.ContainerController(container) controllers._do_enforce_rbac(inst, req, controllers.containers.CONTAINER_GET, ctxt) @six.add_metaclass(abc.ABCMeta) class ValidatorBase(object): """Base class for validators.""" name = '' @abc.abstractmethod def validate(self, json_data, parent_schema=None): """Validate the input JSON. :param json_data: JSON to validate against this class' internal schema. :param parent_schema: Name of the parent schema to this schema. :returns: dict -- JSON content, post-validation and : normalization/defaulting. :raises: schema.ValidationError on schema violations. """ def _full_name(self, parent_schema=None): """Validator schema name accessor Returns the full schema name for this validator, including parent name. 
""" schema_name = self.name if parent_schema: schema_name = u._( "{schema_name}' within '{parent_schema_name}").format( schema_name=self.name, parent_schema_name=parent_schema) return schema_name def _assert_schema_is_valid(self, json_data, schema_name): """Assert that the JSON structure is valid for the given schema. :raises: InvalidObject exception if the data is not schema compliant. """ try: schema.validate(json_data, self.schema) except schema.ValidationError as e: raise exception.InvalidObject(schema=schema_name, reason=e.message, property=get_invalid_property(e)) def _assert_validity(self, valid_condition, schema_name, message, property): """Assert that a certain condition is met. :raises: InvalidObject exception if the condition is not met. """ if not valid_condition: raise exception.InvalidObject(schema=schema_name, reason=message, property=property) class NewSecretValidator(ValidatorBase): """Validate a new secret.""" def __init__(self): self.name = 'Secret' # TODO(jfwood): Get the list of mime_types from the crypto plugins? self.schema = { "type": "object", "properties": { "name": {"type": ["string", "null"], "maxLength": 255}, "algorithm": {"type": "string", "maxLength": 255}, "mode": {"type": "string", "maxLength": 255}, "bit_length": { "type": "integer", "minimum": 1, "maximum": MYSQL_SMALL_INT_MAX }, "expiration": {"type": "string", "maxLength": 255}, "payload": {"type": "string"}, "secret_type": { "type": "string", "maxLength": 80, "enum": [secret_store.SecretType.SYMMETRIC, secret_store.SecretType.PASSPHRASE, secret_store.SecretType.PRIVATE, secret_store.SecretType.PUBLIC, secret_store.SecretType.CERTIFICATE, secret_store.SecretType.OPAQUE] }, "payload_content_type": { "type": ["string", "null"], "maxLength": 255 }, "payload_content_encoding": { "type": "string", "maxLength": 255, "enum": [ "base64" ] }, "transport_key_needed": { "type": "string", "enum": ["true", "false"] }, "transport_key_id": {"type": "string"}, }, } def validate(self, json_data, parent_schema=None): """Validate the input JSON for the schema for secrets.""" schema_name = self._full_name(parent_schema) self._assert_schema_is_valid(json_data, schema_name) json_data['name'] = self._extract_name(json_data) expiration = self._extract_expiration(json_data, schema_name) self._assert_expiration_is_valid(expiration, schema_name) json_data['expiration'] = expiration content_type = json_data.get('payload_content_type') if 'payload' in json_data: content_encoding = json_data.get('payload_content_encoding') self._validate_content_parameters(content_type, content_encoding, schema_name) payload = self._extract_payload(json_data) self._assert_validity(payload, schema_name, u._("If 'payload' specified, must be non " "empty"), "payload") self._validate_payload_by_content_encoding(content_encoding, payload, schema_name) json_data['payload'] = payload elif 'payload_content_type' in json_data: # parent_schema would be populated if it comes from an order. 
self._assert_validity(parent_schema is not None, schema_name, u._("payload must be provided when " "payload_content_type is specified"), "payload") if content_type: self._assert_validity( mime_types.is_supported(content_type), schema_name, u._("payload_content_type is not one of {supported}" ).format(supported=mime_types.SUPPORTED), "payload_content_type") return json_data def _extract_name(self, json_data): """Extracts and returns the name from the JSON data.""" name = json_data.get('name') if isinstance(name, six.string_types): return name.strip() return None def _extract_expiration(self, json_data, schema_name): """Extracts and returns the expiration date from the JSON data.""" expiration = None expiration_raw = json_data.get('expiration') if expiration_raw and expiration_raw.strip(): try: expiration_tz = timeutils.parse_isotime(expiration_raw.strip()) expiration = timeutils.normalize_time(expiration_tz) except ValueError: LOG.exception("Problem parsing expiration date") raise exception.InvalidObject( schema=schema_name, reason=u._("Invalid date for 'expiration'"), property="expiration") return expiration def _assert_expiration_is_valid(self, expiration, schema_name): """Asserts that the given expiration date is valid. Expiration dates must be in the future, not the past. """ if expiration: # Verify not already expired. utcnow = timeutils.utcnow() self._assert_validity(expiration > utcnow, schema_name, u._("'expiration' is before current time"), "expiration") def _validate_content_parameters(self, content_type, content_encoding, schema_name): """Content parameter validator. Check that the content_type, content_encoding and the parameters that they affect are valid. """ self._assert_validity( content_type is not None, schema_name, u._("If 'payload' is supplied, 'payload_content_type' must also " "be supplied."), "payload_content_type") self._assert_validity( mime_types.is_supported(content_type), schema_name, u._("payload_content_type is not one of {supported}" ).format(supported=mime_types.SUPPORTED), "payload_content_type") self._assert_validity( mime_types.is_content_type_with_encoding_supported( content_type, content_encoding), schema_name, u._("payload_content_encoding is not one of {supported}").format( supported=mime_types.get_supported_encodings(content_type)), "payload_content_encoding") def _validate_payload_by_content_encoding(self, payload_content_encoding, payload, schema_name): if payload_content_encoding == 'base64': try: base64.b64decode(payload) except Exception: LOG.exception("Problem parsing payload") raise exception.InvalidObject( schema=schema_name, reason=u._("Invalid payload for payload_content_encoding"), property="payload") def _extract_payload(self, json_data): """Extracts and returns the payload from the JSON data. 
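
        The payload size is checked against the configured
        max_allowed_secret_in_bytes limit before surrounding whitespace
        is stripped.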
        :raises: LimitExceeded if the payload is too big
        """
        payload = json_data.get('payload', '')
        if secret_too_big(payload):
            raise exception.LimitExceeded()

        return payload.strip()


class NewSecretMetadataValidator(ValidatorBase):
    """Validate new secret metadata."""

    def __init__(self):
        self.name = 'SecretMetadata'
        self.schema = {
            "type": "object",
            "$schema": "http://json-schema.org/draft-03/schema",
            "properties": {
                "metadata": {"type": "object", "required": True},
            }
        }

    def validate(self, json_data, parent_schema=None):
        """Validate the input JSON for the schema for secret metadata."""
        schema_name = self._full_name(parent_schema)
        self._assert_schema_is_valid(json_data, schema_name)
        return self._extract_metadata(json_data)

    def _extract_metadata(self, json_data):
        """Extracts and returns the metadata from the JSON data."""
        metadata = json_data['metadata']

        # Iterate over a snapshot of the keys, since lowercasing a key
        # mutates the dict and would otherwise break the iteration.
        for key in list(metadata):
            # make sure key is a string and url-safe.
            if not isinstance(key, six.string_types):
                raise exception.InvalidMetadataRequest()
            self._check_string_url_safe(key)

            # make sure value is a string.
            value = metadata[key]
            if not isinstance(value, six.string_types):
                raise exception.InvalidMetadataRequest()

            # If key is not lowercase, then change it
            if not key.islower():
                del metadata[key]
                metadata[key.lower()] = value

        return metadata

    def _check_string_url_safe(self, string):
        """Checks if string can be part of a URL."""
        if not re.match("^[A-Za-z0-9_-]*$", string):
            raise exception.InvalidMetadataKey()


class NewSecretMetadatumValidator(ValidatorBase):
    """Validate new secret metadatum."""

    def __init__(self):
        self.name = 'SecretMetadatum'
        self.schema = {
            "type": "object",
            "$schema": "http://json-schema.org/draft-03/schema",
            "properties": {
                "key": {
                    "type": "string",
                    "maxLength": 255,
                    "required": True
                },
                "value": {
                    "type": "string",
                    "maxLength": 255,
                    "required": True
                },
            },
            "additionalProperties": False
        }

    def validate(self, json_data, parent_schema=None):
        """Validate the input JSON for the schema for secret metadata."""
        schema_name = self._full_name(parent_schema)
        self._assert_schema_is_valid(json_data, schema_name)
        key = self._extract_key(json_data)
        value = self._extract_value(json_data)
        return {"key": key, "value": value}

    def _extract_key(self, json_data):
        """Extracts and returns the key from the JSON data."""
        key = json_data['key']
        self._check_string_url_safe(key)
        key = key.lower()
        return key

    def _extract_value(self, json_data):
        """Extracts and returns the value from the JSON data."""
        value = json_data['value']
        return value

    def _check_string_url_safe(self, string):
        """Checks if string can be part of a URL."""
        if not re.match("^[A-Za-z0-9_-]*$", string):
            raise exception.InvalidMetadataKey()


class CACommonHelpersMixin(object):
    def _validate_subject_dn_data(self, subject_dn):
        """Confirm that the subject_dn contains valid data

        Validate that the subject_dn string parses without error.
        If not, raise InvalidSubjectDN.
        """
        try:
            parse_dn(subject_dn)
        except ldap_exceptions.LDAPInvalidDnError:
            raise exception.InvalidSubjectDN(subject_dn=subject_dn)


# TODO(atiwari) - Split this validator module and unit tests
# into smaller modules
class TypeOrderValidator(ValidatorBase, CACommonHelpersMixin):
    """Validate a new typed order."""

    def __init__(self):
        self.name = 'Order'
        self.schema = {
            "type": "object",
            "$schema": "http://json-schema.org/draft-03/schema",
            "properties": {
                "meta": {
                    "type": "object",
                    "required": True
                },
                "type": {
                    "type": "string",
                    "required": True,
                    "enum": ['key', 'asymmetric', 'certificate']
                }
            }
        }

    def validate(self, json_data,
                 parent_schema=None):
        schema_name = self._full_name(parent_schema)
        self._assert_schema_is_valid(json_data, schema_name)

        order_type = json_data.get('type').lower()
        if order_type == models.OrderType.CERTIFICATE:
            certificate_meta = json_data.get('meta')
            self._validate_certificate_meta(certificate_meta, schema_name)
        elif order_type == models.OrderType.ASYMMETRIC:
            asymmetric_meta = json_data.get('meta')
            self._validate_asymmetric_meta(asymmetric_meta, schema_name)
        elif order_type == models.OrderType.KEY:
            key_meta = json_data.get('meta')
            self._validate_key_meta(key_meta, schema_name)
        else:
            self._raise_feature_not_implemented(order_type, schema_name)

        return json_data

    def _validate_key_meta(self, key_meta, schema_name):
        """Validation specific to meta for key type order."""

        secret_validator = NewSecretValidator()
        secret_validator.validate(key_meta, parent_schema=self.name)

        self._assert_validity(key_meta.get('payload') is None,
                              schema_name,
                              u._("'payload' not allowed "
                                  "for key type order"), "meta")

        # Validate secret-generation-related fields.
        # TODO(jfwood): Invoke the crypto plugin for this purpose

        self._validate_meta_parameters(key_meta, "key", schema_name)

    def _validate_asymmetric_meta(self, asymmetric_meta, schema_name):
        """Validation specific to meta for asymmetric type order."""

        # Validate secret metadata.
        secret_validator = NewSecretValidator()
        secret_validator.validate(asymmetric_meta, parent_schema=self.name)

        self._assert_validity(asymmetric_meta.get('payload') is None,
                              schema_name,
                              u._("'payload' not allowed "
                                  "for asymmetric type order"), "meta")

        self._validate_meta_parameters(asymmetric_meta, "asymmetric key",
                                       schema_name)

    def _get_required_metadata_value(self, metadata, key):
        data = metadata.get(key, None)
        if data is None:
            raise exception.MissingMetadataField(required=key)
        return data

    def _validate_certificate_meta(self, certificate_meta, schema_name):
        """Validation specific to meta for certificate type order."""

        self._assert_validity(certificate_meta.get('payload') is None,
                              schema_name,
                              u._("'payload' not allowed "
                                  "for certificate type order"), "meta")

        if 'profile' in certificate_meta:
            if 'ca_id' not in certificate_meta:
                raise exception.MissingMetadataField(required='ca_id')

        jump_table = {
            'simple-cmc': self._validate_simple_cmc_request,
            'full-cmc': self._validate_full_cmc_request,
            'stored-key': self._validate_stored_key_request,
            'custom': self._validate_custom_request
        }

        request_type = certificate_meta.get("request_type", "custom")
        if request_type not in jump_table:
            raise exception.InvalidCertificateRequestType(request_type)

        jump_table[request_type](certificate_meta)

    def _validate_simple_cmc_request(self, certificate_meta):
        """Validates simple CMC requests (which are PKCS#10 requests)."""
        request_data = self._get_required_metadata_value(
            certificate_meta, "request_data")
        self._validate_pkcs10_data(request_data)

    def _validate_full_cmc_request(self, certificate_meta):
        """Validate full CMC request.

        :param certificate_meta: request data from the order
        :raises: FullCMCNotSupported
        """
        raise exception.FullCMCNotSupported()

    def _validate_stored_key_request(self, certificate_meta):
        """Validate stored-key cert request."""
        self._get_required_metadata_value(
            certificate_meta, "container_ref")
        subject_dn = self._get_required_metadata_value(
            certificate_meta, "subject_dn")
        self._validate_subject_dn_data(subject_dn)
        # container will be validated by validate_stored_key_rsa_container()

        extensions = certificate_meta.get("extensions", None)
        if extensions:
            self._validate_extensions_data(extensions)

    def _validate_custom_request(self, certificate_meta):
        """Validate custom data request.

        We cannot do any validation here because the request parameters
        are custom. Validation will be done by the plugin. We may choose
        to select the relevant plugin and call the supports() method to
        raise validation errors.
        """
        pass

    def _validate_pkcs10_data(self, request_data):
        """Confirm that the request_data is valid base64 encoded PKCS#10.

        Base64 decode the request; if that fails, raise
        PayloadDecodingError. Then parse the data into the ASN.1 structure
        defined by PKCS#10 and verify the signing information. If parsing
        or verification fails, raise InvalidPKCS10Data.
        """
        try:
            csr_pem = base64.b64decode(request_data)
        except Exception:
            raise exception.PayloadDecodingError()
        try:
            csr = crypto.load_certificate_request(crypto.FILETYPE_PEM,
                                                  csr_pem)
        except Exception:
            reason = u._("Bad format")
            raise exception.InvalidPKCS10Data(reason=reason)
        try:
            pubkey = csr.get_pubkey()
            csr.verify(pubkey)
        except Exception:
            reason = u._("Signing key incorrect")
            raise exception.InvalidPKCS10Data(reason=reason)

    def _validate_full_cmc_data(self, request_data):
        """Confirm that request_data is valid Full CMC data."""
        # TODO(alee-3) complete this function:
        # Parse the data into the ASN.1 structure defined for full CMC.
        # If parsing fails, raise InvalidCMCData.
        pass

    def _validate_extensions_data(self, extensions):
        """Confirm that the extensions data is valid.

        :param extensions: base 64 encoded ASN.1 string of extension data
        :raises: CertificateExtensionsNotSupported
        """
        # TODO(alee-3) complete this function:
        # Parse the extensions data into the correct ASN.1 structure. If
        # the parsing fails, raise InvalidExtensionsData. For now, fail
        # this validation because extensions parsing is not supported.
        raise exception.CertificateExtensionsNotSupported()

    def _validate_meta_parameters(self, meta, order_type, schema_name):
        self._assert_validity(meta.get('algorithm'),
                              schema_name,
                              u._("'algorithm' is a required field "
                                  "for {0} type order").format(order_type),
                              "meta")

        self._assert_validity(meta.get('bit_length'),
                              schema_name,
                              u._("'bit_length' is a required field "
                                  "for {0} type order").format(order_type),
                              "meta")

        self._validate_bit_length(meta, schema_name)

    def _extract_expiration(self, json_data, schema_name):
        """Extracts and returns the expiration date from the JSON data."""
        expiration = None
        expiration_raw = json_data.get('expiration', None)
        if expiration_raw and expiration_raw.strip():
            try:
                expiration_tz = timeutils.parse_isotime(expiration_raw)
                expiration = timeutils.normalize_time(expiration_tz)
            except ValueError:
                LOG.exception("Problem parsing expiration date")
                raise exception.InvalidObject(schema=schema_name,
                                              reason=u._("Invalid date "
                                                         "for 'expiration'"),
                                              property="expiration")

        return expiration

    def _validate_bit_length(self, meta, schema_name):
        bit_length = int(meta.get('bit_length'))
        if bit_length % 8 != 0:
            raise exception.UnsupportedField(field="bit_length",
                                             schema=schema_name,
                                             reason=u._("Must be a"
                                                        " positive integer"
                                                        " that is a"
                                                        " multiple of 8"))

    def _raise_feature_not_implemented(self, order_type, schema_name):
        raise exception.FeatureNotImplemented(field='type',
                                              schema=schema_name,
                                              reason=u._("Feature not "
                                                         "implemented for "
                                                         "'{0}' order type")
                                              .format(order_type))


class ACLValidator(ValidatorBase):
    """Validate ACL(s)."""

    def __init__(self):
        self.name = 'ACL'
        self.schema = {
            "$schema": "http://json-schema.org/draft-04/schema#",
            "definitions": {
                "acl_definition": {
                    "type": "object",
                    "properties": {
                        "users": {
                            "type": "array",
                            "items": [
                                {"type": "string", "maxLength": 255}
                            ]
                        },
                        "project-access": {"type": "boolean"}
                    },
                    "additionalProperties": False
                }
            },
            "type": "object",
            "properties": {
                "read": {"$ref": "#/definitions/acl_definition"},
            },
            "additionalProperties": False
        }

    def validate(self, json_data, parent_schema=None):
        schema_name = self._full_name(parent_schema)
        self._assert_schema_is_valid(json_data, schema_name)
        return json_data


class ContainerConsumerValidator(ValidatorBase):
    """Validate a Consumer."""

    def __init__(self):
        self.name = 'Consumer'
        self.schema = {
            "type": "object",
            "properties": {
                "URL": {"type": "string", "minLength": 1},
                "name": {"type": "string", "maxLength": 255, "minLength": 1}
            },
            "required": ["name", "URL"]
        }

    def validate(self, json_data, parent_schema=None):
        schema_name = self._full_name(parent_schema)
        self._assert_schema_is_valid(json_data, schema_name)
        return json_data


class ContainerSecretValidator(ValidatorBase):
    """Validate a Container Secret."""

    def __init__(self):
        self.name = 'ContainerSecret'
        self.schema = {
            "type": "object",
            "properties": {
                "name": {"type": "string", "maxLength": 255},
                "secret_ref": {"type": "string", "minLength": 1}
            },
            "required": ["secret_ref"]
        }

    def validate(self, json_data, parent_schema=None):
        schema_name = self._full_name(parent_schema)
        self._assert_schema_is_valid(json_data, schema_name)
        return json_data


class ContainerValidator(ValidatorBase):
    """Validator for all types of Container."""

    def __init__(self):
        self.name = 'Container'
        self.schema = {
            "type": "object",
            "properties": {
                "name": {"type": ["string", "null"], "maxLength": 255},
                "type": {
                    "type": "string",
                    # TODO(hgedikli): move this to a common location
                    "enum": ["generic", "rsa", "certificate"]
                },
                "secret_refs": {
                    "type": "array",
                    "items": {
                        "type": "object",
"required": ["secret_ref"], "properties": { "name": { "type": ["string", "null"], "maxLength": 255 }, "secret_ref": {"type": "string", "minLength": 1} } } } }, "required": ["type"] } def validate(self, json_data, parent_schema=None): schema_name = self._full_name(parent_schema) self._assert_schema_is_valid(json_data, schema_name) container_type = json_data.get('type') secret_refs = json_data.get('secret_refs') if not secret_refs: return json_data secret_refs_names = set(secret_ref.get('name', '') for secret_ref in secret_refs) self._assert_validity( len(secret_refs_names) == len(secret_refs), schema_name, u._("Duplicate reference names are not allowed"), "secret_refs") # The combination of container_id and secret_id is expected to be # primary key for container_secret so same secret id (ref) cannot be # used within a container secret_ids = set(self._get_secret_id_from_ref(secret_ref) for secret_ref in secret_refs) self._assert_validity( len(secret_ids) == len(secret_refs), schema_name, u._("Duplicate secret ids are not allowed"), "secret_refs") # Ensure that our secret refs are valid relative to our config, no # spoofing allowed! req_host_href = utils.get_base_url_from_request() for secret_ref in secret_refs: if not secret_ref.get('secret_ref').startswith(req_host_href): raise exception.UnsupportedField( field='secret_ref', schema=schema_name, reason=u._( "Secret_ref does not match the configured hostname, " "please try again" ) ) if container_type == 'rsa': self._validate_rsa(secret_refs_names, schema_name) elif container_type == 'certificate': self._validate_certificate(secret_refs_names, schema_name) return json_data def _validate_rsa(self, secret_refs_names, schema_name): required_names = {'public_key', 'private_key'} optional_names = {'private_key_passphrase'} contains_unsupported_names = self._contains_unsupported_names( secret_refs_names, required_names | optional_names) self._assert_validity( not contains_unsupported_names, schema_name, u._("only 'private_key', 'public_key' and " "'private_key_passphrase' reference names are " "allowed for RSA type"), "secret_refs") self._assert_validity( self._has_minimum_required(secret_refs_names, required_names), schema_name, u._("The minimum required reference names are 'public_key' and" "'private_key' for RSA type"), "secret_refs") def _validate_certificate(self, secret_refs_names, schema_name): required_names = {'certificate'} optional_names = {'private_key', 'private_key_passphrase', 'intermediates'} contains_unsupported_names = self._contains_unsupported_names( secret_refs_names, required_names.union(optional_names)) self._assert_validity( not contains_unsupported_names, schema_name, u._("only 'private_key', 'certificate' , " "'private_key_passphrase', or 'intermediates' " "reference names are allowed for Certificate type"), "secret_refs") self._assert_validity( self._has_minimum_required(secret_refs_names, required_names), schema_name, u._("The minimum required reference name is 'certificate' " "for Certificate type"), "secret_refs") def _contains_unsupported_names(self, secret_refs_names, supported_names): if secret_refs_names.difference(supported_names): return True return False def _has_minimum_required(self, secret_refs_names, required_names): if required_names.issubset(secret_refs_names): return True return False def _get_secret_id_from_ref(self, secret_ref): secret_id = secret_ref.get('secret_ref') if secret_id.endswith('/'): secret_id = secret_id.rsplit('/', 2)[1] elif '/' in secret_id: secret_id = secret_id.rsplit('/', 1)[1] 
return secret_id class NewTransportKeyValidator(ValidatorBase): """Validate a new transport key.""" def __init__(self): self.name = 'Transport Key' self.schema = { "type": "object", "properties": { "plugin_name": {"type": "string"}, "transport_key": {"type": "string"}, }, } def validate(self, json_data, parent_schema=None): schema_name = self._full_name(parent_schema) self._assert_schema_is_valid(json_data, schema_name) plugin_name = json_data.get('plugin_name', '').strip() self._assert_validity(plugin_name, schema_name, u._("plugin_name must be provided"), "plugin_name") json_data['plugin_name'] = plugin_name transport_key = json_data.get('transport_key', '').strip() self._assert_validity(transport_key, schema_name, u._("transport_key must be provided"), "transport_key") json_data['transport_key'] = transport_key return json_data class ProjectQuotaValidator(ValidatorBase): """Validate a new project quota.""" def __init__(self): self.name = 'Project Quota' self.schema = { 'type': 'object', 'properties': { 'project_quotas': { 'type': 'object', 'properties': { 'secrets': {'type': 'integer'}, 'orders': {'type': 'integer'}, 'containers': {'type': 'integer'}, 'consumers': {'type': 'integer'}, 'cas': {'type': 'integer'} }, 'additionalProperties': False, } }, 'required': ['project_quotas'], 'additionalProperties': False } def validate(self, json_data, parent_schema=None): schema_name = self._full_name(parent_schema) self._assert_schema_is_valid(json_data, schema_name) return json_data class NewCAValidator(ValidatorBase, CACommonHelpersMixin): """Validate new CA(s).""" def __init__(self): self.name = 'CA' self.schema = { 'type': 'object', 'properties': { 'name': {'type': 'string', "minLength": 1}, 'subject_dn': {'type': 'string', "minLength": 1}, 'parent_ca_ref': {'type': 'string', "minLength": 1}, 'description': {'type': 'string'}, }, 'required': ['name', 'subject_dn', 'parent_ca_ref'], 'additionalProperties': False } def validate(self, json_data, parent_schema=None): schema_name = self._full_name(parent_schema) self._assert_schema_is_valid(json_data, schema_name) subject_dn = json_data['subject_dn'] self._validate_subject_dn_data(subject_dn) return json_data barbican-6.0.0/barbican/common/hrefs.py0000666000175100017510000001371613245511001020016 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
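
# Illustrative sketch (assumed host and version prefix, not part of the
# original module): if utils.hostname_for_refs() resolves resources
# against "http://localhost:9311/v1", then
#
#     convert_secret_to_href('2df52...')
#
# would yield "http://localhost:9311/v1/secrets/2df52...", while a
# missing id falls back to the "secrets/????" placeholder form.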
from barbican.common import utils


def convert_resource_id_to_href(resource_slug, resource_id):
    """Convert the resource ID to a HATEOAS-style href with resource slug."""
    if resource_id:
        resource = '{slug}/{id}'.format(slug=resource_slug, id=resource_id)
    else:
        resource = '{slug}/????'.format(slug=resource_slug)
    return utils.hostname_for_refs(resource=resource)


def convert_secret_to_href(secret_id):
    """Convert the secret ID to a HATEOAS-style href."""
    return convert_resource_id_to_href('secrets', secret_id)


def convert_order_to_href(order_id):
    """Convert the order ID to a HATEOAS-style href."""
    return convert_resource_id_to_href('orders', order_id)


def convert_container_to_href(container_id):
    """Convert the container ID to a HATEOAS-style href."""
    return convert_resource_id_to_href('containers', container_id)


def convert_transport_key_to_href(transport_key_id):
    """Convert the transport key ID to a HATEOAS-style href."""
    return convert_resource_id_to_href('transport_keys', transport_key_id)


def convert_consumer_to_href(container_id):
    """Convert the owning container ID to a HATEOAS-style consumers href."""
    # The consumer list is a sub-resource of its container, so the href is
    # built from the owning container's ID.
    return convert_resource_id_to_href('containers',
                                       container_id) + '/consumers'


def convert_user_meta_to_href(secret_id):
    """Convert the secret ID to a HATEOAS-style metadata href."""
    return convert_resource_id_to_href('secrets', secret_id) + '/metadata'


def convert_certificate_authority_to_href(ca_id):
    """Convert the CA ID to a HATEOAS-style href."""
    return convert_resource_id_to_href('cas', ca_id)


def convert_secret_stores_to_href(secret_store_id):
    """Convert the secret-store ID to a HATEOAS-style href."""
    return convert_resource_id_to_href('secret-stores', secret_store_id)


# TODO(hgedikli) handle list of fields in here
def convert_to_hrefs(fields):
    """Convert id's within a fields dict to HATEOAS-style hrefs."""
    if 'secret_id' in fields:
        fields['secret_ref'] = convert_secret_to_href(fields['secret_id'])
        del fields['secret_id']

    if 'order_id' in fields:
        fields['order_ref'] = convert_order_to_href(fields['order_id'])
        del fields['order_id']

    if 'container_id' in fields:
        fields['container_ref'] = convert_container_to_href(
            fields['container_id'])
        del fields['container_id']

    if 'transport_key_id' in fields:
        fields['transport_key_ref'] = convert_transport_key_to_href(
            fields['transport_key_id'])
        del fields['transport_key_id']

    return fields


def convert_list_to_href(resources_name, offset, limit):
    """Supports pretty output of paged-list hrefs.

    Convert the offset/limit info to a HATEOAS-style href
    suitable for use in a list navigation paging interface.
    """
    resource = '{0}?limit={1}&offset={2}'.format(resources_name, limit,
                                                 offset)
    return utils.hostname_for_refs(resource=resource)


def previous_href(resources_name, offset, limit):
    """Supports pretty output of previous-page hrefs.

    Create a HATEOAS-style 'previous' href suitable for use in a list
    navigation paging interface, assuming the provided values are the
    currently viewed page.
    """
    offset = max(0, offset - limit)
    return convert_list_to_href(resources_name, offset, limit)


def next_href(resources_name, offset, limit):
    """Supports pretty output of next-page hrefs.

    Create a HATEOAS-style 'next' href suitable for use in a list
    navigation paging interface, assuming the provided values are the
    currently viewed page.
    """
    offset = offset + limit
    return convert_list_to_href(resources_name, offset, limit)


def add_nav_hrefs(resources_name, offset, limit,
                  total_elements, data):
    """Adds next and/or previous hrefs to paged list responses.
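
    For example, with offset=10, limit=10 and total_elements=30, both
    hrefs are added: 'previous' points at offset 0 and 'next' at
    offset 20.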
    :param resources_name: Name of api resource
    :param offset: Element number (ie. index) where current page starts
    :param limit: Max amount of elements listed on current page
    :param total_elements: Total number of elements
    :param data: The response data dict to be augmented
    :returns: augmented dictionary with next and/or previous hrefs
    """
    if offset > 0:
        data.update({'previous': previous_href(resources_name,
                                               offset,
                                               limit)})

    if total_elements > (offset + limit):
        data.update({'next': next_href(resources_name,
                                       offset,
                                       limit)})

    return data


def get_container_id_from_ref(container_ref):
    """Parse a container reference and return the container ID

    TODO(Dave) Add some extra checking for valid prefix

    The container ID is the right-most element of the URL

    :param container_ref: HTTP reference of container
    :return: a string containing the ID of the container
    """
    container_id = container_ref.rsplit('/', 1)[1]
    return container_id


def get_secret_id_from_ref(secret_ref):
    """Parse a secret reference and return the secret ID

    :param secret_ref: HTTP reference of secret
    :return: a string containing the ID of the secret
    """
    secret_id = secret_ref.rsplit('/', 1)[1]
    return secret_id


def get_ca_id_from_ref(ca_ref):
    """Parse a ca_ref and return the CA ID

    :param ca_ref: HTTP reference of the CA
    :return: a string containing the ID of the CA
    """
    ca_id = ca_ref.rsplit('/', 1)[1]
    return ca_id
barbican-6.0.0/barbican/common/exception.py0000666000175100017510000003235213245511001020702 0ustar zuulzuul00000000000000# Copyright (c) 2013-2014 Rackspace, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Barbican exception subclasses
"""

from barbican import i18n as u

_FATAL_EXCEPTION_FORMAT_ERRORS = False


class BarbicanException(Exception):
    """Base Barbican Exception

    To correctly use this class, inherit from it and define a 'message'
    property. That message will get printf'd with the keyword arguments
    provided to the constructor.
    """
    message = u._("An unknown exception occurred")

    def __init__(self, message_arg=None, *args, **kwargs):
        if not message_arg:
            message_arg = self.message
        try:
            self.message = message_arg % kwargs
        except Exception as e:
            if _FATAL_EXCEPTION_FORMAT_ERRORS:
                raise e
            else:
                # at least get the core message out if something happened
                pass
        super(BarbicanException, self).__init__(self.message)


class BarbicanHTTPException(BarbicanException):
    """Base Barbican Exception to handle HTTP responses

    To correctly use this class, inherit from it and define the following
    properties:

    - message: The message that will be displayed in the server log.
    - client_message: The message that will actually be outputted to the
      client.
    - status_code: The HTTP status code that should be returned.
      The default status code is 500.
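
    A minimal illustrative subclass (hypothetical, not defined by
    Barbican)::

        class ExampleTooBusy(BarbicanHTTPException):
            message = u._("Busy handling %(count)s requests")
            client_message = u._("Service busy, please retry later.")
            status_code = 503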
""" client_message = u._("failure seen - please contact site administrator.") status_code = 500 def __init__(self, message_arg=None, client_message=None, *args, **kwargs): if not client_message: client_message = self.client_message try: self.client_message = client_message % kwargs except Exception as e: if _FATAL_EXCEPTION_FORMAT_ERRORS: raise e else: # at least get the core message out if something happened pass super(BarbicanHTTPException, self).__init__( message_arg, self.client_message, *args, **kwargs) class MissingArgumentError(BarbicanException): message = u._("Missing required argument.") class MissingMetadataField(BarbicanHTTPException): message = u._("Missing required metadata field for %(required)s") client_message = message status_code = 400 class InvalidMetadataRequest(BarbicanHTTPException): message = u._("Invalid Metadata. Keys and Values must be Strings.") client_message = message status_code = 400 class InvalidMetadataKey(BarbicanHTTPException): message = u._("Invalid Key. Key must be URL safe.") client_message = message status_code = 400 class InvalidSubjectDN(BarbicanHTTPException): message = u._("Invalid subject DN: %(subject_dn)s") client_message = message status_code = 400 class InvalidContainer(BarbicanHTTPException): message = u._("Invalid container: %(reason)s") client_message = message status_code = 400 class InvalidExtensionsData(BarbicanHTTPException): message = u._("Invalid extensions data.") client_message = message status_code = 400 class InvalidCMCData(BarbicanHTTPException): message = u._("Invalid CMC Data") client_message = message status_code = 400 class InvalidPKCS10Data(BarbicanHTTPException): message = u._("Invalid PKCS10 Data: %(reason)s") client_message = message status_code = 400 class InvalidCertificateRequestType(BarbicanHTTPException): message = u._("Invalid Certificate Request Type") client_message = message status_code = 400 class CertificateExtensionsNotSupported(BarbicanHTTPException): message = u._("Extensions are not yet supported. " "Specify a valid profile instead.") client_message = message status_code = 400 class FullCMCNotSupported(BarbicanHTTPException): message = u._("Full CMC Requests are not yet supported.") client_message = message status_code = 400 class NotFound(BarbicanException): message = u._("An object with the specified identifier was not found.") class ConstraintCheck(BarbicanException): message = u._("A defined SQL constraint check failed: %(error)s") class NotSupported(BarbicanException): message = u._("Operation is not supported.") class Invalid(BarbicanException): message = u._("Data supplied was not valid.") class NoDataToProcess(BarbicanHTTPException): message = u._("No data supplied to process.") client_message = message status_code = 400 class LimitExceeded(BarbicanHTTPException): message = u._("The request returned a 413 Request Entity Too Large. This " "generally means that rate limiting or a quota threshold " "was breached.") client_message = u._("Provided information too large to process") status_code = 413 def __init__(self, *args, **kwargs): super(LimitExceeded, self).__init__(*args, **kwargs) self.retry_after = (int(kwargs['retry']) if kwargs.get('retry') else None) class InvalidObject(BarbicanHTTPException): status_code = 400 def __init__(self, *args, **kwargs): self.invalid_property = kwargs.get('property') self.message = u._("Failed to validate JSON information: ") self.client_message = u._("Provided object does not match " "schema '{schema}': " "{reason}. 
Invalid property: " "'{property}'").format(*args, **kwargs) self.message = self.message + self.client_message super(InvalidObject, self).__init__(*args, **kwargs) class PayloadDecodingError(BarbicanHTTPException): status_code = 400 message = u._("Error while attempting to decode payload.") client_message = u._("Unable to decode request data.") class UnsupportedField(BarbicanHTTPException): message = u._("No support for value set on field '%(field)s' on " "schema '%(schema)s': %(reason)s") client_message = u._("Provided field value is not supported") status_code = 400 def __init__(self, *args, **kwargs): super(UnsupportedField, self).__init__(*args, **kwargs) self.invalid_field = kwargs.get('field') class FeatureNotImplemented(BarbicanException): message = u._("Feature not implemented for value set on field " "'%(field)s' on " "schema '%(schema)s': %(reason)s") def __init__(self, *args, **kwargs): super(FeatureNotImplemented, self).__init__(*args, **kwargs) self.invalid_field = kwargs.get('field') class StoredKeyContainerNotFound(BarbicanException): message = u._("Container %(container_id)s does not exist for stored " "key certificate generation.") class StoredKeyPrivateKeyNotFound(BarbicanException): message = u._("Container %(container_id)s does not reference a private " "key needed for stored key certificate generation.") class ProvidedTransportKeyNotFound(BarbicanHTTPException): message = u._("Provided Transport key %(transport_key_id)s " "could not be found") client_message = u._("Provided transport key was not found.") status_code = 400 class InvalidCAID(BarbicanHTTPException): message = u._("Invalid CA_ID: %(ca_id)s") client_message = u._("The ca_id provided in the request is invalid") status_code = 400 class CANotDefinedForProject(BarbicanHTTPException): message = u._("CA specified by ca_id %(ca_id)s not defined for project: " "%(project_id)s") client_message = u._("The ca_id provided in the request is not defined " "for this project") status_code = 403 class QuotaReached(BarbicanHTTPException): message = u._("Quota reached for project %(external_project_id)s. 
Only " "%(quota)s %(resource_type)s are allowed.") client_message = u._("Creation not allowed because a quota has " "been reached") status_code = 403 def __init__(self, *args, **kwargs): super(QuotaReached, self).__init__(*args, **kwargs) self.external_project_id = kwargs.get('external_project_id') self.quota = kwargs.get('quota') self.resource_type = kwargs.get('resource_type') class InvalidParentCA(BarbicanHTTPException): message = u._("Invalid Parent CA: %(parent_ca_ref)s") client_message = message status_code = 400 class SubCAsNotSupported(BarbicanHTTPException): message = u._("Plugin does not support generation of subordinate CAs") client_message = message status_code = 400 class SubCANotCreated(BarbicanHTTPException): message = u._("Errors in creating subordinate CA: %(name)") client_message = message class CannotDeleteBaseCA(BarbicanHTTPException): message = u._("Only subordinate CAs can be deleted.") status_code = 403 class UnauthorizedSubCA(BarbicanHTTPException): message = u._("Subordinate CA is not owned by this project") client_message = message status_code = 403 class CannotDeletePreferredCA(BarbicanHTTPException): message = u._("A new project preferred CA must be set " "before this one can be deleted.") status_code = 409 class BadSubCACreationRequest(BarbicanHTTPException): message = u._("Errors returned by CA when attempting to " "create subordinate CA: %(reason)s") client_message = message status_code = 400 class SubCACreationErrors(BarbicanHTTPException): message = u._("Errors returned by CA when attempting to create " "subordinate CA: %(reason)s") client_message = message class SubCADeletionErrors(BarbicanHTTPException): message = u._("Errors returned by CA when attempting to delete " "subordinate CA: %(reason)s") client_message = message class PKCS11Exception(BarbicanException): message = u._("There was an error with the PKCS#11 library.") class P11CryptoPluginKeyException(PKCS11Exception): message = u._("More than one key found for label") class P11CryptoPluginException(PKCS11Exception): message = u._("General exception") class P11CryptoKeyHandleException(PKCS11Exception): message = u._("No key handle was found") class P11CryptoTokenException(PKCS11Exception): message = u._("No token was found in slot %(slot_id)s") class MultipleStorePreferredPluginMissing(BarbicanException): """Raised when a preferred plugin is missing in service configuration.""" def __init__(self, store_name): super(MultipleStorePreferredPluginMissing, self).__init__( u._("Preferred Secret Store plugin '{store_name}' is not " "currently set in service configuration. This is probably a " "server misconfiguration.").format( store_name=store_name) ) self.store_name = store_name class MultipleStorePluginStillInUse(BarbicanException): """Raised when a used plugin is missing in service configuration.""" def __init__(self, store_name): super(MultipleStorePluginStillInUse, self).__init__( u._("Secret Store plugin '{store_name}' is still in use and can " "not be removed. Its missing in service configuration. 
This is" " probably a server misconfiguration.").format( store_name=store_name) ) self.store_name = store_name class MultipleSecretStoreLookupFailed(BarbicanException): """Raised when a plugin lookup suffix is missing during config read.""" def __init__(self): msg = u._("Plugin lookup property 'stores_lookup_suffix' is not " "defined in service configuration") super(MultipleSecretStoreLookupFailed, self).__init__(msg) class MultipleStoreIncorrectGlobalDefault(BarbicanException): """Raised when a global default for only one plugin is not set to True.""" def __init__(self, occurrence): msg = None if occurrence > 1: msg = u._("There are {count} plugins with global default as " "True in service configuration. Only one plugin can have" " this as True").format(count=occurrence) else: msg = u._("There is no plugin defined with global default as True." " One of plugin must be identified as global default") super(MultipleStoreIncorrectGlobalDefault, self).__init__(msg) class MultipleStorePluginValueMissing(BarbicanException): """Raised when a store plugin value is missing in service configuration.""" def __init__(self, section_name): super(MultipleStorePluginValueMissing, self).__init__( u._("In section '{0}', secret_store_plugin value is missing" ).format(section_name) ) self.section_name = section_name barbican-6.0.0/barbican/common/policies/0000775000175100017510000000000013245511177020150 5ustar zuulzuul00000000000000barbican-6.0.0/barbican/common/policies/quotas.py0000666000175100017510000000174013245511001022024 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy rules = [ policy.RuleDefault('quotas:get', 'rule:all_users'), policy.RuleDefault('project_quotas:get', 'rule:service_admin'), policy.RuleDefault('project_quotas:put', 'rule:service_admin'), policy.RuleDefault('project_quotas:delete', 'rule:service_admin'), ] def list_rules(): return rules barbican-6.0.0/barbican/common/policies/cas.py0000666000175100017510000000423313245511001021256 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_policy import policy rules = [ policy.RuleDefault('certificate_authorities:get_limited', 'rule:all_users'), policy.RuleDefault('certificate_authorities:get_all', 'rule:admin'), policy.RuleDefault('certificate_authorities:post', 'rule:admin'), policy.RuleDefault('certificate_authorities:get_preferred_ca', 'rule:all_users'), policy.RuleDefault('certificate_authorities:get_global_preferred_ca', 'rule:service_admin'), policy.RuleDefault('certificate_authorities:unset_global_preferred', 'rule:service_admin'), policy.RuleDefault('certificate_authority:delete', 'rule:admin'), policy.RuleDefault('certificate_authority:get', 'rule:all_users'), policy.RuleDefault('certificate_authority:get_cacert', 'rule:all_users'), policy.RuleDefault('certificate_authority:get_ca_cert_chain', 'rule:all_users'), policy.RuleDefault('certificate_authority:get_projects', 'rule:service_admin'), policy.RuleDefault('certificate_authority:add_to_project', 'rule:admin'), policy.RuleDefault('certificate_authority:remove_from_project', 'rule:admin'), policy.RuleDefault('certificate_authority:set_preferred', 'rule:admin'), policy.RuleDefault('certificate_authority:set_global_preferred', 'rule:service_admin'), ] def list_rules(): return rules barbican-6.0.0/barbican/common/policies/base.py0000666000175100017510000000712513245511001021425 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
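
# Illustrative note (an assumption about evaluation, not part of the
# original file): oslo.policy evaluates these rules against the request
# credentials and a target dict. For example, 'secret_project_admin'
# passes when the credentials carry the 'admin' role and their project
# matches the target.secret.project_id attribute, since it is defined
# below as 'rule:admin and rule:secret_project_match'.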
from oslo_policy import policy rules = [ policy.RuleDefault('admin', 'role:admin'), policy.RuleDefault('observer', 'role:observer'), policy.RuleDefault('creator', 'role:creator'), policy.RuleDefault('audit', 'role:audit'), policy.RuleDefault('service_admin', 'role:key-manager:service-admin'), policy.RuleDefault('admin_or_user_does_not_work', 'project_id:%(project_id)s'), policy.RuleDefault('admin_or_user', 'rule:admin or project_id:%(project_id)s'), policy.RuleDefault('admin_or_creator', 'rule:admin or rule:creator'), policy.RuleDefault('all_but_audit', 'rule:admin or rule:observer or rule:creator'), policy.RuleDefault('all_users', 'rule:admin or rule:observer or rule:creator or ' 'rule:audit or rule:service_admin'), policy.RuleDefault('secret_project_match', 'project:%(target.secret.project_id)s'), policy.RuleDefault('secret_acl_read', "'read':%(target.secret.read)s"), policy.RuleDefault('secret_private_read', "'False':%(target.secret.read_project_access)s"), policy.RuleDefault('secret_creator_user', "user:%(target.secret.creator_id)s"), policy.RuleDefault('container_project_match', "project:%(target.container.project_id)s"), policy.RuleDefault('container_acl_read', "'read':%(target.container.read)s"), policy.RuleDefault('container_private_read', "'False':%(target.container.read_project_access)s"), policy.RuleDefault('container_creator_user', "user:%(target.container.creator_id)s"), policy.RuleDefault('secret_non_private_read', "rule:all_users and rule:secret_project_match and " "not rule:secret_private_read"), policy.RuleDefault('secret_decrypt_non_private_read', "rule:all_but_audit and rule:secret_project_match and " "not rule:secret_private_read"), policy.RuleDefault('container_non_private_read', "rule:all_users and rule:container_project_match and " "not rule:container_private_read"), policy.RuleDefault('secret_project_admin', "rule:admin and rule:secret_project_match"), policy.RuleDefault('secret_project_creator', "rule:creator and rule:secret_project_match and " "rule:secret_creator_user"), policy.RuleDefault('container_project_admin', "rule:admin and rule:container_project_match"), policy.RuleDefault('container_project_creator', "rule:creator and rule:container_project_match and " "rule:container_creator_user"), ] def list_rules(): return rules barbican-6.0.0/barbican/common/policies/transportkeys.py0000666000175100017510000000172313245511001023441 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy rules = [ policy.RuleDefault('transport_key:get', 'rule:all_users'), policy.RuleDefault('transport_key:delete', 'rule:admin'), policy.RuleDefault('transport_keys:get', 'rule:all_users'), policy.RuleDefault('transport_keys:post', 'rule:admin'), ] def list_rules(): return rules barbican-6.0.0/barbican/common/policies/acls.py0000666000175100017510000000301513245511001021427 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy rules = [ policy.RuleDefault('secret_acls:put_patch', 'rule:secret_project_admin or ' 'rule:secret_project_creator'), policy.RuleDefault('secret_acls:delete', 'rule:secret_project_admin or ' 'rule:secret_project_creator'), policy.RuleDefault('secret_acls:get', 'rule:all_but_audit and ' 'rule:secret_project_match'), policy.RuleDefault('container_acls:put_patch', 'rule:container_project_admin or ' 'rule:container_project_creator'), policy.RuleDefault('container_acls:delete', 'rule:container_project_admin or ' 'rule:container_project_creator'), policy.RuleDefault('container_acls:get', 'rule:all_but_audit and rule:container_project_match'), ] def list_rules(): return rules barbican-6.0.0/barbican/common/policies/containers.py0000666000175100017510000000261213245511001022654 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy rules = [ policy.RuleDefault('containers:post', 'rule:admin_or_creator'), policy.RuleDefault('containers:get', 'rule:all_but_audit'), policy.RuleDefault('container:get', 'rule:container_non_private_read or ' 'rule:container_project_creator or ' 'rule:container_project_admin or ' 'rule:container_acl_read'), policy.RuleDefault('container:delete', 'rule:container_project_admin or ' 'rule:container_project_creator'), policy.RuleDefault('container_secret:post', 'rule:admin'), policy.RuleDefault('container_secret:delete', 'rule:admin'), ] def list_rules(): return rules barbican-6.0.0/barbican/common/policies/__init__.py0000666000175100017510000000267213245511001022254 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
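
# The chained list built by list_rules() below is what
# Enforcer.register_defaults() consumes (see barbican/common/policy.py),
# and what the oslopolicy-sample-generator entry point renders, so every
# policy module's defaults are registered in a single pass.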
import itertools from barbican.common.policies import acls from barbican.common.policies import base from barbican.common.policies import cas from barbican.common.policies import consumers from barbican.common.policies import containers from barbican.common.policies import orders from barbican.common.policies import quotas from barbican.common.policies import secretmeta from barbican.common.policies import secrets from barbican.common.policies import secretstores from barbican.common.policies import transportkeys def list_rules(): return itertools.chain( base.list_rules(), acls.list_rules(), cas.list_rules(), consumers.list_rules(), containers.list_rules(), orders.list_rules(), quotas.list_rules(), secretmeta.list_rules(), secrets.list_rules(), secretstores.list_rules(), transportkeys.list_rules(), ) barbican-6.0.0/barbican/common/policies/secretstores.py0000666000175100017510000000222113245511001023230 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy rules = [ policy.RuleDefault('secretstores:get', 'rule:admin'), policy.RuleDefault('secretstores:get_global_default', 'rule:admin'), policy.RuleDefault('secretstores:get_preferred', 'rule:admin'), policy.RuleDefault('secretstore_preferred:post', 'rule:admin'), policy.RuleDefault('secretstore_preferred:delete', 'rule:admin'), policy.RuleDefault('secretstore:get', 'rule:admin'), ] def list_rules(): return rules barbican-6.0.0/barbican/common/policies/orders.py0000666000175100017510000000202713245511001022005 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy rules = [ policy.RuleDefault('orders:post', 'rule:admin_or_creator'), policy.RuleDefault('orders:get', 'rule:all_but_audit'), policy.RuleDefault('order:get', 'rule:all_users'), policy.RuleDefault('order:put', 'rule:admin_or_creator'), policy.RuleDefault('order:delete', 'rule:admin'), ] def list_rules(): return rules barbican-6.0.0/barbican/common/policies/secretmeta.py0000666000175100017510000000175213245511001022647 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy rules = [ policy.RuleDefault('secret_meta:get', 'rule:all_but_audit'), policy.RuleDefault('secret_meta:post', 'rule:admin_or_creator'), policy.RuleDefault('secret_meta:put', 'rule:admin_or_creator'), policy.RuleDefault('secret_meta:delete', 'rule:admin_or_creator'), ] def list_rules(): return rules barbican-6.0.0/barbican/common/policies/secrets.py0000666000175100017510000000312713245511001022161 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy rules = [ policy.RuleDefault('secret:decrypt', 'rule:secret_decrypt_non_private_read or ' 'rule:secret_project_creator or ' 'rule:secret_project_admin or ' 'rule:secret_acl_read'), policy.RuleDefault('secret:get', 'rule:secret_non_private_read or ' 'rule:secret_project_creator or ' 'rule:secret_project_admin or ' 'rule:secret_acl_read'), policy.RuleDefault('secret:put', 'rule:admin_or_creator and ' 'rule:secret_project_match'), policy.RuleDefault('secret:delete', 'rule:secret_project_admin or ' 'rule:secret_project_creator'), policy.RuleDefault('secrets:post', 'rule:admin_or_creator'), policy.RuleDefault('secrets:get', 'rule:all_but_audit'), ] def list_rules(): return rules barbican-6.0.0/barbican/common/policies/consumers.py0000666000175100017510000000360213245511001022525 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
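# NOTE: The check strings below combine named rules (presumably defined in
# base.py, e.g. 'rule:admin', 'rule:container_project_creator') with 'or'.
# As a hedged usage sketch, assuming an Enforcer with these defaults
# registered (the target/credential keys here are illustrative assumptions,
# not the service's exact ones):
#
#     creds = {'roles': ['observer'], 'project_id': 'p1'}
#     target = {'project_id': 'p1'}
#     allowed = enforcer.enforce('consumers:get', target, creds)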
from oslo_policy import policy rules = [ policy.RuleDefault('consumer:get', 'rule:admin or rule:observer or rule:creator or ' 'rule:audit or rule:container_non_private_read or ' 'rule:container_project_creator or ' 'rule:container_project_admin or ' 'rule:container_acl_read'), policy.RuleDefault('consumers:get', 'rule:admin or rule:observer or rule:creator or ' 'rule:audit or rule:container_non_private_read or ' 'rule:container_project_creator or ' 'rule:container_project_admin or ' 'rule:container_acl_read'), policy.RuleDefault('consumers:post', 'rule:admin or rule:container_non_private_read or ' 'rule:container_project_creator or ' 'rule:container_project_admin or ' 'rule:container_acl_read'), policy.RuleDefault('consumers:delete', 'rule:admin or rule:container_non_private_read or ' 'rule:container_project_creator or ' 'rule:container_project_admin or ' 'rule:container_acl_read'), ] def list_rules(): return rules barbican-6.0.0/barbican/common/__init__.py0000666000175100017510000000000013245511001020424 0ustar zuulzuul00000000000000barbican-6.0.0/barbican/common/quota.py0000666000175100017510000001701013245511001020027 0ustar zuulzuul00000000000000# Copyright (c) 2015 Cisco Systems # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from barbican.common import config from barbican.common import exception from barbican.common import hrefs from barbican.common import resources as res from barbican.model import repositories as repo # All negative values will be treated as unlimited UNLIMITED_VALUE = -1 DISABLED_VALUE = 0 CONF = config.CONF class QuotaDriver(object): """Driver to enforce quotas and obtain quota information.""" def __init__(self): self.repo = repo.get_project_quotas_repository() def _get_resources(self): """List of resources that can be constrained by a quota""" return ['secrets', 'orders', 'containers', 'consumers', 'cas'] def _get_defaults(self): """Return list of default quotas""" quotas = { 'secrets': CONF.quotas.quota_secrets, 'orders': CONF.quotas.quota_orders, 'containers': CONF.quotas.quota_containers, 'consumers': CONF.quotas.quota_consumers, 'cas': CONF.quotas.quota_cas } return quotas def _extract_project_quotas(self, project_quotas_model): """Convert project quotas model to Python dict :param project_quotas_model: Model containing quota information :return: Python dict containing quota information """ resp_quotas = {} for resource in self._get_resources(): resp_quotas[resource] = getattr(project_quotas_model, resource) return resp_quotas def _compute_effective_quotas(self, configured_quotas): """Merge configured and default quota information When a quota value is not set, use the default value :param configured_quotas: configured quota values :return: effective quotas """ default_quotas = self._get_defaults() resp_quotas = dict(configured_quotas) for resource, quota in resp_quotas.items(): if quota is None: resp_quotas[resource] = default_quotas[resource] return resp_quotas def get_effective_quotas(self, external_project_id): """Collect and return the effective quotas for a project 
:param external_project_id: external ID of current project :return: dict with effective quotas """ try: retrieved_project_quotas = self.repo.get_by_external_project_id( external_project_id) except exception.NotFound: resp_quotas = self._get_defaults() else: resp_quotas = self._compute_effective_quotas( self._extract_project_quotas(retrieved_project_quotas)) return resp_quotas def is_unlimited_value(self, v): """A helper method to check for an unlimited value.""" return v <= UNLIMITED_VALUE def is_disabled_value(self, v): """A helper method to check for a disabled value.""" return v == DISABLED_VALUE def set_project_quotas(self, external_project_id, parsed_project_quotas): """Create a new database entry, or update an existing one :param external_project_id: ID of project whose quotas are to be set :param parsed_project_quotas: quota values to save in database :return: None """ project = res.get_or_create_project(external_project_id) self.repo.create_or_update_by_project_id(project.id, parsed_project_quotas) # commit to DB to avoid async issues if the enforcer is called from # another thread repo.commit() def get_project_quotas(self, external_project_id): """Retrieve configured quota information from the database :param external_project_id: ID of the project whose quota values are wanted :return: the configured quota values, or None if none are found """ try: retrieved_project_quotas = self.repo.get_by_external_project_id( external_project_id) except exception.NotFound: return None resp_quotas = self._extract_project_quotas(retrieved_project_quotas) resp = {'project_quotas': resp_quotas} return resp def get_project_quotas_list(self, offset_arg=None, limit_arg=None): """Return a dict containing a page of all configured quota information :return: a dict with one page of quota config info """ retrieved_project_quotas, offset, limit, total =\ self.repo.get_by_create_date(offset_arg=offset_arg, limit_arg=limit_arg, suppress_exception=True) resp_quotas = [] for quotas in retrieved_project_quotas: list_item = {'project_id': quotas.project.external_id, 'project_quotas': self._extract_project_quotas(quotas)} resp_quotas.append(list_item) resp = {'project_quotas': resp_quotas} resp_overall = hrefs.add_nav_hrefs( 'project_quotas', offset, limit, total, resp) resp_overall.update({'total': total}) return resp_overall def delete_project_quotas(self, external_project_id): """Remove configured quota information from the database :param external_project_id: ID of project whose quotas will be deleted :raises NotFound: if project has no configured values :return: None """ self.repo.delete_by_external_project_id(external_project_id) def get_quotas(self, external_project_id): """Get the effective quotas for a project Effective quotas are based on both configured and default values :param external_project_id: ID of project for which to get quotas :return: dict of effective quota values """ resp_quotas = self.get_effective_quotas(external_project_id) resp = {'quotas': resp_quotas} return resp class QuotaEnforcer(object): """Checks quota limits and current resource usage levels""" def __init__(self, resource_type, resource_repo): self.quota_driver = QuotaDriver() self.resource_type = resource_type self.resource_repo = resource_repo def enforce(self, project): """Enforce the quota limit for the resource :param project: the project object corresponding to the sender :raises QuotaReached: exception raised if quota forbids request :return: None """ quotas = self.quota_driver.get_effective_quotas(project.external_id) quota = quotas[self.resource_type] reached = False count = 0 if
self.quota_driver.is_unlimited_value(quota): pass elif self.quota_driver.is_disabled_value(quota): reached = True else: count = self.resource_repo.get_count(project.id) if count >= quota: reached = True if reached: raise exception.QuotaReached( external_project_id=project.external_id, resource_type=self.resource_type, quota=quota) barbican-6.0.0/barbican/common/utils.py0000666000175100017510000001563713245511001020053 0ustar zuulzuul00000000000000# Copyright (c) 2013-2014 Rackspace, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """ Common utilities for Barbican. """ import collections import importlib import mimetypes import uuid from oslo_log import log from oslo_utils import uuidutils import pecan import six from six.moves.urllib import parse from barbican.common import config from barbican import i18n as u CONF = config.CONF # Current API version API_VERSION = 'v1' # Added here to remove cyclic dependency. # In barbican.model.models module SecretType.OPAQUE was imported from # barbican.plugin.interface.secret_store which introduces a cyclic dependency # if `secret_store` plugin needs to use db model classes. So moving shared # value to another common python module which is already imported in both. SECRET_TYPE_OPAQUE = "opaque" def _do_allow_certain_content_types(func, content_types_list=[]): # Allows you to bypass pecan's content-type restrictions cfg = pecan.util._cfg(func) cfg.setdefault('content_types', {}) cfg['content_types'].update((value, '') for value in content_types_list) return func def allow_certain_content_types(*content_types_list): def _wrapper(func): return _do_allow_certain_content_types(func, content_types_list) return _wrapper def allow_all_content_types(f): return _do_allow_certain_content_types(f, mimetypes.types_map.values()) def get_base_url_from_request(): """Derive the base URL from the WSGI request if CONF.host_href is not set Use host_href as the base URL if it is set in barbican.conf. If it is not set, derive the value from the WSGI request. The WSGI request uses the HOST header, or the HTTP_X_FORWARDED_FOR header in case of a proxy, for the host + port part of its URL. Proxies can also set the HTTP_X_FORWARDED_PROTO header to indicate http vs. https. Some unit tests do not have a pecan context, which is why the request attribute check is done on the pecan instance. """ if not CONF.host_href and hasattr(pecan.request, 'url'): p_url = parse.urlsplit(pecan.request.url) if p_url.path: base_url = '%s://%s%s' % (p_url.scheme, p_url.netloc, p_url.path) else: base_url = '%s://%s' % (p_url.scheme, p_url.netloc) return base_url else: # when host_href is set or flow is not within wsgi request context return CONF.host_href def hostname_for_refs(resource=None): """Return the HATEOAS-style return URI reference for this service.""" base_url = get_base_url_from_request() ref = ['{base}/{version}'.format(base=base_url, version=API_VERSION)] if resource: ref.append('/' + resource) return ''.join(ref) # Return a logger instance.
# Note: Centralize access to the logger to avoid the dreaded # 'ArgsAlreadyParsedError: arguments already parsed: cannot # register CLI option' # error. def getLogger(name): return log.getLogger(name) def get_accepted_encodings(req): """Returns a list of client acceptable encodings sorted by q value. For details see: http://tools.ietf.org/html/rfc2616#section-14.3 :param req: request object :returns: list of client acceptable encodings sorted by q value. """ header = req.get_header('Accept-Encoding') return get_accepted_encodings_direct(header) def get_accepted_encodings_direct(content_encoding_header): """Returns a list of client acceptable encodings sorted by q value. For details see: http://tools.ietf.org/html/rfc2616#section-14.3 :param content_encoding_header: value of the Accept-Encoding header :returns: list of client acceptable encodings sorted by q value. """ if content_encoding_header is None: return None Encoding = collections.namedtuple('Encoding', ['coding', 'quality']) encodings = list() for enc in content_encoding_header.split(','): if ';' in enc: coding, qvalue = enc.split(';') try: qvalue = qvalue.split('=')[1] quality = float(qvalue.strip()) except ValueError: # can't convert quality to float return None if quality > 1.0 or quality < 0.0: # quality is outside valid range return None if quality > 0.0: encodings.append(Encoding(coding.strip(), quality)) else: encodings.append(Encoding(enc.strip(), 1)) # Sort the encodings by quality encodings = sorted(encodings, key=lambda e: e.quality, reverse=True) return [encoding.coding for encoding in encodings] def generate_fullname_for(instance): """Produce a fully qualified class name for the specified instance. :param instance: The instance to generate information from. :return: A string providing the package.module information for the instance. :raises: ValueError if the given instance is null """ if not instance: raise ValueError(u._("Cannot generate a fullname for a null instance")) module = type(instance).__module__ class_name = type(instance).__name__ if module is None or module == six.moves.builtins.__name__: return class_name return "{module}.{class_name}".format(module=module, class_name=class_name) def get_class_for(module_name, class_name): """Create a Python class from its text-specified components.""" # Load the module via name, raising ImportError if module cannot be # loaded. python_module = importlib.import_module(module_name) # Load and return the resolved Python class, raising AttributeError if # class cannot be found. return getattr(python_module, class_name) def generate_uuid(): return uuidutils.generate_uuid() def is_multiple_backends_enabled(): try: secretstore_conf = config.get_module_config('secretstore') except KeyError: # Ensure module is initialized from barbican.plugin.interface import secret_store # nopep8 secretstore_conf = config.get_module_config('secretstore') return secretstore_conf.secretstore.enable_multiple_secret_stores def validate_id_is_uuid(input_id, version=4): """Validates that the provided id is a uuid4 format value. Returns True when the provided id is a valid version 4 uuid, otherwise returns False. This validation is to be used only for ids which are generated by barbican (e.g.
not for keystone project_id) """ try: value = uuid.UUID(input_id, version=version) except Exception: return False return str(value) == input_id barbican-6.0.0/barbican/hacking/0000775000175100017510000000000013245511177016455 5ustar zuulzuul00000000000000barbican-6.0.0/barbican/hacking/__init__.py0000666000175100017510000000000013245511001020540 0ustar zuulzuul00000000000000barbican-6.0.0/barbican/hacking/checks.py0000666000175100017510000002325213245511001020257 0ustar zuulzuul00000000000000# Copyright (c) 2016, GohighSec # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import ast import re import six import pep8 """ Guidelines for writing new hacking checks - Use only for Barbican specific tests. OpenStack general tests should be submitted to the common 'hacking' module. - Pick numbers in the range B3xx. Find the current test with the highest allocated number and then pick the next value. - Keep the test method code in the source file ordered based on the B3xx value. - List the new rule in the top level HACKING.rst file - Add test cases for each new rule to barbican/tests/test_hacking.py """ oslo_namespace_imports = re.compile(r"from[\s]*oslo[.](.*)") dict_constructor_with_list_copy_re = re.compile(r".*\bdict\((\[)?(\(|\[)") assert_no_xrange_re = re.compile(r"\s*xrange\s*\(") assert_True = re.compile(r".*assertEqual\(True, .*\)") assert_None = re.compile(r".*assertEqual\(None, .*\)") assert_Not_Equal = re.compile(r".*assertNotEqual\(None, .*\)") assert_Is_Not = re.compile(r".*assertIsNot\(None, .*\)") no_log_warn = re.compile(r".*LOG.warn\(.*\)") class BaseASTChecker(ast.NodeVisitor): """Provides a simple framework for writing AST-based checks. Subclasses should implement visit_* methods like any other AST visitor implementation. When they detect an error for a particular node the method should call ``self.add_error(offending_node)``. Details about where in the code the error occurred will be pulled from the node object. Subclasses should also provide a class variable named CHECK_DESC to be used for the human readable error message. """ CHECK_DESC = 'No check message specified' def __init__(self, tree, filename): """This object is created automatically by pep8. :param tree: an AST tree :param filename: name of the file being analyzed (ignored by our checks) """ self._tree = tree self._errors = [] def run(self): """Called automatically by pep8.""" self.visit(self._tree) return self._errors def add_error(self, node, message=None): """Add an error caused by a node to the list of errors for pep8.""" message = message or self.CHECK_DESC error = (node.lineno, node.col_offset, message, self.__class__) self._errors.append(error) def _check_call_names(self, call_node, names): if isinstance(call_node, ast.Call): if isinstance(call_node.func, ast.Name): if call_node.func.id in names: return True return False class CheckLoggingFormatArgs(BaseASTChecker): """Check for improper use of logging format arguments. 
LOG.debug("Volume %s caught fire and is at %d degrees C and climbing.", ('volume1', 500)) The format arguments should not be a tuple as it is easy to miss. """ CHECK_DESC = 'B310 Log method arguments should not be a tuple.' LOG_METHODS = [ 'debug', 'info', 'warn', 'warning', 'error', 'exception', 'critical', 'fatal', 'trace', 'log' ] def _find_name(self, node): """Return the fully qualified name or a Name or Attribute.""" if isinstance(node, ast.Name): return node.id elif (isinstance(node, ast.Attribute) and isinstance(node.value, (ast.Name, ast.Attribute))): method_name = node.attr obj_name = self._find_name(node.value) if obj_name is None: return None return obj_name + '.' + method_name elif isinstance(node, six.string_types): return node else: # could be Subscript, Call or many more return None def visit_Call(self, node): """Look for the 'LOG.*' calls.""" # extract the obj_name and method_name if isinstance(node.func, ast.Attribute): obj_name = self._find_name(node.func.value) if isinstance(node.func.value, ast.Name): method_name = node.func.attr elif isinstance(node.func.value, ast.Attribute): obj_name = self._find_name(node.func.value) method_name = node.func.attr else: # could be Subscript, Call or many more return super(CheckLoggingFormatArgs, self).generic_visit(node) # obj must be a logger instance and method must be a log helper if (obj_name != 'LOG' or method_name not in self.LOG_METHODS): return super(CheckLoggingFormatArgs, self).generic_visit(node) # the call must have arguments if not len(node.args): return super(CheckLoggingFormatArgs, self).generic_visit(node) # any argument should not be a tuple for arg in node.args: if isinstance(arg, ast.Tuple): self.add_error(arg) return super(CheckLoggingFormatArgs, self).generic_visit(node) class CheckForStrUnicodeExc(BaseASTChecker): """Checks for the use of str() or unicode() on an exception. This currently only handles the case where str() or unicode() is used in the scope of an exception handler. If the exception is passed into a function, returned from an assertRaises, or used on an exception created in the same scope, this does not catch it. """ CHECK_DESC = ('B314 str() and unicode() cannot be used on an ' 'exception. Remove or use six.text_type()') def __init__(self, tree, filename): super(CheckForStrUnicodeExc, self).__init__(tree, filename) self.name = [] self.already_checked = [] # Python 2 def visit_TryExcept(self, node): for handler in node.handlers: if handler.name: self.name.append(handler.name.id) super(CheckForStrUnicodeExc, self).generic_visit(node) self.name = self.name[:-1] else: super(CheckForStrUnicodeExc, self).generic_visit(node) # Python 3 def visit_ExceptHandler(self, node): if node.name: self.name.append(node.name) super(CheckForStrUnicodeExc, self).generic_visit(node) self.name = self.name[:-1] else: super(CheckForStrUnicodeExc, self).generic_visit(node) def visit_Call(self, node): if self._check_call_names(node, ['str', 'unicode']): if node not in self.already_checked: self.already_checked.append(node) if isinstance(node.args[0], ast.Name): if node.args[0].id in self.name: self.add_error(node.args[0]) super(CheckForStrUnicodeExc, self).generic_visit(node) def check_oslo_namespace_imports(logical_line, physical_line, filename): """'oslo_' should be used instead of 'oslo.' 
B317 """ if pep8.noqa(physical_line): return if re.match(oslo_namespace_imports, logical_line): msg = ("B317: '%s' must be used instead of '%s'.") % ( logical_line.replace('oslo.', 'oslo_'), logical_line) yield(0, msg) def dict_constructor_with_list_copy(logical_line): """Use a dict comprehension instead of a dict constructor B318 """ msg = ("B318: Must use a dict comprehension instead of a dict constructor" " with a sequence of key-value pairs." ) if dict_constructor_with_list_copy_re.match(logical_line): yield (0, msg) def no_xrange(logical_line): """Do not use 'xrange' B319 """ if assert_no_xrange_re.match(logical_line): yield(0, "B319: Do not use xrange().") def validate_assertTrue(logical_line): """Use 'assertTrue' instead of 'assertEqual' B312 """ if re.match(assert_True, logical_line): msg = ("B312: Unit tests should use assertTrue(value) instead" " of using assertEqual(True, value).") yield(0, msg) def validate_assertIsNone(logical_line): """Use 'assertIsNone' instead of 'assertEqual' B311 """ if re.match(assert_None, logical_line): msg = ("B311: Unit tests should use assertIsNone(value) instead" " of using assertEqual(None, value).") yield(0, msg) def no_log_warn_check(logical_line): """Disallow 'LOG.warn' B320 """ msg = ("B320: LOG.warn is deprecated, please use LOG.warning!") if re.match(no_log_warn, logical_line): yield(0, msg) def validate_assertIsNotNone(logical_line): """Use 'assertIsNotNone' B321 """ if re.match(assert_Not_Equal, logical_line) or \ re.match(assert_Is_Not, logical_line): msg = ("B321: Unit tests should use assertIsNotNone(value) instead" " of using assertNotEqual(None, value) or" " assertIsNot(None, value).") yield(0, msg) def factory(register): register(CheckForStrUnicodeExc) register(CheckLoggingFormatArgs) register(check_oslo_namespace_imports) register(dict_constructor_with_list_copy) register(no_xrange) register(validate_assertTrue) register(validate_assertIsNone) register(no_log_warn_check) register(validate_assertIsNotNone) barbican-6.0.0/barbican/cmd/0000775000175100017510000000000013245511177015614 5ustar zuulzuul00000000000000barbican-6.0.0/barbican/cmd/pkcs11_migrate_kek_signatures.py0000666000175100017510000001316613245511001024071 0ustar zuulzuul00000000000000#!/usr/bin/env python # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import argparse import base64 import json import six import traceback from oslo_db.sqlalchemy import session from sqlalchemy import orm from sqlalchemy.orm import scoping from barbican.common import utils from barbican.model import models from barbican.plugin.crypto import p11_crypto from barbican.plugin.crypto.pkcs11 import P11CryptoPluginException # Use config values from p11_crypto CONF = p11_crypto.CONF class KekSignatureMigrator(object): def __init__(self, db_connection, library_path, login, slot_id): self.dry_run = False self.db_engine = session.create_engine(db_connection) self._session_creator = scoping.scoped_session( orm.sessionmaker( bind=self.db_engine, autocommit=True ) ) self.crypto_plugin = p11_crypto.P11CryptoPlugin(CONF) self.plugin_name = utils.generate_fullname_for(self.crypto_plugin) self.pkcs11 = self.crypto_plugin.pkcs11 self.session = self.pkcs11.get_session() def recalc_kek_hmac(self, project, kek): with self.db_session.begin(): meta_dict = json.loads(kek.plugin_meta) iv = base64.b64decode(meta_dict['iv']) wrapped_key = base64.b64decode(meta_dict['wrapped_key']) hmac = base64.b64decode(meta_dict['hmac']) kek_data = iv + wrapped_key hmac_key = self.pkcs11.get_key_handle( meta_dict['hmac_label'], self.session) # Verify if hmac signature validates with new method try: self.pkcs11.verify_hmac(hmac_key, hmac, kek_data, self.session) sig_good = True except P11CryptoPluginException as e: if 'CKR_SIGNATURE_INVALID' in six.text_type(e): sig_good = False else: raise if sig_good: msg = 'Skipping KEK {}, good signature' print(msg.format(kek.kek_label)) return # Previous method failed. # Verify if hmac signature validates with old method try: self.pkcs11.verify_hmac( hmac_key, hmac, wrapped_key, self.session ) sig_bad = True except P11CryptoPluginException as e: if 'CKR_SIGNATURE_INVALID' in six.text_type(e): sig_bad = False else: raise if not sig_bad: msg = "Skipping KEK {}, can not validate with either method!" print(msg.format(kek.kek_label)) return if self.dry_run: msg = 'KEK {} needs recalculation' print(msg.format(kek.kek_label)) return # Calculate new HMAC new_hmac = self.pkcs11.compute_hmac( hmac_key, kek_data, self.session ) # Update KEK plugin_meta with new hmac signature meta_dict['hmac'] = base64.b64encode(new_hmac) kek.plugin_meta = p11_crypto.json_dumps_compact(meta_dict) def get_keks_for_project(self, project): keks = [] with self.db_session.begin() as transaction: print('Retrieving KEKs for Project {}'.format(project.id)) query = transaction.session.query(models.KEKDatum) query = query.filter_by(project_id=project.id) query = query.filter_by(plugin_name=self.plugin_name) keks = query.all() return keks def get_projects(self): print('Retrieving all available projects') projects = [] with self.db_session.begin() as transaction: projects = transaction.session.query(models.Project).all() return projects @property def db_session(self): return self._session_creator() def execute(self, dry_run=True): self.dry_run = dry_run if self.dry_run: print('-- Running in dry-run mode --') projects = self.get_projects() for project in projects: keks = self.get_keks_for_project(project) for kek in keks: try: self.recalc_kek_hmac(project, kek) except Exception: print('Error occurred! SQLAlchemy automatically rolled-' 'back the transaction') traceback.print_exc() def main(): script_desc = ( 'Utility to migrate existing project KEK signatures to include IV.' 
) parser = argparse.ArgumentParser(description=script_desc) parser.add_argument( '--dry-run', action='store_true', help='Displays changes that will be made (Non-destructive)' ) args = parser.parse_args() migrator = KekSignatureMigrator( db_connection=CONF.sql_connection, library_path=CONF.p11_crypto_plugin.library_path, login=CONF.p11_crypto_plugin.login, slot_id=CONF.p11_crypto_plugin.slot_id ) migrator.execute(args.dry_run) if __name__ == '__main__': main() barbican-6.0.0/barbican/cmd/db_manage.py0000666000175100017510000001623213245511001020053 0ustar zuulzuul00000000000000#!/usr/bin/env python # Copyright 2010-2015 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import argparse import os import sys sys.path.insert(0, os.getcwd()) from barbican.common import config from barbican.model import clean from barbican.model.migration import commands from oslo_log import log # Import and configure logging. CONF = config.CONF log.setup(CONF, 'barbican') LOG = log.getLogger(__name__) class DatabaseManager(object): """Database Manager class. Builds and executes a CLI parser to manage the Barbican database. This extends the Alembic commands. """ def __init__(self, conf): self.conf = conf self.parser = self.get_main_parser() self.subparsers = self.parser.add_subparsers( title='subcommands', description='Action to perform') self.add_revision_args() self.add_upgrade_args() self.add_history_args() self.add_current_args() self.add_clean_args() def get_main_parser(self): """Create top-level parser and arguments.""" parser = argparse.ArgumentParser(description='Barbican DB manager.') parser.add_argument('--dburl', '-d', default=self.conf.sql_connection, help='URL to the database.') return parser def add_revision_args(self): """Create 'revision' command parser and arguments.""" create_parser = self.subparsers.add_parser('revision', help='Create a ' 'new DB version file.') create_parser.add_argument('--message', '-m', default='DB change', help='the message for the DB change') create_parser.add_argument('--autogenerate', help='autogenerate from models', action='store_true') create_parser.set_defaults(func=self.revision) def add_upgrade_args(self): """Create 'upgrade' command parser and arguments.""" create_parser = self.subparsers.add_parser('upgrade', help='Upgrade the ' 'database to a ' 'future version.') create_parser.add_argument('--version', '-v', default='head', help='the version to upgrade to, or else ' 'the latest/head if not specified.') create_parser.set_defaults(func=self.upgrade) def add_history_args(self): """Create 'history' command parser and arguments.""" create_parser = self.subparsers.add_parser( 'history', help='List changeset scripts in chronological order.') create_parser.add_argument('--verbose', '-V', action="store_true", help='Show full information about the ' 'revisions.') create_parser.set_defaults(func=self.history) def add_current_args(self): """Create 'current' command parser and arguments.""" create_parser = self.subparsers.add_parser( 'current',
help='Display the current revision for a database.') create_parser.add_argument('--verbose', '-V', action="store_true", help='Show full information about the ' 'revision.') create_parser.set_defaults(func=self.current) def add_clean_args(self): """Create 'clean' command parser and arguments.""" create_parser = self.subparsers.add_parser( 'clean', help='Clean up soft deletions in the database') create_parser.add_argument( '--min-days', '-m', type=int, default=90, help='minimum number of days to keep soft deletions. default is' ' %(default)s days.') create_parser.add_argument('--clean-unassociated-projects', '-p', action="store_true", help='Remove projects that have no ' 'associated resources.') create_parser.add_argument('--soft-delete-expired-secrets', '-e', action="store_true", help='Soft delete expired secrets.') create_parser.add_argument('--verbose', '-V', action='store_true', help='Show full information about the' ' cleanup') create_parser.add_argument('--log-file', '-L', default=CONF.log_file, type=str, help='Set log file location. ' 'Default value for log_file can be ' 'found in barbican.conf') create_parser.set_defaults(func=self.clean) def revision(self, args): """Process the 'revision' Alembic command.""" commands.generate(autogenerate=args.autogenerate, message=args.message, sql_url=args.dburl) def upgrade(self, args): """Process the 'upgrade' Alembic command.""" LOG.debug("Performing database schema migration...") commands.upgrade(to_version=args.version, sql_url=args.dburl) def history(self, args): commands.history(args.verbose, sql_url=args.dburl) def current(self, args): commands.current(args.verbose, sql_url=args.dburl) def clean(self, args): clean.clean_command( sql_url=args.dburl, min_num_days=args.min_days, do_clean_unassociated_projects=args.clean_unassociated_projects, do_soft_delete_expired_secrets=args.soft_delete_expired_secrets, verbose=args.verbose, log_file=args.log_file) def execute(self): """Parse the command line arguments.""" args = self.parser.parse_args() # Perform other setup here... args.func(args) def _exception_is_successful_exit(thrown_exception): return (isinstance(thrown_exception, SystemExit) and (thrown_exception.code is None or thrown_exception.code == 0)) def main(): try: dm = DatabaseManager(CONF) dm.execute() except Exception as ex: if not _exception_is_successful_exit(ex): LOG.exception('Problem seen trying to run barbican db manage') sys.stderr.write("ERROR: {0}\n".format(ex)) sys.exit(1) if __name__ == '__main__': main() barbican-6.0.0/barbican/cmd/functionaltests/0000775000175100017510000000000013245511177021041 5ustar zuulzuul00000000000000barbican-6.0.0/barbican/cmd/functionaltests/__init__.py0000666000175100017510000000000013245511001023124 0ustar zuulzuul00000000000000barbican-6.0.0/barbican/cmd/functionaltests/.testr.conf0000666000175100017510000000050013245511001023106 0ustar zuulzuul00000000000000[DEFAULT] test_command= OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \ OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \ OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} \ ${PYTHON:-python} -m coverage run -a -m subunit.run discover -s ./cmd -t . $LISTOPT $IDOPTION test_id_option=--load-list $IDFILE test_list_option=--list barbican-6.0.0/barbican/cmd/functionaltests/test_db_manage.py0000666000175100017510000002606513245511001024344 0ustar zuulzuul00000000000000# Copyright (c) 2016 Rackspace, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import os import time from testtools import testcase from barbican.common import config as barbican_config from barbican.tests import utils from functionaltests.api import base from functionaltests.api.v1.behaviors import container_behaviors from functionaltests.api.v1.behaviors import secret_behaviors from functionaltests.api.v1.models import container_models from functionaltests.api.v1.models import secret_models from functionaltests.common import config from oslo_db.sqlalchemy import session # Import and configure logging. BCONF = barbican_config.CONF CONF = config.get_config() admin_a = CONF.rbac_users.admin_a admin_b = CONF.rbac_users.admin_b class DBManageTestCase(base.TestCase): def setUp(self): super(DBManageTestCase, self).setUp() self.sbehaviors = secret_behaviors.SecretBehaviors(self.client) self.cbehaviors = container_behaviors.ContainerBehaviors(self.client) db_url = BCONF.sql_connection time.sleep(5) # Setup session for tests to query DB engine = session.create_engine(db_url) self.conn = engine.connect() def tearDown(self): super(DBManageTestCase, self).tearDown() self.conn.close() self.sbehaviors.delete_all_created_secrets() self.cbehaviors.delete_all_created_containers() def _create_secret_list(self, user, delete=False, expiration="2050-02-28T19:14:44.180394"): secret_defaults_data = { "name": "AES key", "expiration": expiration, "algorithm": "aes", "bit_length": 256, "mode": "cbc", "payload": "gF6+lLoF3ohA9aPRpt+6bQ==", "payload_content_type": "application/octet-stream", "payload_content_encoding": "base64", } secret_list = [] for i in range(0, 5): secret_model = secret_models.SecretModel(**secret_defaults_data) resp, secret_ref = self.sbehaviors.create_secret(secret_model, user_name=user) self.assertEqual(resp.status_code, 201) self.assertIsNotNone(secret_ref) secret_list.append(secret_ref) if delete is True: self._delete_secret_list(secret_list, user) return secret_list def _create_container_uuid_list( self, user, secret_expiration="2050-02-28T19:14:44.180394", delete_secret=False, delete_container=False): secret_list = self._create_secret_list( user=user, expiration=secret_expiration ) container_data = { "name": "containername", "type": "generic", "secret_refs": [ { "name": "secret", "secret_ref": secret_list[0] } ] } container_list = [] for i in range(0, 5): container_model = container_models.ContainerModel(**container_data) post_container_resp, c_ref = self.cbehaviors.create_container( container_model, user_name=user) self.assertEqual(post_container_resp.status_code, 201) self.assertIsNotNone(c_ref) container_list.append(c_ref) if delete_container is True: self._delete_container_list(container_list, user) if delete_secret is True: self._delete_secret_list(secret_list) return container_list def _delete_secret_list(self, secret_list, user): for secret in secret_list: del_resp = self.sbehaviors.delete_secret(secret, user_name=user) self.assertEqual(del_resp.status_code, 204) def _delete_container_list(self, container_list, user): for container in container_list: del_resp = self.cbehaviors.delete_container(container, user_name=user) self.assertEqual(del_resp.status_code, 
204) def _get_uuid(self, ref): uuid = ref.split('/')[-1] return uuid @testcase.attr('positive') def test_active_secret_not_deleted(self): """Verify that active secrets are not removed""" project_a_secrets = self._create_secret_list(user=admin_a) project_b_secrets = self._create_secret_list(user=admin_b) os.system("python barbican/cmd/db_manage.py clean -m 0 -p -e") results = self.conn.execute("select * from secrets") secret_list = [] for row in results: secret_list.append(str(row[0])) for secret in project_a_secrets: secret_uuid = self._get_uuid(secret) self.assertIn(secret_uuid, secret_list) for secret in project_b_secrets: secret_uuid = self._get_uuid(secret) self.assertIn(secret_uuid, secret_list) @testcase.attr('positive') def test_soft_deleted_secrets_are_removed(self): """Test that soft deleted secrets are removed""" project_a_secrets = self._create_secret_list(user=admin_a, delete=True) project_b_secrets = self._create_secret_list(user=admin_b, delete=True) os.system("python barbican/cmd/db_manage.py clean -m 0 -p -e") results = self.conn.execute("select * from secrets") secret_list = [] for row in results: secret_list.append(str(row[0])) for secret in project_a_secrets: secret_uuid = self._get_uuid(secret) self.assertNotIn(secret_uuid, secret_list) for secret in project_b_secrets: secret_uuid = self._get_uuid(secret) self.assertNotIn(secret_uuid, secret_list) @testcase.attr('positive') def test_expired_secrets_are_not_removed_from_db(self): """Test that expired secrets are left in a soft deleted state. Currently this clean will set the threshold at the start of the test. Expired secrets will be soft deleted, and their deleted_at date will then be later than the threshold date. """ current_time = utils.create_timestamp_w_tz_and_offset(seconds=10) project_a_secrets = self._create_secret_list(user=admin_a, expiration=current_time) project_b_secrets = self._create_secret_list(user=admin_b, expiration=current_time) time.sleep(10) os.system("python barbican/cmd/db_manage.py clean -m 0 -p -e") results = self.conn.execute("select * from secrets") secret_list = [] for row in results: secret_list.append(str(row[0])) for secret in project_a_secrets: secret_uuid = self._get_uuid(secret) self.assertIn(secret_uuid, secret_list) for secret in project_b_secrets: secret_uuid = self._get_uuid(secret) self.assertIn(secret_uuid, secret_list) @testcase.attr('positive') def test_no_soft_deleted_secrets_in_db(self): """Test that no soft deleted secrets are in db""" os.system("python barbican/cmd/db_manage.py clean -m 0 -p -e") results = self.conn.execute("select * from secrets where deleted=1") secret_list = [] for row in results: secret_list.append(str(row[0])) self.assertEqual(len(secret_list), 0) @testcase.attr('positive') def test_active_containers_not_deleted(self): """Active containers are not deleted""" project_a_containers = self._create_container_uuid_list( user=admin_a) project_b_containers = self._create_container_uuid_list( user=admin_b) os.system("python barbican/cmd/db_manage.py clean -m 0 -p -e") results = self.conn.execute("select * from containers") container_list = [] for row in results: container_list.append(str(row[0])) for container in project_a_containers: container_uuid = self._get_uuid(container) self.assertIn(container_uuid, container_list) for container in project_b_containers: container_uuid = self._get_uuid(container) self.assertIn(container_uuid, container_list) @testcase.attr('positive') def test_cleanup_soft_deleted_containers(self): """Soft deleted containers are deleted"""
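# Summary of the flow below: create containers for two projects and
# soft-delete them, run the clean command with a zero-day threshold, then
# assert that none of the soft-deleted container UUIDs remain in the
# containers table.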
project_a_delete_containers = self._create_container_uuid_list( user=admin_a, delete_container=True) project_b_delete_containers = self._create_container_uuid_list( user=admin_b, delete_container=True) os.system("python barbican/cmd/db_manage.py clean -m 0 -p -e") results = self.conn.execute("select * from containers") container_list = [] for row in results: container_list.append(str(row[0])) for container in project_a_delete_containers: container_uuid = self._get_uuid(container) self.assertNotIn(container_uuid, container_list) for container in project_b_delete_containers: container_uuid = self._get_uuid(container) self.assertNotIn(container_uuid, container_list) @testcase.attr('positive') def test_containers_with_expired_secrets_are_deleted(self): """Containers with expired secrets are deleted""" current_time = utils.create_timestamp_w_tz_and_offset(seconds=10) project_a_delete_containers = self._create_container_uuid_list( user=admin_a, delete_container=True, secret_expiration=current_time) project_b_delete_containers = self._create_container_uuid_list( user=admin_b, delete_container=True, secret_expiration=current_time) time.sleep(10) os.system("python barbican/cmd/db_manage.py clean -m 0 -p -e") results = self.conn.execute("select * from containers") container_list = [] for row in results: container_list.append(str(row[0])) for container in project_a_delete_containers: container_uuid = self._get_uuid(container) self.assertNotIn(container_uuid, container_list) for container in project_b_delete_containers: container_uuid = self._get_uuid(container) self.assertNotIn(container_uuid, container_list) barbican-6.0.0/barbican/cmd/__init__.py0000666000175100017510000000000013245511001017677 0ustar zuulzuul00000000000000barbican-6.0.0/barbican/cmd/worker.py0000666000175100017510000000412713245511001017467 0ustar zuulzuul00000000000000#!/usr/bin/env python # Copyright (c) 2013-2014 Rackspace, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """ Barbican worker server. """ import eventlet import os import sys # Oslo messaging RPC server uses eventlet. eventlet.monkey_patch() # 'Borrowed' from the Glance project: # If ../barbican/__init__.py exists, add ../ to Python search path, so that # it will override what happens to be installed in /usr/(local/)lib/python... possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]), os.pardir, os.pardir)) if os.path.exists(os.path.join(possible_topdir, 'barbican', '__init__.py')): sys.path.insert(0, possible_topdir) from barbican.common import config from barbican import queue from barbican.queue import server from barbican import version from oslo_log import log from oslo_service import service def fail(returncode, e): sys.stderr.write("ERROR: {0}\n".format(e)) sys.exit(returncode) def main(): try: CONF = config.CONF CONF(sys.argv[1:], project='barbican', version=version.version_info.version_string) # Import and configure logging. 
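# (log.setup() must run after CONF has parsed sys.argv above, since
# oslo.log reads its logging options from that config object.)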
log.setup(CONF, 'barbican') LOG = log.getLogger(__name__) LOG.debug("Booting up Barbican worker node...") # Queuing initialization queue.init(CONF) service.launch( CONF, server.TaskServer(), workers=CONF.queue.asynchronous_workers ).wait() except RuntimeError as e: fail(1, e) if __name__ == '__main__': main() barbican-6.0.0/barbican/cmd/barbican_manage.py0000666000175100017510000003312513245511001021227 0ustar zuulzuul00000000000000#!/usr/bin/env python # Copyright 2010-2015 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """ CLI interface for barbican management """ from __future__ import print_function import argparse import sys from oslo_config import cfg from oslo_log import log as logging from barbican.cmd import pkcs11_kek_rewrap as pkcs11_rewrap from barbican.common import config from barbican.model import clean from barbican.model.migration import commands from barbican.model import sync from barbican.plugin.crypto import pkcs11 import barbican.version CONF = cfg.CONF LOG = logging.getLogger(__name__) # Decorators for actions def args(*args, **kwargs): def _decorator(func): func.__dict__.setdefault('args', []).insert(0, (args, kwargs)) return func return _decorator class DbCommands(object): """Class for managing barbican database""" description = "Subcommands for managing barbican database" clean_description = "Clean up soft deletions in the database" @args('--db-url', '-d', metavar='', dest='dburl', help='barbican database URL') @args('--min-days', '-m', metavar='', dest='min_days', type=int, default=90, help='minimum number of days to keep soft deletions. ' 'default is %(default)s days.') @args('--verbose', '-V', action='store_true', dest='verbose', default=False, help='Show verbose information about the clean up.') @args('--log-file', '-L', metavar='', type=str, default=None, dest='log_file', help='Set log file location. 
' 'Default value for log_file can be found in barbican.conf') @args('--clean-unassociated-projects', '-p', action='store_true', dest='do_clean_unassociated_projects', default=False, help='Remove projects that have no ' 'associated resources.') @args('--soft-delete-expired-secrets', '-e', action='store_true', dest='do_soft_delete_expired_secrets', default=False, help='Soft delete secrets that are expired.') def clean(self, dburl=None, min_days=None, verbose=None, log_file=None, do_clean_unassociated_projects=None, do_soft_delete_expired_secrets=None): """Clean soft deletions in the database""" if dburl is None: dburl = CONF.sql_connection if log_file is None: log_file = CONF.log_file clean.clean_command( sql_url=dburl, min_num_days=min_days, do_clean_unassociated_projects=do_clean_unassociated_projects, do_soft_delete_expired_secrets=do_soft_delete_expired_secrets, verbose=verbose, log_file=log_file) revision_description = "Create a new database version file" @args('--db-url', '-d', metavar='', dest='dburl', help='barbican database URL') @args('--message', '-m', metavar='', default='DB change', help='the message for the DB change') @args('--autogenerate', action="store_true", dest='autogen', default=False, help='autogenerate from models') def revision(self, dburl=None, message=None, autogen=None): """Process the 'revision' Alembic command.""" if dburl is None: commands.generate(autogenerate=autogen, message=str(message), sql_url=CONF.sql_connection) else: commands.generate(autogenerate=autogen, message=str(message), sql_url=str(dburl)) upgrade_description = "Upgrade to a future database version" @args('--db-url', '-d', metavar='', dest='dburl', help='barbican database URL') @args('--version', '-v', metavar='', default='head', help='the version to upgrade to, or else ' 'the latest/head if not specified.') def upgrade(self, dburl=None, version=None): """Process the 'upgrade' Alembic command.""" if dburl is None: commands.upgrade(to_version=str(version), sql_url=CONF.sql_connection) else: commands.upgrade(to_version=str(version), sql_url=str(dburl)) history_description = "Show database changeset history" @args('--db-url', '-d', metavar='', dest='dburl', help='barbican database URL') @args('--verbose', '-V', action='store_true', dest='verbose', default=False, help='Show full information about the revisions.') def history(self, dburl=None, verbose=None): if dburl is None: commands.history(verbose, sql_url=CONF.sql_connection) else: commands.history(verbose, sql_url=str(dburl)) current_description = "Show current revision of database" @args('--db-url', '-d', metavar='', dest='dburl', help='barbican database URL') @args('--verbose', '-V', action='store_true', dest='verbose', default=False, help='Show full information about the revisions.') def current(self, dburl=None, verbose=None): if dburl is None: commands.current(verbose, sql_url=CONF.sql_connection) else: commands.current(verbose, sql_url=str(dburl)) sync_secret_stores_description = "Sync secret_stores with barbican.conf" @args('--db-url', '-d', metavar='', dest='dburl', help='barbican database URL') @args('--verbose', '-V', action='store_true', dest='verbose', default=False, help='Show verbose information about the sync.') @args('--log-file', '-L', metavar='', type=str, default=None, dest='log_file', help='Set log file location.
' 'Default value for log_file can be found in barbican.conf') def sync_secret_stores(self, dburl=None, verbose=None, log_file=None): """Sync secret_stores table with barbican.conf""" if dburl is None: dburl = CONF.sql_connection if log_file is None: log_file = CONF.log_file sync.sync_secret_stores( sql_url=dburl, verbose=verbose, log_file=log_file) class HSMCommands(object): """Class for managing HSM/pkcs11 plugin""" description = "Subcommands for managing HSM/PKCS11" gen_mkek_description = "Generates a new MKEK" @args('--library-path', metavar='', dest='libpath', default='/usr/lib/libCryptoki2_64.so', help='Path to vendor PKCS11 library') @args('--slot-id', metavar='', dest='slotid', default=1, help='HSM Slot id (Should correspond to a configured PKCS11 slot, \ default is 1)') @args('--passphrase', metavar='', default=None, required=True, help='Password to login to PKCS11 session') @args('--label', '-L', metavar='