networking-ovn-4.0.0/ 0000775 0001751 0001751 00000000000 13245511554 014533 5 ustar zuul zuul 0000000 0000000 networking-ovn-4.0.0/releasenotes/ 0000775 0001751 0001751 00000000000 13245511554 017224 5 ustar zuul zuul 0000000 0000000 networking-ovn-4.0.0/releasenotes/notes/ 0000775 0001751 0001751 00000000000 13245511554 020354 5 ustar zuul zuul 0000000 0000000 networking-ovn-4.0.0/releasenotes/notes/maintenance-thread-ee65c1ad317204c7.yaml 0000666 0001751 0001751 00000000336 13245511145 027175 0 ustar zuul zuul 0000000 0000000 ---
features:
- |
Added a new mechanism that periodically detects and fixes
inconsistencies between resources in the Neutron and OVN databases.
upgrade:
- |
Adds a new dependency on the Oslo Futurist library.
networking-ovn-4.0.0/releasenotes/notes/ovn-native-nat-9bbc92f16edcf2f5.yaml 0000666 0001751 0001751 00000000436 13245511145 026553 0 ustar zuul zuul 0000000 0000000 ---
features:
- |
OVN native L3 implementation.
The native implementation supports distributed routing for east-west
traffic and centralized routing for north-south (floating IP and SNAT)
traffic. Also supported is the Neutron L3 Configurable external gateway
mode. networking-ovn-4.0.0/releasenotes/notes/ovsdb_connection-cef6b02c403163a3.yaml 0000666 0001751 0001751 00000000231 13245511145 026763 0 ustar zuul zuul 0000000 0000000 ---
deprecations:
- The ``ovn`` group ``ovsdb_connection`` configuration option was
deprecated in the ``Newton`` release and has now been removed.
networking-ovn-4.0.0/releasenotes/notes/networking-ovn-0df373f5a7b22d19.yaml 0000666 0001751 0001751 00000004360 13245511145 026443 0 ustar zuul zuul 0000000 0000000 ---
features:
- |
Initial release of the OpenStack Networking service (neutron)
integration with Open Virtual Network (OVN), a component of the
`Open vSwitch `_ project. OVN provides
the following features either via native implementation or
conventional agents:
* Layer-2 (native OVN implementation)
* Layer-3 (native OVN implementation or conventional layer-3 agent)
The native OVN implementation supports distributed routing. However,
it currently lacks support for floating IP addresses, NAT, and the
metadata proxy.
* DHCP (native OVN implementation or conventional DHCP agent)
The native implementation supports distributed DHCP. However,
it currently lacks support for IPv6, internal DNS, and metadata
proxy.
* Metadata (conventional metadata agent)
* DPDK - Usable with OVS via either the Linux kernel datapath
or the DPDK datapath.
* Trunk driver - Driver to back neutron's 'trunk' service plugin
The initial release also supports the following Networking service
API extensions:
* ``agent``
* ``Address Scopes`` \*
* ``Allowed Address Pairs``
* ``Auto Allocated Topology Services``
* ``Availability Zone``
* ``Default Subnetpools``
* ``DHCP Agent Scheduler`` \*\*
* ``Distributed Virtual Router`` \*
* ``DNS Integration`` \*
* ``HA Router extension`` \*
* ``L3 Agent Scheduler`` \*
* ``Multi Provider Network``
* ``Network Availability Zone`` \*\*
* ``Network IP Availability``
* ``Neutron external network``
* ``Neutron Extra DHCP opts``
* ``Neutron Extra Route``
* ``Neutron L3 Configurable external gateway mode`` \*
* ``Neutron L3 Router``
* ``Network MTU``
* ``Port Binding``
* ``Port Security``
* ``Provider Network``
* ``Quality of Service``
* ``Quota management support``
* ``RBAC Policies``
* ``Resource revision numbers``
* ``Router Availability Zone`` \*
* ``security-group``
* ``standard-attr-description``
* ``Subnet Allocation``
* ``Tag support``
* ``Time Stamp Fields``
(\*) Only applicable if using the conventional layer-3 agent.
(\*\*) Only applicable if using the conventional DHCP agent.
networking-ovn-4.0.0/releasenotes/notes/ovn_dhcpv6-729158d634aa280e.yaml 0000666 0001751 0001751 00000000360 13245511145 025370 0 ustar zuul zuul 0000000 0000000 ---
features:
- |
OVN native DHCPv6 implementation.
The native implementation supports distributed DHCPv6. It supports
Neutron IPv6 subnets whose "ipv6_address_mode" attribute is None,
"dhcpv6_stateless", or "dhcpv6_stateful".
networking-ovn-4.0.0/releasenotes/notes/bug-1606458-b9f809b3914bb203.yaml 0000666 0001751 0001751 00000000463 13245511145 025004 0 ustar zuul zuul 0000000 0000000 ---
deprecations:
- The ``ovn`` group ``vif_type`` configuration option is deprecated and
will be removed in the next release. The port VIF type is now determined
based on the OVN chassis information when the port is bound to a host.
[Bug `1606458 `_]
networking-ovn-4.0.0/releasenotes/notes/SRIOV-port-binding-support-bug-1515005.yaml 0000666 0001751 0001751 00000001002 13245511145 027626 0 ustar zuul zuul 0000000 0000000 ---
prelude: >
Support for binding an SR-IOV port in a networking-ovn
deployment.
features:
- The networking-ovn ML2 mechanism driver now supports binding
  of direct (SR-IOV) ports. The Traffic Control (TC) hardware
  offload framework for SR-IOV VFs was introduced in Linux
  kernel 4.8. Open vSwitch (OVS) 2.8 supports offloading OVS
  datapath rules using the TC framework. By using OVS version
  2.8 and Linux kernel >= 4.8, an SR-IOV VF can be controlled
  via the OpenFlow control plane.
networking-ovn-4.0.0/releasenotes/notes/distributed-fip-0f5915ef9fd00626.yaml 0000666 0001751 0001751 00000000351 13245511145 026476 0 ustar zuul zuul 0000000 0000000 ---
prelude: >
Support distributed floating IP.
features:
- |
Distributed floating IP is now supported. A new configuration option,
``enable_distributed_floating_ip``, has been added to the ``ovn`` group
to control the feature.
networking-ovn-4.0.0/releasenotes/notes/.placeholder 0000666 0001751 0001751 00000000000 13245511145 022623 0 ustar zuul zuul 0000000 0000000 networking-ovn-4.0.0/releasenotes/notes/ovsdb-ssl-support-213ff378777cf946.yaml 0000666 0001751 0001751 00000000343 13245511145 026766 0 ustar zuul zuul 0000000 0000000 ---
prelude: >
networking-ovn now supports the use of SSL for its
OVSDB connections to the OVN databases.
features:
- networking-ovn now supports the use of SSL for its
OVSDB connections to the OVN databases.
networking-ovn-4.0.0/releasenotes/notes/internal_dns_support-83737015a1019222.yaml 0000666 0001751 0001751 00000000161 13245511145 027331 0 ustar zuul zuul 0000000 0000000 ---
features:
- |
Native OVN DNS support is used if the "dns" extension is loaded and
"dns_domain" is defined.
networking-ovn-4.0.0/releasenotes/source/ 0000775 0001751 0001751 00000000000 13245511554 020524 5 ustar zuul zuul 0000000 0000000 networking-ovn-4.0.0/releasenotes/source/newton.rst 0000666 0001751 0001751 00000000232 13245511164 022564 0 ustar zuul zuul 0000000 0000000 ===================================
Newton Series Release Notes
===================================
.. release-notes::
:branch: origin/stable/newton
networking-ovn-4.0.0/releasenotes/source/_static/ 0000775 0001751 0001751 00000000000 13245511554 022152 5 ustar zuul zuul 0000000 0000000 networking-ovn-4.0.0/releasenotes/source/_static/.placeholder 0000666 0001751 0001751 00000000000 13245511145 024421 0 ustar zuul zuul 0000000 0000000 networking-ovn-4.0.0/releasenotes/source/pike.rst 0000666 0001751 0001751 00000000217 13245511145 022204 0 ustar zuul zuul 0000000 0000000 ===================================
Pike Series Release Notes
===================================
.. release-notes::
:branch: stable/pike
networking-ovn-4.0.0/releasenotes/source/conf.py 0000666 0001751 0001751 00000021647 13245511145 022033 0 ustar zuul zuul 0000000 0000000 # -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# OVN Release Notes documentation build configuration file, created by
# sphinx-quickstart on Tue Nov 3 17:40:50 2015.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
# sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'openstackdocstheme',
'reno.sphinxext',
]
# openstackdocstheme options
repository_name = 'openstack/networking-ovn'
bug_project = 'networking-ovn'
bug_tag = ''
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Networking OVN Release Notes'
copyright = u'2015, Networking OVN Developers'
# Release notes are version independent.
# The full version, including alpha/beta/rc tags.
release = ''
# The short X.Y version.
version = ''
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# " v documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
html_last_updated_fmt = '%Y-%m-%d %H:%M'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
# If false, no index is generated.
# html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'NetworkingOVNReleaseNotesdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
# 'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'NetworkingOVNReleaseNotes.tex',
u'Networking OVN Release Notes Documentation',
u'Networking OVN Developers', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'networkingovnreleasenotes',
u'Networking OVN Release Notes Documentation',
[u'Networking OVN Developers'], 1)
]
# If true, show URL addresses after external links.
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'NetworkingOVNReleaseNotes',
u'Networking OVN Release Notes Documentation',
u'Networking OVN Developers', 'NetworkingOVNReleaseNotes',
'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False
# -- Options for Internationalization output ------------------------------
locale_dirs = ['locale/']
networking-ovn-4.0.0/releasenotes/source/locale/ 0000775 0001751 0001751 00000000000 13245511554 021763 5 ustar zuul zuul 0000000 0000000 networking-ovn-4.0.0/releasenotes/source/locale/en_GB/ 0000775 0001751 0001751 00000000000 13245511554 022735 5 ustar zuul zuul 0000000 0000000 networking-ovn-4.0.0/releasenotes/source/locale/en_GB/LC_MESSAGES/ 0000775 0001751 0001751 00000000000 13245511554 024522 5 ustar zuul zuul 0000000 0000000 networking-ovn-4.0.0/releasenotes/source/locale/en_GB/LC_MESSAGES/releasenotes.po 0000666 0001751 0001751 00000022302 13245511145 027550 0 ustar zuul zuul 0000000 0000000 # Andi Chandler , 2017. #zanata
msgid ""
msgstr ""
"Project-Id-Version: Networking OVN Release Notes\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2017-12-14 16:58+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2017-12-12 09:16+0000\n"
"Last-Translator: Andi Chandler \n"
"Language-Team: English (United Kingdom)\n"
"Language: en-GB\n"
"X-Generator: Zanata 3.9.6\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
msgid "(\\*) Only applicable if using the conventional layer-3 agent."
msgstr "(\\*) Only applicable if using the conventional layer-3 agent."
msgid "(\\*\\*) Only applicable if using the conventional DHCP agent."
msgstr "(\\*\\*) Only applicable if using the conventional DHCP agent."
msgid "1.0.0"
msgstr "1.0.0"
msgid "2.0.0"
msgstr "2.0.0"
msgid "3.0.0"
msgstr "3.0.0"
msgid "4.0.0.0b2"
msgstr "4.0.0.0b2"
msgid "Current Series Release Notes"
msgstr "Current Series Release Notes"
msgid ""
"DHCP (native OVN implementation or conventional DHCP agent) The native "
"implementation supports distributed DHCP. However, it currently lacks "
"support for IPv6, internal DNS, and metadata proxy."
msgstr ""
"DHCP (native OVN implementation or conventional DHCP agent) The native "
"implementation supports distributed DHCP. However, it currently lacks "
"support for IPv6, internal DNS, and metadata proxy."
msgid ""
"DPDK - Usable with OVS via either the Linux kernel datapath or the DPDK "
"datapath."
msgstr ""
"DPDK - Usable with OVS via either the Linux kernel datapath or the DPDK "
"datapath."
msgid "Deprecation Notes"
msgstr "Deprecation Notes"
msgid ""
"Initial release of the OpenStack Networking service (neutron) integration "
"with Open Virtual Network (OVN), a component of the the `Open vSwitch "
"`_ project. OVN provides the following features "
"either via native implementation or conventional agents:"
msgstr ""
"Initial release of the OpenStack Networking service (neutron) integration "
"with Open Virtual Network (OVN), a component of the the `Open vSwitch "
"`_ project. OVN provides the following features "
"either via native implementation or conventional agents:"
msgid "Layer-2 (native OVN implementation)"
msgstr "Layer-2 (native OVN implementation)"
msgid ""
"Layer-3 (native OVN implementation or conventional layer-3 agent) The native "
"OVN implementation supports distributed routing. However, it currently lacks "
"support for floating IP addresses, NAT, and the metadata proxy."
msgstr ""
"Layer-3 (native OVN implementation or conventional layer-3 agent) The native "
"OVN implementation supports distributed routing. However, it currently lacks "
"support for floating IP addresses, NAT, and the metadata proxy."
msgid "Metadata (conventional metadata agent)"
msgstr "Metadata (conventional metadata agent)"
msgid "Networking OVN Release Notes"
msgstr "Networking OVN Release Notes"
msgid "New Features"
msgstr "New Features"
msgid "Newton Series Release Notes"
msgstr "Newton Series Release Notes"
msgid ""
"Now distributed floating IP is supported and a new configuration option "
"``enable_distributed_floating_ip`` is added to ovn group to control the "
"feature."
msgstr ""
"Now distributed Floating IP is supported and a new configuration option "
"``enable_distributed_floating_ip`` is added to ovn group to control the "
"feature."
msgid ""
"OVN native DHCPv6 implementation. The native implementation supports "
"distributed DHCPv6. Support Neutron IPv6 subnet whose \"ipv6_address_mode\" "
"attribute is None, \"dhcpv6_stateless\", or \"dhcpv6_stateful\"."
msgstr ""
"OVN native DHCPv6 implementation. The native implementation supports "
"distributed DHCPv6. Support Neutron IPv6 subnet whose \"ipv6_address_mode\" "
"attribute is None, \"dhcpv6_stateless\", or \"dhcpv6_stateful\"."
msgid ""
"OVN native L3 implementation. The native implementation supports distributed "
"routing for east-west traffic and centralized routing for north-south "
"(floatingip and snat) traffic. Also supported is the Neutron L3 Configurable "
"external gateway mode."
msgstr ""
"OVN native L3 implementation. The native implementation supports distributed "
"routing for east-west traffic and centralised routing for north-south "
"(floatingip and snat) traffic. Also supported is the Neutron L3 Configurable "
"external gateway mode."
msgid "Ocata Series Release Notes"
msgstr "Ocata Series Release Notes"
msgid "Pike Series Release Notes"
msgstr "Pike Series Release Notes"
msgid "Prelude"
msgstr "Prelude"
msgid "Support distributed floating IP."
msgstr "Support distributed Floating IP."
msgid ""
"The ``ovn`` group ``ovsdb_connection`` configuration option was deprecated "
"in the ``Newton`` release and has now been removed."
msgstr ""
"The ``ovn`` group ``ovsdb_connection`` configuration option was deprecated "
"in the ``Newton`` release and has now been removed."
msgid ""
"The ``ovn`` group ``vif_type`` configuration option is deprecated and will "
"be removed in the next release. The port VIF type is now determined based on "
"the OVN chassis information when the port is bound to a host. [Bug `1606458 "
"`_]"
msgstr ""
"The ``ovn`` group ``vif_type`` configuration option is deprecated and will "
"be removed in the next release. The port VIF type is now determined based on "
"the OVN chassis information when the port is bound to a host. [Bug `1606458 "
"`_]"
msgid ""
"The initial release also supports the following Networking service API "
"extensions:"
msgstr ""
"The initial release also supports the following Networking service API "
"extensions:"
msgid "Trunk driver - Driver to back the neutron's 'trunk' service plugin"
msgstr "Trunk driver - Driver to back the neutron's 'trunk' service plugin"
msgid "``Address Scopes`` \\*"
msgstr "``Address Scopes`` \\*"
msgid "``Allowed Address Pairs``"
msgstr "``Allowed Address Pairs``"
msgid "``Auto Allocated Topology Services``"
msgstr "``Auto Allocated Topology Services``"
msgid "``Availability Zone``"
msgstr "``Availability Zone``"
msgid "``DHCP Agent Scheduler`` \\*\\*"
msgstr "``DHCP Agent Scheduler`` \\*\\*"
msgid "``DNS Integration`` \\*"
msgstr "``DNS Integration`` \\*"
msgid "``Default Subnetpools``"
msgstr "``Default Subnetpools``"
msgid "``Distributed Virtual Router`` \\*"
msgstr "``Distributed Virtual Router`` \\*"
msgid "``HA Router extension`` \\*"
msgstr "``HA Router extension`` \\*"
msgid "``L3 Agent Scheduler`` \\*"
msgstr "``L3 Agent Scheduler`` \\*"
msgid "``Multi Provider Network``"
msgstr "``Multi Provider Network``"
msgid "``Network Availability Zone`` \\*\\*"
msgstr "``Network Availability Zone`` \\*\\*"
msgid "``Network IP Availability``"
msgstr "``Network IP Availability``"
msgid "``Network MTU``"
msgstr "``Network MTU``"
msgid "``Neutron Extra DHCP opts``"
msgstr "``Neutron Extra DHCP opts``"
msgid "``Neutron Extra Route``"
msgstr "``Neutron Extra Route``"
msgid "``Neutron L3 Configurable external gateway mode`` \\*"
msgstr "``Neutron L3 Configurable external gateway mode`` \\*"
msgid "``Neutron L3 Router``"
msgstr "``Neutron L3 Router``"
msgid "``Neutron external network``"
msgstr "``Neutron external network``"
msgid "``Port Binding``"
msgstr "``Port Binding``"
msgid "``Port Security``"
msgstr "``Port Security``"
msgid "``Provider Network``"
msgstr "``Provider Network``"
msgid "``Quality of Service``"
msgstr "``Quality of Service``"
msgid "``Quota management support``"
msgstr "``Quota management support``"
msgid "``RBAC Policies``"
msgstr "``RBAC Policies``"
msgid "``Resource revision numbers``"
msgstr "``Resource revision numbers``"
msgid "``Router Availability Zone`` \\*"
msgstr "``Router Availability Zone`` \\*"
msgid "``Subnet Allocation``"
msgstr "``Subnet Allocation``"
msgid "``Tag support``"
msgstr "``Tag support``"
msgid "``Time Stamp Fields``"
msgstr "``Time Stamp Fields``"
msgid "``agent``"
msgstr "``agent``"
msgid "``security-group``"
msgstr "``security-group``"
msgid "``standard-attr-description``"
msgstr "``standard-attr-description``"
msgid ""
"networking-ovn ML2 mechanism driver now supports binding of direct(SR-IOV) "
"ports. Traffic Control(TC) hardware offload framework for SR-IOV VFs was "
"introduced in Linux kernel 4.8. Open vSwitch(OVS) 2.8 supports offloading "
"OVS datapath rules using the TC framework. By using OVS version 2.8 and "
"Linux kernel >= 4.8, a SR-IOV VF can be controlled via Openflow control "
"plane."
msgstr ""
"networking-ovn ML2 mechanism driver now supports binding of direct(SR-IOV) "
"ports. Traffic Control(TC) hardware offload framework for SR-IOV VFs was "
"introduced in Linux kernel 4.8. Open vSwitch(OVS) 2.8 supports offloading "
"OVS datapath rules using the TC framework. By using OVS version 2.8 and "
"Linux kernel >= 4.8, a SR-IOV VF can be controlled via Openflow control "
"plane."
msgid ""
"networking-ovn now supports the use of SSL for its OVSDB connections to the "
"OVN databases."
msgstr ""
"networking-ovn now supports the use of SSL for its OVSDB connections to the "
"OVN databases."
msgid "support for binding a SR-IOV port in a networking-ovn deployment."
msgstr "support for binding a SR-IOV port in a networking-ovn deployment."
networking-ovn-4.0.0/releasenotes/source/unreleased.rst 0000666 0001751 0001751 00000000156 13245511145 023405 0 ustar zuul zuul 0000000 0000000 =============================
Current Series Release Notes
=============================
.. release-notes::
networking-ovn-4.0.0/releasenotes/source/index.rst 0000666 0001751 0001751 00000000244 13245511145 022363 0 ustar zuul zuul 0000000 0000000 ==============================
Networking OVN Release Notes
==============================
.. toctree::
:maxdepth: 1
unreleased
pike
ocata
newton
networking-ovn-4.0.0/releasenotes/source/ocata.rst 0000666 0001751 0001751 00000000264 13245511164 022346 0 ustar zuul zuul 0000000 0000000 ===================================
Ocata Series Release Notes
===================================
.. release-notes::
:branch: origin/stable/ocata
:earliest-version: 2.0.0
networking-ovn-4.0.0/releasenotes/source/_templates/ 0000775 0001751 0001751 00000000000 13245511554 022661 5 ustar zuul zuul 0000000 0000000 networking-ovn-4.0.0/releasenotes/source/_templates/.placeholder 0000666 0001751 0001751 00000000000 13245511145 025130 0 ustar zuul zuul 0000000 0000000 networking-ovn-4.0.0/doc/ 0000775 0001751 0001751 00000000000 13245511554 015300 5 ustar zuul zuul 0000000 0000000 networking-ovn-4.0.0/doc/source/ 0000775 0001751 0001751 00000000000 13245511554 016600 5 ustar zuul zuul 0000000 0000000 networking-ovn-4.0.0/doc/source/configuration/ 0000775 0001751 0001751 00000000000 13245511554 021447 5 ustar zuul zuul 0000000 0000000 networking-ovn-4.0.0/doc/source/configuration/networking_ovn_metadata_agent.rst 0000666 0001751 0001751 00000000306 13245511145 030265 0 ustar zuul zuul 0000000 0000000 =================================
networking_ovn_metadata_agent.ini
=================================
.. show-options::
:config-file: etc/oslo-config-generator/networking_ovn_metadata_agent.ini
networking-ovn-4.0.0/doc/source/configuration/ml2_conf.rst 0000666 0001751 0001751 00000000162 13245511145 023675 0 ustar zuul zuul 0000000 0000000 ============
ml2_conf.ini
============
.. show-options::
:config-file: etc/oslo-config-generator/ml2_conf.ini
networking-ovn-4.0.0/doc/source/configuration/index.rst 0000666 0001751 0001751 00000001136 13245511145 023307 0 ustar zuul zuul 0000000 0000000 =====================
Configuration Options
=====================
This section provides a list of all possible options for each
configuration file.
Configuration Reference
-----------------------
networking-ovn uses the following configuration files for its various services.
.. toctree::
:glob:
:maxdepth: 1
*
Sample Configuration Files
--------------------------
The following are sample configuration files for networking-ovn.
These are generated from code and reflect the current state of code
in the networking-ovn repository.
.. toctree::
:glob:
:maxdepth: 1
samples/*
networking-ovn-4.0.0/doc/source/configuration/samples/ 0000775 0001751 0001751 00000000000 13245511554 023113 5 ustar zuul zuul 0000000 0000000 networking-ovn-4.0.0/doc/source/configuration/samples/networking_ovn_metadata_agent.rst 0000666 0001751 0001751 00000000547 13245511145 031740 0 ustar zuul zuul 0000000 0000000 ========================================
Sample networking_ovn_metadata_agent.ini
========================================
This sample configuration can also be viewed in `the raw format
<../../_static/config_samples/networking_ovn_metadata_agent.conf.sample>`_.
.. literalinclude::
../../_static/config_samples/networking_ovn_metadata_agent.conf.sample
networking-ovn-4.0.0/doc/source/configuration/samples/ml2_conf.rst 0000666 0001751 0001751 00000000376 13245511145 025350 0 ustar zuul zuul 0000000 0000000 ===================
Sample ml2_conf.ini
===================
This sample configuration can also be viewed in `the raw format
<../../_static/config_samples/ml2_conf.conf.sample>`_.
.. literalinclude::
../../_static/config_samples/ml2_conf.conf.sample
networking-ovn-4.0.0/doc/source/_static/ 0000775 0001751 0001751 00000000000 13245511554 020226 5 ustar zuul zuul 0000000 0000000 networking-ovn-4.0.0/doc/source/_static/.placeholder 0000666 0001751 0001751 00000000000 13245511145 022475 0 ustar zuul zuul 0000000 0000000 networking-ovn-4.0.0/doc/source/conf.py 0000666 0001751 0001751 00000006446 13245511145 020107 0 ustar zuul zuul 0000000 0000000 # -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
#'sphinx.ext.intersphinx',
'openstackdocstheme',
'oslo_config.sphinxext',
'oslo_config.sphinxconfiggen',
]
# openstackdocstheme options
repository_name = 'openstack/networking-ovn'
bug_project = 'networking-ovn'
bug_tag = ''
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'networking-ovn'
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
html_static_path = ['_static']
html_theme = 'openstackdocs'
html_last_updated_fmt = '%Y-%m-%d %H:%M'
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
('index',
'%s.tex' % project,
u'%s Documentation' % project,
u'OpenStack Foundation', 'manual'),
]
# Example configuration for intersphinx: refer to the Python standard library.
#intersphinx_mapping = {'http://docs.python.org/': None}
# -- Options for oslo_config.sphinxconfiggen ---------------------------------
_config_generator_config_files = [
'ml2_conf.ini',
'networking_ovn_metadata_agent.ini',
]
def _get_config_generator_config_definition(config_file):
    config_file_path = '../../etc/oslo-config-generator/%s' % config_file
    # oslo_config.sphinxconfiggen appends '.conf.sample' to the filename,
    # strip the file extension (.conf or .ini).
    output_file_path = ('_static/config_samples/%s'
                        % config_file.rsplit('.', 1)[0])
    return (config_file_path, output_file_path)


config_generator_config_file = [
    _get_config_generator_config_definition(conf)
    for conf in _config_generator_config_files
]
networking-ovn-4.0.0/doc/source/contributor/ 0000775 0001751 0001751 00000000000 13245511554 021152 5 ustar zuul zuul 0000000 0000000 networking-ovn-4.0.0/doc/source/contributor/design/ 0000775 0001751 0001751 00000000000 13245511554 022423 5 ustar zuul zuul 0000000 0000000 networking-ovn-4.0.0/doc/source/contributor/design/native_dhcp.rst 0000666 0001751 0001751 00000004155 13245511145 025444 0 ustar zuul zuul 0000000 0000000 Using the native DHCP feature provided by OVN
=============================================
DHCPv4
------
OVN implements native DHCPv4 support, which caters to the common use case of
providing an IP address to a booting instance by providing stateless replies to
DHCPv4 requests based on statically configured address mappings. To do this it
allows a short list of DHCPv4 options to be configured and applied at each
compute host running ovn-controller.
The OVN northbound DB provides a 'DHCP_Options' table to store the DHCP
options. Each logical switch port has a reference to this table.
When a subnet is created and enable_dhcp is True, a new entry is created in
this table. The 'options' column stores the DHCPv4 options. These DHCPv4
options are included in the DHCPv4 reply by the ovn-controller when the VIF
attached to the logical switch port sends a DHCPv4 request.
In order to map the DHCP_Options row with the subnet, the OVN ML2 driver
stores the subnet id in the 'external_ids' column.
When a new port is created, the 'dhcpv4_options' column of the logical switch
port refers to the DHCP_Options row created for the subnet of the port.
If the port has multiple IPv4 subnets, then the first subnet in the 'fixed_ips'
is used.
If the port has extra DHCPv4 options defined, then a new entry is created
in the DHCP_Options table for the port. The default DHCP options are obtained
from the subnet DHCP_Options table and overridden with the extra DHCPv4
options of the port. In order to map the port DHCP_Options row with the port,
the OVN ML2 driver stores both the subnet id and port id in the 'external_ids'
column.
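To make the mapping concrete, below is a minimal sketch (not the actual
OVN ML2 driver code) of how the subnet-level and port-level DHCP_Options
entries described above could be composed; the helper names and the
specific options shown are illustrative assumptions::

    # Illustrative sketch only; helper names and option values are
    # assumptions, not the actual OVN ML2 driver implementation.

    def build_subnet_dhcp_options(subnet, server_mac):
        """Compose the DHCP_Options row created for a DHCPv4 subnet."""
        options = {
            'server_id': subnet['gateway_ip'],   # DHCP server identifier
            'server_mac': server_mac,            # MAC used in DHCPv4 replies
            'router': subnet['gateway_ip'],      # default gateway option
            'lease_time': '43200',               # example lease time (seconds)
        }
        return {
            'cidr': subnet['cidr'],
            'options': options,
            # Maps this OVN row back to the Neutron subnet.
            'external_ids': {'subnet_id': subnet['id']},
        }

    def build_port_dhcp_options(subnet_dhcp_row, port, extra_opts):
        """Compose a port-specific row: subnet defaults overridden by the
        port's extra DHCP options."""
        options = dict(subnet_dhcp_row['options'])
        options.update(extra_opts)
        return {
            'cidr': subnet_dhcp_row['cidr'],
            'options': options,
            # Both ids are stored so the row can be mapped back to the port.
            'external_ids': {
                'subnet_id': subnet_dhcp_row['external_ids']['subnet_id'],
                'port_id': port['id'],
            },
        }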
If an admin wants to disable native OVN DHCPv4 for a particular port, the
admin needs to set the 'dhcp_disabled' extra DHCP option to 'true'.
Ex. neutron port-update \
--extra-dhcp-opt ip_version=4,opt_name=dhcp_disabled,opt_value=true
DHCPv6
------
OVN implements a native DHCPv6 support similar to DHCPv4. When a v6 subnet is
created, the OVN ML2 driver will insert a new entry into the DHCP_Options table
only when the subnet 'ipv6_address_mode' is not 'slaac', and enable_dhcp is
True.
networking-ovn-4.0.0/doc/source/contributor/design/database_consistency.rst 0000666 0001751 0001751 00000044251 13245511164 027347 0 ustar zuul zuul 0000000 0000000 ================================
Neutron/OVN Database consistency
================================
This document presents the problem and proposes a solution for the data
consistency issue between the Neutron and OVN databases. Although the
focus of this document is OVN this problem is common enough to be present
in other ML2 drivers (e.g OpenDayLight, BigSwitch, etc...). Some of them
already contain a mechanism in place for dealing with it.
Problem description
===================
In a common Neutron deployment model there can be multiple Neutron
API workers processing requests. For each request, the worker will update
the Neutron database and then invoke the ML2 driver to translate the
information to that specific SDN data model.
There are at least two situations that could lead to some inconsistency
between the Neutron and the SDN databases, for example:
.. _problem_1:
Problem 1: Neutron API workers race condition
---------------------------------------------
.. code-block:: python
   In Neutron:

       with neutron_db_transaction:
           update_neutron_db()
           ml2_driver.update_port_precommit()
       ml2_driver.update_port_postcommit()

   In the ML2 driver:

       def update_port_postcommit():
           port = neutron_db.get_port()
           update_port_in_ovn(port)
Imagine the case where a port is being updated twice and each request
is being handled by a different API worker. The method responsible for
updating the resource in OVN (``update_port_postcommit``) is not
atomic and is invoked outside of the Neutron database transaction. This could
lead to a problem where the order in which the updates are committed to
the Neutron database is different from the order in which they are committed
to the OVN database, resulting in an inconsistency.
This problem has been reported at `bug #1605089
`_.
.. _problem_2:
Problem 2: Backend failures
---------------------------
Another situation is when the changes are already committed in Neutron
but an exception is raised upon trying to update the OVN database (e.g.
lost connectivity to the ``ovsdb-server``). We currently don't have a
good way of handling this problem. Obviously, it would be possible to try
to immediately roll back the changes in the Neutron database and raise an
exception, but that rollback itself is an operation that could also fail.
Plus, rollbacks are not very straightforward when it comes to updates
or deletes. In a case where a VM is being torn down and OVN fails to
delete a port, re-creating that port in Neutron doesn't necessarily fix the
problem. The decommissioning of a VM involves many other things; in fact, we
could make things even worse by leaving some dirty data around. I believe
this is a problem that would be better dealt with by other methods.
Proposed change
===============
In order to fix the problems presented in the `Problem description`_
section, this document proposes a solution based on Neutron's
``revision_number`` attribute. In summary, for every resource in Neutron
there's an attribute called ``revision_number`` which gets incremented
on each update made on that resource. For example::
$ openstack port create --network nettest porttest
...
| revision_number | 2 |
...
$ openstack port set porttest --mac-address 11:22:33:44:55:66
$ mysql -e "use neutron; select standard_attr_id from ports where id=\"91c08021-ded3-4c5a-8d57-5b5c389f8e39\";"
+------------------+
| standard_attr_id |
+------------------+
| 1427 |
+------------------+
$ mysql -e "use neutron; SELECT revision_number FROM standardattributes WHERE id=1427;"
+-----------------+
| revision_number |
+-----------------+
| 3 |
+-----------------+
This document proposes a solution that will use the `revision_number`
attribute for three things:
#. Perform a compare-and-swap operation based on the resource version
#. Guarantee the order of the updates (`Problem 1 `_)
#. Detecting when resources in Neutron and OVN are out-of-sync
But, before any of the points above can be done, we need to change the
networking-ovn code to:
#1 - Store the revision_number referent to a change in OVNDB
------------------------------------------------------------
To be able to compare the version of the resource in Neutron against
the version in OVN we first need to know which version the OVN resource
is present at.
Fortunately, each table in the OVNDB contains a special column called
``external_ids`` which external systems (like Neutron/networking-ovn)
can use to store information about its own resources that corresponds
to the entries in OVNDB.
So, every time a resource is created or updated in OVNDB by
networking-ovn, the Neutron ``revision_number`` associated with that change
will be stored in the ``external_ids`` column of that resource. That
will allow networking-ovn to look at both databases and detect whether
the version in OVN is up-to-date with Neutron or not.
#2 - Ensure correctness when updating OVN
-----------------------------------------
As stated in `Problem 1 `_, simultaneous updates to a single
resource will race and, with the current code, the order in which these
updates are applied is not guaranteed to be the correct order. That
means that, if two or more updates arrive, we can't prevent an older
version of that update from being applied after a newer one.
This document proposes creating a special ``OVSDB command`` that runs
as part of the same transaction that is updating a resource in OVNDB to
prevent changes with a lower ``revision_number`` from being applied when
the resource in OVN is already at a higher ``revision_number``.
This new OVSDB command needs to basically do two things:
1. Add a verify operation to the ``external_ids`` column in OVNDB so
that if another client modifies that column mid-operation the transaction
will be restarted.
A better explanation of what "verify" does is described at the doc string
of the `Transaction class`_ in the OVS code itself, I quote:
Because OVSDB handles multiple clients, it can happen that between
the time that OVSDB client A reads a column and writes a new value,
OVSDB client B has written that column. Client A's write should not
ordinarily overwrite client B's, especially if the column in question
is a "map" column that contains several more or less independent data
items. If client A adds a "verify" operation before it writes the
column, then the transaction fails in case client B modifies it first.
Client A will then see the new value of the column and compose a new
transaction based on the new contents written by client B.
2. Compare the ``revision_number`` from the update against what is
presently stored in OVNDB. If the version in OVNDB is already higher
than the version in the update, abort the transaction.
So basically this new command is responsible for guarding the OVN resource
by not allowing old changes to be applied on top of new ones. Here's a
scenario where two concurrent updates arrive in the wrong order and how
the solution above will deal with it:
Neutron worker 1 (NW-1): Updates a port with address A (revision_number: 2)
Neutron worker 2 (NW-2): Updates a port with address B (revision_number: 3)
TXN 1: NW-2 transaction is committed first and the OVN resource now has RN 3
TXN 2: NW-1 transaction detects the change in the external_ids column and
is restarted
TXN 2: NW-1 the new command now sees that the OVN resource is at RN 3,
which is higher than the update version (RN 2) and aborts the transaction.
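The following is a rough sketch of what such a command could look like,
assuming ovsdbapp's ``BaseCommand`` interface; the class name, the
``neutron:revision_number`` key and the exception used to abort the
transaction are illustrative, not the final implementation.

.. code-block:: python

   # Rough sketch assuming ovsdbapp's command interface; names are
   # illustrative only.
   from ovsdbapp.backend.ovs_idl import command


   class RevisionConflict(RuntimeError):
       """Raised to abort the transaction when an older update arrives late."""


   class CheckRevisionNumberCommand(command.BaseCommand):
       def __init__(self, api, table, record, resource):
           super(CheckRevisionNumberCommand, self).__init__(api)
           self.table = table
           self.record = record
           self.resource = resource

       def run_idl(self, txn):
           row = self.api.lookup(self.table, self.record)
           # If another client writes external_ids between our read and our
           # write, the whole transaction is restarted.
           row.verify('external_ids')

           ovn_revision = int(
               row.external_ids.get('neutron:revision_number', -1))
           neutron_revision = self.resource['revision_number']
           if ovn_revision > neutron_revision:
               # OVN already has a newer version of this resource; abort.
               raise RevisionConflict(
                   'OVN is at revision %s, refusing to apply revision %s' % (
                       ovn_revision, neutron_revision))

           external_ids = dict(row.external_ids)
           external_ids['neutron:revision_number'] = str(neutron_revision)
           row.external_ids = external_ids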
There's a bit more needed for the above to work with the current
networking-ovn code; basically, we need to tidy up the code to do two more
things.
1. Consolidate changes to a resource in a single transaction.
This is important regardless of this spec; having all changes to a
resource done in a single transaction minimizes the risk of having
half-changes written to the database in case of an eventual problem. This
`should be done already `_
but it's important to have it here in case we find more examples like
that as we code.
2. When doing partial updates, use the OVNDB as the source of comparison
to create the deltas.
Being able to do a partial update in a resource is important for
performance reasons; it's a way to minimize the number of changes that
will be performed in the database.
Right now, some of the update() methods in networking-ovn create the
deltas using the *current* and *original* parameters that are passed to
it. The *current* parameter is, as the name says, the current version
of the object present in the Neutron DB. The *original* parameter is
the previous version (current - 1) of that object.
The problem with creating the deltas by comparing these two objects is
that only the data in the Neutron DB is used. We need to stop
using the *original* object and instead create the
delta based on the *current* version in the Neutron DB against the data
stored in the OVNDB, to be able to detect the real differences between
the two databases.
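As an illustration only (the column and field names below are simplified
assumptions), a partial update computed against the OVN row could look
like this:

.. code-block:: python

   # Simplified illustration; real Logical_Switch_Port columns differ.
   def port_update_delta(current_port, ovn_lsp):
       """Build a delta using the OVN row, not the 'original' object."""
       delta = {}
       if ovn_lsp.external_ids.get('neutron:port_name') != current_port['name']:
           delta['external_ids'] = {'neutron:port_name': current_port['name']}
       if ovn_lsp.enabled != current_port['admin_state_up']:
           delta['enabled'] = current_port['admin_state_up']
       return delta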
So in summary, to guarantee the correctness of the updates this document
proposes to:
#. Create a new OVSDB command that is responsible for comparing revision
numbers and aborting the transaction, when needed.
#. Consolidate changes to a resource in a single transaction (should be
done already)
#. When doing partial updates, create the deltas based on the current
version in the Neutron DB and the OVNDB.
#3 - Detect and fix out-of-sync resources
-----------------------------------------
When things are working as expected the above changes should ensure
that the Neutron DB and OVNDB are in sync, but what happens when things go
bad? As per `Problem 2 `_, things like temporarily losing
connectivity with the OVNDB could cause changes to fail to be committed
and the databases to get out of sync. We need to be able to detect the
resources that were affected by these failures and fix them.
We already have the means to do it: similar to what the
`ovn_db_sync.py`_ script does, we could fetch all the data from both
databases and compare each resource. But, depending on the size of the
deployment, this can be really slow and costly.
This document proposes an optimization for this problem to make it
efficient enough so that we can run it periodically (as a periodic task)
and not manually as a script anymore.
First, we need to create an additional table in the Neutron database
that would serve as a cache for the revision numbers in **OVNDB**.
The new table schema could look like this:
================ ======== ===========
Column name Type Description
================ ======== ===========
standard_attr_id Integer Primary key. The reference ID from the
standardattributes table in Neutron for
that resource. ONDELETE SET NULL.
resource_uuid String The UUID of the resource
resource_type String The type of the resource (e.g, Port, Router, ...)
revision_number Integer The version of the object present in OVN
acquired_at DateTime The time that the entry was created. For
troubleshooting purposes
updated_at DateTime The time that the entry was updated. For
troubleshooting purposes
================ ======== ===========
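For illustration purposes, a SQLAlchemy model for such a table could look
like the sketch below. The table name and the use of ``resource_uuid`` as
the primary key are assumptions made for the example (``standard_attr_id``
has to stay nullable for the ``ONDELETE SET NULL`` behaviour to work).

.. code-block:: python

   # Illustrative model only; not the final migration.
   import sqlalchemy as sa
   from sqlalchemy.ext.declarative import declarative_base

   Base = declarative_base()


   class OVNRevisionNumbers(Base):
       __tablename__ = 'ovn_revision_numbers'

       # Reference to standardattributes; set to NULL when the Neutron
       # resource is deleted so orphaned OVN entries can be detected.
       standard_attr_id = sa.Column(
           sa.BigInteger,
           sa.ForeignKey('standardattributes.id', ondelete='SET NULL'),
           nullable=True)
       resource_uuid = sa.Column(sa.String(36), primary_key=True)
       resource_type = sa.Column(sa.String(36), primary_key=True)
       # Version of the object present in OVN; -1 acts as the placeholder
       # meaning "not created in OVN yet".
       revision_number = sa.Column(sa.BigInteger, nullable=False, default=0)
       acquired_at = sa.Column(sa.DateTime, default=sa.func.now())
       updated_at = sa.Column(sa.DateTime, onupdate=sa.func.now())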
For the different actions: Create, update and delete; this table will be
used as:
1. Create:
In the create_*_precommit() method, we will create an entry in the new
table within the same Neutron transaction. The revision_number column
for the new entry will have a placeholder value until the resource is
successfully created in OVNDB.
In case we fail to create the resource in OVN (but succeed in Neutron)
we still have the entry logged in the new table and this problem can
be detected by fetching all resources where the revision_number column
value is equal to the placeholder value.
The pseudo-code will look something like this:
.. code-block:: python
   def create_port_precommit(ctx, port):
       create_initial_revision(port['id'], revision_number=-1,
                               session=ctx.session)

   def create_port_postcommit(ctx, port):
       create_port_in_ovn(port)
       bump_revision(port['id'], revision_number=port['revision_number'])
2. Update:
For updates it's simpler: we need to bump the revision number for
that resource **after** the OVN transaction is committed in the
update_*_postcommit() method. That way, if an update fails to be applied
to OVN the inconsistencies can be detected by a JOIN between the new
table and the ``standardattributes`` table where the revision_number
columns do not match.
The pseudo-code will look something like this:
.. code-block:: python
   def update_port_postcommit(ctx, port):
       update_port_in_ovn(port)
       bump_revision(port['id'], revision_number=port['revision_number'])
3. Delete:
The ``standard_attr_id`` column in the new table has a foreign key
constraint with ``ONDELETE=SET NULL`` set. That means that, upon
Neutron deleting a resource, the ``standard_attr_id`` column in the new
table will be set to *NULL*.
If deleting a resource succeeds in Neutron but fails in OVN, the
inconsistency can be detected by looking at all resources that have a
``standard_attr_id`` equal to NULL.
The pseudo-code will look something like this:
.. code-block:: python
   def delete_port_postcommit(ctx, port):
       delete_port_in_ovn(port)
       delete_revision(port['id'])
With the above optimization it's possible to create a periodic task that
can run quite frequently to detect and fix the inconsistencies caused
by random backend failures.
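For illustration, assuming the cache table is named ``ovn_revision_numbers``
as in the sketch above, all three kinds of inconsistency could be detected
by the periodic task with a single query along these lines:

.. code-block:: python

   # Illustrative only: table/column names follow the sketch above and the
   # connection URL is an example.
   from sqlalchemy import create_engine, text

   engine = create_engine('mysql+pymysql://neutron:password@localhost/neutron')

   FIND_INCONSISTENT = text("""
       SELECT r.resource_uuid, r.resource_type, r.revision_number
       FROM ovn_revision_numbers r
       LEFT JOIN standardattributes s ON r.standard_attr_id = s.id
       WHERE s.id IS NULL                            -- deleted in Neutron
          OR r.revision_number = -1                  -- never created in OVN
          OR r.revision_number != s.revision_number  -- update not applied
   """)

   with engine.connect() as conn:
       for row in conn.execute(FIND_INCONSISTENT):
           # Each row is a candidate for the sync/fix logic.
           print(row.resource_uuid, row.resource_type, row.revision_number)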
.. note::
There's no lock linking both database updates in the postcommit()
methods. So, it's true that the method bumping the revision_number
column in the new table in Neutron DB could still race, but that
should be fine because this table acts like a cache and the real
revision_number has been written in OVNDB.
The mechanism that will detect and fix the out-of-sync resources should
detect this inconsistency as well and, based on the revision_number
in OVNDB, decide whether to sync the resource or only bump the
revision_number in the cache table (in case the resource is already
at the right version).
References
==========
* There's a chain of patches with a proof of concept for this approach,
they start at: https://review.openstack.org/#/c/517049/
Alternatives
============
Journaling
----------
An alternative solution to this problem is *journaling*. The basic
idea is to create another table in the Neutron database and log every
operation (create, update and delete) instead of passing it directly to
the SDN controller.
A separate thread (or multiple instances of it) is then responsible
for reading this table and applying the operations to the SDN backend.
This approach has been used and validated
by drivers such as `networking-odl
`_.
An attempt to implement this approach
in *networking-ovn* can be found `here
`_.
Some things to keep in mind about this approach:
* The code can get quite complex as this approach is not only about
applying the changes to the SDN backend asynchronously. The dependencies
between each resource as well as their operations also need to be
computed. For example, before attempting to create a router port the
router that this port belongs to needs to be created. Or, before
attempting to delete a network all the dependent resources on it
(subnets, ports, etc.) need to be processed first.
* The number of journal threads running can cause problems. In my tests
I had three controllers, each one with 24 CPU cores (Intel Xeon E5-2620
with hyperthreading enabled) and 64GB RAM. Running 1 journal thread
per Neutron API worker has caused ``ovsdb-server`` to misbehave
when under heavy pressure [1]_. Running multiple journal threads
seems to be causing other types of problems `in other drivers as well
`_.
* When under heavy pressure [1]_, I noticed that the journal
threads could come to a halt (or slow down considerably) while the
API workers were handling a lot of requests. This resulted in some
operations taking more than a minute to be processed. This behaviour
can be seen `in this screenshot `_.
.. TODO find a better place to host that image
* Given that the 1 journal thread per Neutron API worker approach
is problematic, determining the right number of journal threads is
also difficult. In my tests, I've noticed that 3 journal threads
per controller worked better, but that number was purely based on
``trial & error``. In production this number should probably be
calculated based on the environment; perhaps something like `TripleO
`_ (or any upper layer) would be in a better
position to make that decision.
* At least temporarily, the data in the Neutron database is duplicated
between the normal tables and the journal one.
* Some operations like creating a new
resource via Neutron's API will return `HTTP 201
`_,
which indicates that the resource has been created and is ready to
be used, but as these resources are created asynchronously one could
argue that the HTTP codes are now misleading. As a note, the resource
will be created at the Neutron database by the time the HTTP request
returns but it may not be present in the SDN backend yet.
Given all considerations, this approach is still valid and the fact
that it's already been used by other ML2 drivers makes it more open for
collaboration and code sharing.
.. _`Transaction class`: https://github.com/openvswitch/ovs/blob/3728b3b0316b44d1f9181be115b63ea85ff5883c/python/ovs/db/idl.py#L1014-L1055
.. _`ovn_db_sync.py`: https://github.com/openstack/networking-ovn/blob/a9af75cd3ce6cd6685b6435b325c97cacc83ce0e/networking_ovn/ovn_db_sync.py
.. rubric:: Footnotes
.. [1] I ran the tests using `Browbeat
`_ which basically orchestrates
`OpenStack Rally `_ and monitors the
machine's resource usage.
networking-ovn-4.0.0/doc/source/contributor/design/data_model.rst 0000666 0001751 0001751 00000016077 13245511145 025257 0 ustar zuul zuul 0000000 0000000 Mapping between Neutron and OVN data models
========================================================
The primary job of the Neutron OVN ML2 driver is to translate requests for
resources into OVN's data model. Resources are created in OVN by updating the
appropriate tables in the OVN northbound database (an ovsdb database). This
document looks at the mappings between the data that exists in Neutron and what
the resulting entries in the OVN northbound DB would look like.
Network
----------
::
Neutron Network:
id
name
subnets
admin_state_up
status
tenant_id
Once a network is created, we should create an entry in the Logical Switch
table.
::
OVN northbound DB Logical Switch:
external_ids: {
'neutron:network_name': network.name
}
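As a hedged illustration (the ``neutron-<network-id>`` switch naming and the
connection URL are assumptions for this example), creating that entry through
ovsdbapp's OVN northbound API could look like this::

    # Illustration only; not the driver's actual code path.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_northbound import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6641',
                                          'OVN_Northbound')
    api = impl_idl.OvnNbApiIdlImpl(connection.Connection(idl, timeout=10))

    network = {'id': '91c08021-ded3-4c5a-8d57-5b5c389f8e39', 'name': 'nettest'}
    with api.transaction(check_error=True) as txn:
        txn.add(api.ls_add(
            'neutron-%s' % network['id'],
            external_ids={'neutron:network_name': network['name']}))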
Subnet
---------
::
Neutron Subnet:
id
name
ip_version
network_id
cidr
gateway_ip
allocation_pools
dns_nameservers
host_routers
tenant_id
enable_dhcp
ipv6_ra_mode
ipv6_address_mode
Once a subnet is created, we should create an entry in the DHCP Options table
with the DHCPv4 or DHCPv6 options.
::
OVN northbound DB DHCP_Options:
cidr
options
external_ids: {
'subnet_id': subnet.id
}
Port
-------
::
Neutron Port:
id
name
network_id
admin_state_up
mac_address
fixed_ips
device_id
device_owner
tenant_id
status
When a port is created, we should create an entry in the Logical Switch Ports
table in the OVN northbound DB.
::
OVN Northbound DB Logical Switch Port:
switch: reference to OVN Logical Switch
router_port: (empty)
name: port.id
up: (read-only)
macs: [port.mac_address]
port_security:
external_ids: {'neutron:port_name': port.name}
If the port has extra DHCP options defined, we should create an entry
in the DHCP Options table in the OVN northbound DB.
::
OVN northbound DB DHCP_Options:
cidr
options
external_ids: {
'subnet_id': subnet.id,
'port_id': port.id
}
Router
----------
::
Neutron Router:
id
name
admin_state_up
status
tenant_id
external_gw_info:
network_id
external_fixed_ips: list of dicts
ip_address
subnet_id
...
::
OVN Northbound DB Logical Router:
ip:
default_gw:
external_ids:
Router Port
--------------
...
::
OVN Northbound DB Logical Router Port:
router: (reference to Logical Router)
network: (reference to network this port is connected to)
mac:
external_ids:
Security Groups
----------------
::
Neutron Port:
id
security_group: id
network_id
Neutron Security Group
id
name
tenant_id
security_group_rules
Neutron Security Group Rule
id
tenant_id
security_group_id
direction
remote_group_id
ethertype
protocol
port_range_min
port_range_max
remote_ip_prefix
...
::
OVN Northbound DB ACL Rule:
lswitch: (reference to Logical Switch - port.network_id)
priority: (0..65535)
match: boolean expressions according to security rule
Translation map (sg_rule ==> match expression)
-----------------------------------------------
sg_rule.direction="Ingress" => "inport=port.id"
sg_rule.direction="Egress" => "outport=port.id"
sg_rule.ethertype => "eth.type"
sg_rule.protocol => "ip.proto"
sg_rule.port_range_min/port_range_max =>
"port_range_min <= tcp.src <= port_range_max"
"port_range_min <= udp.src <= port_range_max"
sg_rule.remote_ip_prefix => "ip4.src/mask, ip4.dst/mask, ipv6.src/mask, ipv6.dst/mask"
(all match options for ACL can be found here:
http://openvswitch.org/support/dist-docs/ovn-nb.5.html)
action: "allow-related"
log: true/false
external_ids: {'neutron:port_id': port.id}
{'neutron:security_rule_id': security_rule.id}
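As a concrete sketch of one such ACL (the switch and port names are
placeholders, and the rule shown simply allows TCP port 22 traffic towards a
port), it could be created by hand with::

    $ ovn-nbctl acl-add neutron-NETWORK_UUID to-lport 1002 \
        'outport == "NEUTRON_PORT_UUID" && ip4 && tcp.dst == 22' allow-related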
Security groups map three Neutron objects to one OVN-NB object, which
enables us to do the mapping in various ways, depending on OVN capabilities.
The current implementation uses the first option in this list for
simplicity, but all options are kept here for future reference.
1) For every pair, define an ACL entry::
Leads to many ACL entries.
acl.match = sg_rule converted
example: ((inport==port.id) && (ip.proto == "tcp") &&
(1024 <= tcp.src <= 4095) && (ip.src==192.168.0.1/16))
external_ids: {'neutron:port_id': port.id}
{'neutron:security_rule_id': security_rule.id}
2) For every pair, define an ACL entry::
Reduces the number of ACL entries.
Means we have to manage the match field whenever a specific rule changes.
example: (((inport==port.id) && (ip.proto == "tcp") &&
(1024 <= tcp.src <= 4095) && (ip.src==192.168.0.1/16)) ||
((outport==port.id) && (ip.proto == "udp") && (1024 <= tcp.src <= 4095)) ||
((inport==port.id) && (ip.proto == 6) ) ||
((inport==port.id) && (eth.type == 0x86dd)))
(This example is a security group with four security rules)
external_ids: {'neutron:port_id': port.id}
{'neutron:security_group_id': security_group.id}
3) For every pair, define an ACL entry::
Reduces the number of ACL entries even further.
Management complexity increases.
example: (((inport==port.id) && (ip.proto == "tcp") && (1024 <= tcp.src <= 4095)
&& (ip.src==192.168.0.1/16)) ||
((outport==port.id) && (ip.proto == "udp") && (1024 <= tcp.src <= 4095)) ||
((inport==port.id) && (ip.proto == 6) ) ||
((inport==port.id) && (eth.type == 0x86dd))) ||
(((inport==port2.id) && (ip.proto == "tcp") && (1024 <= tcp.src <= 4095)
&& (ip.src==192.168.0.1/16)) ||
((outport==port2.id) && (ip.proto == "udp") && (1024 <= tcp.src <= 4095)) ||
((inport==port2.id) && (ip.proto == 6) ) ||
((inport==port2.id) && (eth.type == 0x86dd)))
external_ids: {'neutron:security_group': security_group.id}
Which option to pick depends on OVN match field length capabilities, and on
the trade-off between the better performance of fewer ACL entries and the
complexity of managing them.
If the default behaviour is not "drop" for unmatched entries, a rule with
lowest priority must be added to drop all traffic ("match==1")
Spoofing protection rules are added by OVN internally, and Neutron needs to
ignore these automatically added rules.
networking-ovn-4.0.0/doc/source/contributor/design/metadata_api.rst 0000666 0001751 0001751 00000036630 13245511164 025575 0 ustar zuul zuul 0000000 0000000 OpenStack Metadata API and OVN
==============================
Introduction
------------
OpenStack Nova presents a metadata API to VMs similar to what is available on
Amazon EC2. Neutron is involved in this process because the source IP address
is not enough to uniquely identify the source of a metadata request since
networks can have overlapping IP addresses. Neutron is responsible for
intercepting metadata API requests and adding HTTP headers which uniquely
identify the source of the request before forwarding it to the metadata API
server.
The purpose of this document is to propose a design for how to enable this
functionality when OVN is used as the backend for OpenStack Neutron.
Neutron and Metadata Today
--------------------------
The following blog post describes how VMs access the metadata API through
Neutron today.
https://www.suse.com/communities/blog/vms-get-access-metadata-neutron/
In summary, we run a metadata proxy in either the router namespace or DHCP
namespace. The DHCP namespace can be used when there’s no router connected to
the network. The one downside to the DHCP namespace approach is that it
requires pushing a static route to the VM through DHCP so that it knows to
route metadata requests to the DHCP server IP address.
* The instance sends an HTTP request for metadata to 169.254.169.254
* This request either hits the router or DHCP namespace depending on the route
in the instance
* The metadata proxy service in the namespace adds the following info to the
request:
* Instance IP (X-Forwarded-For header)
* Router or Network-ID (X-Neutron-Network-Id or X-Neutron-Router-Id header)
* The metadata proxy service sends this request to the metadata agent (outside
the namespace) via a UNIX domain socket.
* The neutron-metadata-agent service forwards the request to the Nova metadata
API service by adding some new headers (instance ID and Tenant ID) to the
request [0].
For proper operation, Neutron and Nova must be configured to communicate
together with a shared secret. Neutron uses this secret to sign the Instance-ID
header of the metadata request to prevent spoofing. This secret is configured
through metadata_proxy_shared_secret on both nova and neutron configuration
files (optional).
[0] https://github.com/openstack/neutron/blob/master/neutron/agent/metadata/agent.py#L167
Neutron and Metadata with OVN
-----------------------------
The current metadata API approach does not translate directly to OVN. There
are no Neutron agents in use with OVN. Further, OVN makes no use of its own
network namespaces that we could take advantage of like the original
implementation makes use of the router and dhcp namespaces.
We must use a modified approach that fits the OVN model. This section details
a proposed approach.
Overview of Proposed Approach
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The proposed approach would be similar to the *isolated network* case in the
current ML2+OVS implementation. Therefore, we would be running a metadata
proxy (haproxy) instance on every hypervisor for each network a VM on that
host is connected to.
The downside of this approach is that we'll be running more metadata proxies
than we're doing now in case of routed networks (one per virtual router) but
since haproxy is very lightweight and they will be idling most of the time,
it shouldn't be a big issue overall. However, the major benefit of this
approach is that we don't have to implement any scheduling logic to distribute
metadata proxies across the nodes, nor any HA logic. This, however, can be
evolved in the future as explained below in this document.
Also, this approach relies on a new feature in OVN that we must implement
first so that an OVN port can be present on *every* chassis (similar to
*localnet* ports). This new type of logical port would be *localport* and we
will never forward packets over a tunnel for these ports. We would only send
packets to the local instance of a *localport*.
**Step 1** - Create a port for the metadata proxy
When using the DHCP agent today, Neutron automatically creates a port for the
DHCP agent to use. We could do the same thing for use with the metadata proxy
(haproxy). We'll create an OVN *localport* which will be present on every
chassis and this port will have the same MAC/IP address on every host.
Eventually, we can share the same neutron port for both DHCP and metadata.
**Step 2** - Routing metadata API requests to the correct Neutron port
This works similarly to the current approach.
We would program OVN to include a static route in DHCP responses that routes
metadata API requests to the *localport* that is hosting the metadata API
proxy.
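As an illustration (a sketch; the DHCP_Options UUID and the addresses are
assumptions), such a route could be expressed through OVN's
``classless_static_route`` DHCP option::

    $ ovn-nbctl dhcp-options-set-options DHCP_OPTIONS_UUID \
        classless_static_route='{169.254.169.254/32,METADATA_PORT_IP, 0.0.0.0/0,ROUTER_IP}'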
Also, in case DHCP isn't enabled or the client ignores the route info, we
will program a static route in the OVN logical router which will still get
metadata requests directed to the right place.
If the DHCP route does not work and the network is isolated, VMs won't get
metadata, but this already happens with the current implementation so this
approach doesn't introduce a regression.
**Step 3** - Management of the namespaces and haproxy instances
We propose a new agent in networking-ovn called ``neutron-ovn-metadata-agent``.
We will run this agent on every hypervisor and it will be responsible for
spawning the haproxy instances and managing the OVS interfaces, network
namespaces and haproxy processes used to proxy metadata API requests.
**Step 4** - Metadata API request processing
Similar to the existing neutron metadata agent, ``neutron-ovn-metadata-agent``
must act as an intermediary between haproxy and the Nova metadata API service.
``neutron-ovn-metadata-agent`` is the process that will have access to the
host networks where the Nova metadata API exists. Each haproxy will be in a
network namespace not able to reach the appropriate host network. Haproxy
will add the necessary headers to the metadata API request and then forward it
to ``neutron-ovn-metadata-agent`` over a UNIX domain socket, which matches the
behavior of the current metadata agent.
Metadata Proxy Management Logic
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In neutron-ovn-metadata-agent.
* On startup:
* Do a full sync. Ensure we have all the required metadata proxies running.
For that, the agent would watch the ``Port_Binding`` table of the OVN
Southbound database and look for all rows with the ``chassis`` column set
to the host the agent is running on. For all those entries, make sure a
metadata proxy instance is spawned for every ``datapath`` (Neutron
network) those ports are attached to. The agent will keep record of the
list of networks it currently has proxies running on by updating the
``external-ids`` key ``neutron-metadata-proxy-networks`` of the OVN
``Chassis`` record in the OVN Southbound database that corresponds to this
host. As an example, this key would look like
``neutron-metadata-proxy-networks=NET1_UUID,NET4_UUID`` meaning that this
chassis is hosting one or more VMs connected to networks 1 and 4, so we
should have a metadata proxy instance running for each. Ensure any running
metadata proxies no longer needed are torn down.
* Open and maintain a connection to the OVN Northbound database (using the
ovsdbapp library). On first connection, and anytime a reconnect happens:
* Do a full sync.
* Register a callback for creates/updates/deletes to Logical_Switch_Port rows
to detect when metadata proxies should be started or torn down.
``neutron-ovn-metadata-agent`` will watch OVN Southbound database
(``Port_Binding`` table) to detect when a port gets bound to its chassis. At
that point, the agent will make sure that there's a metadata proxy
attached to the OVN *localport* for the network which this port is connected
to.
* When a new network is created, we must create an OVN *localport* for use
as a metadata proxy.
* When a network is deleted, we must tear down the metadata proxy instance (if
present) on the host and delete the corresponding OVN *localport*.
Launching a metadata proxy includes:
* Creating a network namespace::
$ sudo ip netns add NAMESPACE
* Creating a VETH pair (OVS upgrades that upgrade the kernel module will make
internal ports go away and then be brought back by OVS scripts. This may cause
some disruption. Therefore, veth pairs are preferred over internal ports)::
$ sudo ip link add VETH0 type veth peer name VETH1
* Creating an OVS interface and placing one end in that namespace::
$ sudo ovs-vsctl add-port br-int VETH0
$ sudo ip link set VETH1 netns NAMESPACE
* Setting the IP and MAC addresses on that interface::
$ sudo ip netns exec NAMESPACE \
> ip link set VETH1 address MAC_ADDRESS
$ sudo ip netns exec NAMESPACE \
> ip addr add IP_ADDRESS/PREFIX_LEN dev VETH1
* Bringing the VETH pair up::
$ sudo ip netns exec NAMESPACE ip link set VETH1 up
$ sudo ip link set VETH0 up
* Set ``external-ids:iface-id=NEUTRON_PORT_UUID`` on the OVS interface so that
OVN is able to correlate this new OVS interface with the correct OVN logical
port::
$ sudo ovs-vsctl set Interface VETH0 external_ids:iface-id=NEUTRON_PORT_UUID
* Starting haproxy in this network namespace.
* Add the network UUID to ``external-ids:neutron-metadata-proxy-networks`` on
the Chassis table for our chassis in OVN Southbound database.
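A quick way to verify this bookkeeping (a sketch; ``CHASSIS_NAME`` is a
placeholder for the local chassis record) is::

    $ ovn-sbctl get Chassis CHASSIS_NAME external_ids:neutron-metadata-proxy-networks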
Tearing down a metadata proxy includes:
* Removing the network UUID from our chassis.
* Stopping haproxy.
* Deleting the OVS interface.
* Deleting the network namespace.
**Other considerations**
This feature will be enabled by default in ``networking-ovn``, but there
should be a way to disable it so that operators who don't need metadata don't
have to deal with the complexity of it (haproxy instances, network namespaces,
etcetera). In this case, the agent would not create the neutron ports needed
for metadata.
There could be a race condition when the first VM for a certain network boots
on a hypervisor if it does so before the metadata proxy instance has been
spawned.
Right now, the ``vif-plugged`` event to Nova is sent out when the up column
in the OVN Northbound database's Logical_Switch_Port table changes to True,
indicating that the VIF is now up. To overcome this race condition we want
to wait until all network UUIDs to which this VM is connected are present
in ``external-ids:neutron-metadata-proxy-networks`` on the Chassis table
for our chassis in OVN Southbound database. This will delay the event to Nova
until the metadata proxy instance is up and running on the host ensuring the
VM will be able to get the metadata on boot.
Alternatives Considered
-----------------------
Alternative 1: Build metadata support into ovn-controller
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We’ve been building some features useful to OpenStack directly into OVN. DHCP
and DNS are key examples of things we’ve replaced by building them into
ovn-controller. The metadata API case has some key differences that make this
a less attractive solution:
The metadata API is an OpenStack specific feature. DHCP and DNS by contrast
are more clearly useful outside of OpenStack. Building metadata API proxy
support into ovn-controller means embedding an HTTP and TCP stack into
ovn-controller. This is a significant degree of undesired complexity.
This option has been ruled out for these reasons.
Alternative 2: Distributed metadata and High Availability
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In this approach, we would spawn a metadata proxy per virtual router or per
network (if isolated), thus reducing the number of metadata proxy instances
running in the cloud. However, scheduling and HA have to be considered. Also,
we wouldn't need the OVN *localport* implementation.
``neutron-ovn-metadata-agent`` would run on any host that we wish to be able
to host metadata API proxies. These hosts must also be running ovn-controller.
Each of these hosts will have a Chassis record in the OVN southbound database
created by ovn-controller. The Chassis table has a column called
``external_ids`` which can be used for general metadata however we see fit.
``neutron-ovn-metadata-agent`` will update its corresponding Chassis record
with an external-id of ``neutron-metadata-proxy-host=true`` to indicate that
this OVN chassis is one capable of hosting metadata proxy instances.
Once we have a way to determine hosts capable of hosting metadata API proxies,
we can add logic to the networking-ovn ML2 driver that schedules metadata API
proxies. This would be triggered by Neutron API requests.
The output of the scheduling process would be setting an ``external_ids`` key
on a Logical_Switch_Port in the OVN northbound database that corresponds with
a metadata proxy. The key could be something like
``neutron-metadata-proxy-chassis=CHASSIS_HOSTNAME``.
``neutron-ovn-metadata-agent`` on each host would also be watching for updates
to these Logical_Switch_Port rows. When it detects that a metadata proxy has
been scheduled locally, it will kick off the process to spawn the local
haproxy instance and get it plugged into OVN.
HA must also be considered. We must know when a host goes down so that all
metadata proxies scheduled to that host can be rescheduled. This is almost
the exact same problem we have with L3 HA. When a host goes down, we need to
trigger rescheduling gateways to other hosts. We should ensure that the
approach used for rescheduling L3 gateways can be utilized for rescheduling
metadata proxies, as well.
In neutron-server (networking-ovn).
Introduce a new networking-ovn configuration option:
* ``[ovn] isolated_metadata=[True|False]``
Events that trigger scheduling a new metadata proxy:
* If isolated_metadata is True
* When a new network is created, we must create an OVN logical port for use
as a metadata proxy and then schedule this to one of the
``neutron-ovn-metadata-agent`` instances.
* If isolated_metadata is False
* When a network is attached to or removed from a logical router, ensure
that at least one of the networks has a metadata proxy port already
created. If not, pick a network and create a metadata proxy port and then
schedule it to an agent. At this point, we need to update the static route
for metadata API.
Events that trigger unscheduling an existing metadata proxy:
* When a network is deleted, delete the metadata proxy port if it exists and
unschedule it from a ``neutron-ovn-metadata-agent``.
To schedule a new metadata proxy:
* Determine the list of available OVN Chassis that can host metadata proxies
by reading the ``Chassis`` table of the OVN Southbound database. Look for
chassis that have an external-id of ``neutron-metadata-proxy-host=true``.
* Of the available OVN chassis, choose the one "least loaded", or currently
hosting the fewest number of metadata proxies.
* Set ``neutron-metadata-proxy-chassis=CHASSIS_HOSTNAME`` as an external-id on
the Logical_Switch_Port in the OVN Northbound database that corresponds to
the neutron port used for this metadata proxy. ``CHASSIS_HOSTNAME`` maps to
the hostname row of a Chassis record in the OVN Southbound database.
This approach has been ruled out for its complexity although we have analyzed
the details deeply because, eventually, and depending on the implementation of
L3 HA, we will want to evolve to it.
Other References
----------------
* Haproxy config --
https://review.openstack.org/#/c/431691/34/neutron/agent/metadata/driver.py
* https://engineeringblog.yelp.com/2015/04/true-zero-downtime-haproxy-reloads.html
networking-ovn-4.0.0/doc/source/contributor/design/index.rst 0000666 0001751 0001751 00000000231 13245511145 024256 0 ustar zuul zuul 0000000 0000000 ============
Design Notes
============
.. toctree::
:maxdepth: 1
data_model
native_dhcp
ovn_worker
metadata_api
database_consistency
networking-ovn-4.0.0/doc/source/contributor/design/ovn_worker.rst 0000666 0001751 0001751 00000006670 13245511145 025357 0 ustar zuul zuul 0000000 0000000 OVN Neutron Worker and Port status handling
===========================================
When a logical switch port's VIF is attached to or removed from the OVN
integration bridge, ovn-northd updates the Logical_Switch_Port.up column to 'True'
or 'False' accordingly.
In order for the OVN Neutron ML2 driver to update the corresponding neutron
port's status to 'ACTIVE' or 'DOWN' in the db, it needs to monitor the
OVN Northbound db. A neutron worker is created for this purpose.
The implementation of the ovn worker can be found here -
'networking_ovn.ovsdb.ovsdb_monitor.OvnWorker'.
Neutron service will create 'n' api workers and 'm' rpc workers and 1 ovn
worker (all these workers are separate processes).
Api workers and rpc workers will create ovsdb idl client object
('ovs.db.idl.Idl') to connect to the OVN_Northbound db.
See 'networking_ovn.ovsdb.impl_idl_ovn.OvsdbNbOvnIdl' and
'ovsdbapp.backend.ovs_idl.connection.Connection' classes for more details.
Ovn worker will create 'networking_ovn.ovsdb.ovsdb_monitor.OvnIdl' class
object (which inherits from 'ovs.db.idl.Idl') to connect to the
OVN_Northbound db. On receiving the OVN_Northbound db updates from the
ovsdb-server, the 'notify' function of 'OvnIdl' is called by the parent class
object.
OvnIdl.notify() function passes the received events to the
ovsdb_monitor.OvnDbNotifyHandler class.
ovsdb_monitor.OvnDbNotifyHandler checks for any changes in
the 'Logical_Switch_Port.up' and updates the neutron port's status accordingly.
If 'notify_nova_on_port_status_changes' configuration is set, then neutron
would notify nova on port status changes.
ovsdb locks
-----------
If there are multiple neutron servers running, then each neutron server will
have one ovn worker which listens for the notify events. When the
'Logical_Switch_Port.up' is updated by ovn-northd, we do not want all the
neutron servers to handle the event and update the neutron port status.
In order for only one neutron server to handle the events, ovsdb locks are
used.
At start, each neutron server's ovn worker will try to acquire a lock with id -
'neutron_ovn_event_lock'. The ovn worker which has acquired the lock will
handle the notify events.
In case the neutron server with the lock dies, ovsdb-server will assign the
lock to another neutron server in the queue.
More details about the ovsdb locks can be found here [1] and [2]
[1] - https://tools.ietf.org/html/draft-pfaff-ovsdb-proto-04#section-4.1.8
[2] - https://github.com/openvswitch/ovs/blob/branch-2.4/python/ovs/db/idl.py#L67
One thing to note is that the ovn worker (with OvnIdl) does not carry out any
transactions to the OVN Northbound db.
Since the api and rpc workers are not configured with any locks,
using the ovsdb lock on the OVN_Northbound and OVN_Southbound DBs by the ovn
workers will not have any side effects to the transactions done by these api
and rpc workers.
Handling port status changes when neutron server(s) are down
------------------------------------------------------------
When the neutron server starts, the ovn worker receives a dump of all
logical switch ports as events. 'ovsdb_monitor.OvnDbNotifyHandler' then
syncs up any inconsistencies in the port status.
OVN Southbound DB Access
------------------------
The OVN Neutron ML2 driver has a need to acquire chassis information (hostname
and physnets combinations). This is required initially to support routed
networks. Thus, the plugin will initiate and maintain a connection to the OVN
SB DB during startup.
networking-ovn-4.0.0/doc/source/contributor/testing.rst 0000666 0001751 0001751 00000073373 13245511145 023374 0 ustar zuul zuul 0000000 0000000 Testing with DevStack
=====================
This document describes how to test OpenStack with OVN using DevStack. We will
start by describing how to test on a single host.
Single Node Test Environment
----------------------------
1. Create a test system.
It's best to use a throwaway dev system for running DevStack, ideally either
CentOS 7 or the latest Ubuntu LTS (16.04, Xenial).
2. Create the ``stack`` user.
::
$ git clone https://git.openstack.org/openstack-dev/devstack.git
$ sudo ./devstack/tools/create-stack-user.sh
3. Switch to the ``stack`` user and clone DevStack and networking-ovn.
::
$ sudo su - stack
$ git clone https://git.openstack.org/openstack-dev/devstack.git
$ git clone https://git.openstack.org/openstack/networking-ovn.git
4. Configure DevStack to use networking-ovn.
networking-ovn comes with a sample DevStack configuration file you can start
with. For example, you may want to set some values for the various PASSWORD
variables in that file so DevStack doesn't have to prompt you for them. Feel
free to edit it if you'd like, but it should work as-is.
::
$ cd devstack
$ cp ../networking-ovn/devstack/local.conf.sample local.conf
5. Run DevStack.
This is going to take a while. It installs a bunch of packages, clones a bunch
of git repos, and installs everything from these git repos.
::
$ ./stack.sh
Once DevStack completes successfully, you should see output that looks
something like this::
This is your host IP address: 172.16.189.6
This is your host IPv6 address: ::1
Horizon is now available at http://172.16.189.6/dashboard
Keystone is serving at http://172.16.189.6/identity/
The default users are: admin and demo
The password: password
2017-03-09 15:10:54.117 | stack.sh completed in 2110 seconds.
Environment Variables
---------------------
Once DevStack finishes successfully, we're ready to start interacting with
OpenStack APIs. OpenStack provides a set of command line tools for interacting
with these APIs. DevStack provides a file you can source to set up the right
environment variables to make the OpenStack command line tools work.
::
$ . openrc
If you're curious what environment variables are set, they generally start with
an OS prefix::
$ env | grep OS
OS_REGION_NAME=RegionOne
OS_IDENTITY_API_VERSION=2.0
OS_PASSWORD=password
OS_AUTH_URL=http://192.168.122.8:5000/v2.0
OS_USERNAME=demo
OS_TENANT_NAME=demo
OS_VOLUME_API_VERSION=2
OS_CACERT=/opt/stack/data/CA/int-ca/ca-chain.pem
OS_NO_CACHE=1
Default Network Configuration
-----------------------------
By default, DevStack creates networks called ``private`` and ``public``.
Run the following command to see the existing networks::
$ openstack network list
+--------------------------------------+---------+----------------------------------------------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+---------+----------------------------------------------------------------------------+
| 40080dad-0064-480a-b1b0-592ae51c1471 | private | 5ff81545-7939-4ae0-8365-1658d45fa85c, da34f952-3bfc-45bb-b062-d2d973c1a751 |
| 7ec986dd-aae4-40b5-86cf-8668feeeab67 | public | 60d0c146-a29b-4cd3-bd90-3745603b1a4b, f010c309-09be-4af2-80d6-e6af9c78bae7 |
+--------------------------------------+---------+----------------------------------------------------------------------------+
A Neutron network is implemented as an OVN logical switch. networking-ovn
creates logical switches with a name in the format ``neutron-<network UUID>``.
We can use ``ovn-nbctl`` to list the configured logical switches and see that
their names correlate with the output from ``openstack network list``::
$ ovn-nbctl ls-list
71206f5c-b0e6-49ce-b572-eb2e964b2c4e (neutron-40080dad-0064-480a-b1b0-592ae51c1471)
8d8270e7-fd51-416f-ae85-16565200b8a4 (neutron-7ec986dd-aae4-40b5-86cf-8668feeeab67)
$ ovn-nbctl get Logical_Switch neutron-40080dad-0064-480a-b1b0-592ae51c1471 external_ids
{"neutron:network_name"=private}
Booting VMs
-----------
In this section we'll go through the steps to create two VMs that have a
virtual NIC attached to the ``private`` Neutron network.
DevStack uses libvirt as the Nova backend by default. If KVM is available, it
will be used. Otherwise, it will just run qemu emulated guests. This is
perfectly fine for our testing, as we only need these VMs to be able to send
and receive a small amount of traffic so performance is not very important.
1. Get the Network UUID.
Start by getting the UUID for the ``private`` network from the output of
``openstack network list`` from earlier and save it off::
$ PRIVATE_NET_ID=40080dad-0064-480a-b1b0-592ae51c1471
2. Create an SSH keypair.
Next create an SSH keypair in Nova. Later, when we boot a VM, we'll ask that
the public key be put in the VM so we can SSH into it.
::
$ openstack keypair create demo > id_rsa_demo
$ chmod 600 id_rsa_demo
3. Choose a flavor.
We need minimal resources for these test VMs, so the ``m1.nano`` flavor is
sufficient.
::
$ openstack flavor list
+----+-----------+-------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+-------+------+-----------+-------+-----------+
| 1 | m1.tiny | 512 | 1 | 0 | 1 | True |
| 2 | m1.small | 2048 | 20 | 0 | 1 | True |
| 3 | m1.medium | 4096 | 40 | 0 | 2 | True |
| 4 | m1.large | 8192 | 80 | 0 | 4 | True |
| 42 | m1.nano | 64 | 0 | 0 | 1 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | 8 | True |
| 84 | m1.micro | 128 | 0 | 0 | 1 | True |
| c1 | cirros256 | 256 | 0 | 0 | 1 | True |
| d1 | ds512M | 512 | 5 | 0 | 1 | True |
| d2 | ds1G | 1024 | 10 | 0 | 1 | True |
| d3 | ds2G | 2048 | 10 | 0 | 2 | True |
| d4 | ds4G | 4096 | 20 | 0 | 4 | True |
+----+-----------+-------+------+-----------+-------+-----------+
$ FLAVOR_ID=42
4. Choose an image.
DevStack imports the CirrOS image by default, which is perfect for our testing.
It's a very small test image.
::
$ openstack image list
+--------------------------------------+--------------------------+--------+
| ID | Name | Status |
+--------------------------------------+--------------------------+--------+
| 849a8db2-3754-4cf6-9271-491fa4ff7195 | cirros-0.3.5-x86_64-disk | active |
+--------------------------------------+--------------------------+--------+
$ IMAGE_ID=849a8db2-3754-4cf6-9271-491fa4ff7195
5. Set up security group rules so that we can access the VMs we will boot up next.
By default, DevStack does not allow users to access VMs; to enable access, we
will need to add rules. We will allow both ICMP and SSH.
::
$ openstack security group rule create --ingress --ethertype IPv4 --dst-port 22 --protocol tcp default
$ openstack security group rule create --ingress --ethertype IPv4 --protocol ICMP default
$ openstack security group rule list
+--------------------------------------+-------------+-----------+------------+--------------------------------------+--------------------------------------+
| ID | IP Protocol | IP Range | Port Range | Remote Security Group | Security Group |
+--------------------------------------+-------------+-----------+------------+--------------------------------------+--------------------------------------+
...
| ade97198-db44-429e-9b30-24693d86d9b1 | tcp | 0.0.0.0/0 | 22:22 | None | a47b14da-5607-404a-8de4-3a0f1ad3649c |
| d0861a98-f90e-4d1a-abfb-827b416bc2f6 | icmp | 0.0.0.0/0 | | None | a47b14da-5607-404a-8de4-3a0f1ad3649c |
...
+--------------------------------------+-------------+-----------+------------+--------------------------------------+--------------------------------------+
$ neutron security-group-rule-create --direction ingress --ethertype IPv4 --port-range-min 22 --port-range-max 22 --protocol tcp default
$ neutron security-group-rule-create --direction ingress --ethertype IPv4 --protocol ICMP default
$ neutron security-group-rule-list
+--------------------------------------+----------------+-----------+-----------+---------------+-----------------+
| id | security_group | direction | ethertype | protocol/port | remote |
+--------------------------------------+----------------+-----------+-----------+---------------+-----------------+
| 8b2edbe6-790e-40ef-af54-c7b64ced8240 | default | ingress | IPv4 | 22/tcp | any |
| 5bee0179-807b-41d7-ab16-6de6ac051335 | default | ingress | IPv4 | icmp | any |
...
+--------------------------------------+----------------+-----------+-----------+---------------+-----------------+
6. Boot some VMs.
Now we will boot two VMs. We'll name them ``test1`` and ``test2``.
::
$ openstack server create --nic net-id=$PRIVATE_NET_ID --flavor $FLAVOR_ID --image $IMAGE_ID --key-name demo test1
+-----------------------------+-----------------------------------------------------------------+
| Field | Value |
+-----------------------------+-----------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | BzAWWA6byGP6 |
| config_drive | |
| created | 2017-03-09T16:56:08Z |
| flavor | m1.nano (42) |
| hostId | |
| id | d8b8084e-58ff-44f4-b029-a57e7ef6ba61 |
| image | cirros-0.3.5-x86_64-disk (849a8db2-3754-4cf6-9271-491fa4ff7195) |
| key_name | demo |
| name | test1 |
| progress | 0 |
| project_id | b6522570f7344c06b1f24303abf3c479 |
| properties | |
| security_groups | name='default' |
| status | BUILD |
| updated | 2017-03-09T16:56:08Z |
| user_id | c68f77f1d85e43eb9e5176380a68ac1f |
| volumes_attached | |
+-----------------------------+-----------------------------------------------------------------+
$ openstack server create --nic net-id=$PRIVATE_NET_ID --flavor $FLAVOR_ID --image $IMAGE_ID --key-name demo test2
+-----------------------------+-----------------------------------------------------------------+
| Field | Value |
+-----------------------------+-----------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | YB8dmt5v88JV |
| config_drive | |
| created | 2017-03-09T16:56:50Z |
| flavor | m1.nano (42) |
| hostId | |
| id | 170d4f37-9299-4a08-b48b-2b90fce8e09b |
| image | cirros-0.3.5-x86_64-disk (849a8db2-3754-4cf6-9271-491fa4ff7195) |
| key_name | demo |
| name | test2 |
| progress | 0 |
| project_id | b6522570f7344c06b1f24303abf3c479 |
| properties | |
| security_groups | name='default' |
| status | BUILD |
| updated | 2017-03-09T16:56:51Z |
| user_id | c68f77f1d85e43eb9e5176380a68ac1f |
| volumes_attached | |
+-----------------------------+-----------------------------------------------------------------+
Once both VMs have been started, they will have a status of ``ACTIVE``::
$ openstack server list
+--------------------------------------+-------+--------+---------------------------------------------------------+--------------------------+
| ID | Name | Status | Networks | Image Name |
+--------------------------------------+-------+--------+---------------------------------------------------------+--------------------------+
| 170d4f37-9299-4a08-b48b-2b90fce8e09b | test2 | ACTIVE | private=fd5d:9d1b:457c:0:f816:3eff:fe24:49df, 10.0.0.3 | cirros-0.3.5-x86_64-disk |
| d8b8084e-58ff-44f4-b029-a57e7ef6ba61 | test1 | ACTIVE | private=fd5d:9d1b:457c:0:f816:3eff:fe3f:953d, 10.0.0.10 | cirros-0.3.5-x86_64-disk |
+--------------------------------------+-------+--------+---------------------------------------------------------+--------------------------+
Our two VMs have addresses of ``10.0.0.3`` and ``10.0.0.10``. If we list
Neutron ports, there are two new ports with these addresses associated
with them::
$ openstack port list
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------------------------+--------+
| ID | Name | MAC Address | Fixed IP Addresses | Status |
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------------------------+--------+
...
| 97c970b0-485d-47ec-868d-783c2f7acde3 | | fa:16:3e:3f:95:3d | ip_address='10.0.0.10', subnet_id='da34f952-3bfc-45bb-b062-d2d973c1a751' | ACTIVE |
| | | | ip_address='fd5d:9d1b:457c:0:f816:3eff:fe3f:953d', subnet_id='5ff81545-7939-4ae0-8365-1658d45fa85c' | |
| e003044d-334a-4de3-96d9-35b2d2280454 | | fa:16:3e:24:49:df | ip_address='10.0.0.3', subnet_id='da34f952-3bfc-45bb-b062-d2d973c1a751' | ACTIVE |
| | | | ip_address='fd5d:9d1b:457c:0:f816:3eff:fe24:49df', subnet_id='5ff81545-7939-4ae0-8365-1658d45fa85c' | |
...
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------------------------+--------+
$ TEST1_PORT_ID=97c970b0-485d-47ec-868d-783c2f7acde3
$ TEST2_PORT_ID=e003044d-334a-4de3-96d9-35b2d2280454
Now we can look at OVN using ``ovn-nbctl`` to see the logical switch ports
that were created for these two Neutron ports. The first part of the output
is the OVN logical switch port UUID. The second part in parentheses is the
logical switch port name. Neutron sets the logical switch port name equal to
the Neutron port ID.
::
$ ovn-nbctl lsp-list neutron-$PRIVATE_NET_ID
...
fde1744b-e03b-46b7-b181-abddcbe60bf2 (97c970b0-485d-47ec-868d-783c2f7acde3)
7ce284a8-a48a-42f5-bf84-b2bca62cd0fe (e003044d-334a-4de3-96d9-35b2d2280454)
...
These two ports correspond to the two VMs we created.
VM Connectivity
---------------
We can connect to our VMs by associating a floating IP address from the public
network.
::
$ openstack floating ip create --port $TEST1_PORT_ID public
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| created_at | 2017-03-09T18:58:12Z |
| description | |
| fixed_ip_address | 10.0.0.10 |
| floating_ip_address | 172.24.4.8 |
| floating_network_id | 7ec986dd-aae4-40b5-86cf-8668feeeab67 |
| id | 24ff0799-5a72-4a5b-abc0-58b301c9aee5 |
| name | None |
| port_id | 97c970b0-485d-47ec-868d-783c2f7acde3 |
| project_id | b6522570f7344c06b1f24303abf3c479 |
| revision_number | 1 |
| router_id | ee51adeb-0dd8-4da0-ab6f-7ce60e00e7b0 |
| status | DOWN |
| updated_at | 2017-03-09T18:58:12Z |
+---------------------+--------------------------------------+
Devstack does not wire up the public network by default so we must do
that before connecting to this floating IP address.
::
$ sudo ip link set br-ex up
$ sudo ip route add 172.24.4.0/24 dev br-ex
$ sudo ip addr add 172.24.4.1/24 dev br-ex
Now you should be able to connect to the VM via its floating IP address.
First, ping the address.
::
$ ping -c 1 172.24.4.8
PING 172.24.4.8 (172.24.4.8) 56(84) bytes of data.
64 bytes from 172.24.4.8: icmp_seq=1 ttl=63 time=0.823 ms
--- 172.24.4.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.823/0.823/0.823/0.000 ms
Now SSH to the VM::
$ ssh -i id_rsa_demo cirros@172.24.4.8 hostname
test1
Adding Another Compute Node
---------------------------
After completing the earlier instructions for setting up devstack, you can use
a second VM to emulate an additional compute node. This is important for OVN
testing as it exercises the tunnels created by OVN between the hypervisors.
Just as before, create a throwaway VM but make sure that this VM has a
different host name. Having the same host name for both VMs will confuse Nova,
and you will not see two hypervisors when you query the nova hypervisor list later.
Once the VM is setup, create the ``stack`` user::
$ git clone https://git.openstack.org/openstack-dev/devstack.git
$ sudo ./devstack/tools/create-stack-user.sh
Switch to the ``stack`` user and clone DevStack and networking-ovn::
$ sudo su - stack
$ git clone https://git.openstack.org/openstack-dev/devstack.git
$ git clone https://git.openstack.org/openstack/networking-ovn.git
networking-ovn comes with another sample configuration file that can be used
for this::
$ cd devstack
$ cp ../networking-ovn/devstack/computenode-local.conf.sample local.conf
You must set SERVICE_HOST in local.conf. The value should be the IP address of
the main DevStack host. You must also set HOST_IP to the IP address of this
new host. See the text in the sample configuration file for more
information. Once that is complete, run DevStack::
$ cd devstack
$ ./stack.sh
This should complete in less time than before, as it's only running a single
OpenStack service (nova-compute) along with OVN (ovn-controller, ovs-vswitchd,
ovsdb-server). The final output will look something like this::
This is your host IP address: 172.16.189.30
This is your host IPv6 address: ::1
2017-03-09 18:39:27.058 | stack.sh completed in 1149 seconds.
Now go back to your main DevStack host. You can use admin credentials to
verify that the additional hypervisor has been added to the deployment::
$ cd devstack
$ . openrc admin
$ openstack hypervisor list
+----+------------------------+-----------------+---------------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
+----+------------------------+-----------------+---------------+-------+
| 1 | centos7-ovn-devstack | QEMU | 172.16.189.6 | up |
| 2 | centos7-ovn-devstack-2 | QEMU | 172.16.189.30 | up |
+----+------------------------+-----------------+---------------+-------+
You can also look at OVN and OVS to see that the second host has shown up. For
example, there will be a second entry in the Chassis table of the
OVN_Southbound database. You can use the ``ovn-sbctl`` utility to list
chassis, their configuration, and the ports bound to each of them::
$ ovn-sbctl show
Chassis "ddc8991a-d838-4758-8d15-71032da9d062"
hostname: "centos7-ovn-devstack"
Encap vxlan
ip: "172.16.189.6"
options: {csum="true"}
Encap geneve
ip: "172.16.189.6"
options: {csum="true"}
Port_Binding "97c970b0-485d-47ec-868d-783c2f7acde3"
Port_Binding "e003044d-334a-4de3-96d9-35b2d2280454"
Port_Binding "cr-lrp-08d1f28d-cc39-4397-b12b-7124080899a1"
Chassis "b194d07e-0733-4405-b795-63b172b722fd"
hostname: "centos7-ovn-devstack-2.os1.phx2.redhat.com"
Encap geneve
ip: "172.16.189.30"
options: {csum="true"}
Encap vxlan
ip: "172.16.189.30"
options: {csum="true"}
You can also see a tunnel created to the other compute node::
$ ovs-vsctl show
...
Bridge br-int
fail_mode: secure
...
Port "ovn-b194d0-0"
Interface "ovn-b194d0-0"
type: geneve
options: {csum="true", key=flow, remote_ip="172.16.189.30"}
...
...
Provider Networks
-----------------
Neutron has a "provider networks" API extension that lets you specify
some additional attributes on a network. These attributes let you
map a Neutron network to a physical network in your environment.
The OVN ML2 driver is adding support for this API extension. It currently
supports "flat" and "vlan" networks.
Here is how you can test it:
First you must create an OVS bridge that provides connectivity to the
provider network on every host running ovn-controller. For trivial
testing this could just be a dummy bridge. In a real environment, you
would want to add a local network interface to the bridge, as well.
::
$ ovs-vsctl add-br br-provider
ovn-controller on each host must be configured with a mapping between
a network name and the bridge that provides connectivity to that network.
In this case we'll create a mapping from the network name "providernet"
to the bridge 'br-provider".
::
$ ovs-vsctl set open . \
external-ids:ovn-bridge-mappings=providernet:br-provider
Now create a Neutron provider network.
::
$ neutron net-create provider --shared \
--provider:physical_network providernet \
--provider:network_type flat
Alternatively, you can define connectivity to a VLAN instead of a flat network:
::
$ neutron net-create provider-101 --shared \
--provider:physical_network providernet \
--provider:network_type vlan \
--provider:segmentation_id 101
Observe that the OVN ML2 driver created a special logical switch port of type
localnet on the logical switch to model the connection to the physical network.
::
$ ovn-nbctl show
...
switch 5bbccbbd-f5ca-411b-bad9-01095d6f1316 (neutron-729dbbee-db84-4a3d-afc3-82c0b3701074)
port provnet-729dbbee-db84-4a3d-afc3-82c0b3701074
addresses: ["unknown"]
...
$ ovn-nbctl lsp-get-type provnet-729dbbee-db84-4a3d-afc3-82c0b3701074
localnet
$ ovn-nbctl lsp-get-options provnet-729dbbee-db84-4a3d-afc3-82c0b3701074
network_name=providernet
If VLAN is used, there will be a VLAN tag shown on the localnet port as well.
Finally, create a Neutron port on the provider network.
::
$ neutron port-create provider
or if you followed the VLAN example, it would be:
::
$ neutron port-create provider-101
Run Unit Tests
--------------
Run the unit tests in the local environment with ``tox``.
::
$ tox -e py27
$ tox -e py27 networking_ovn.tests.unit.test_ovn_db_sync
$ tox -e py27 networking_ovn.tests.unit.test_ovn_db_sync.TestOvnSbSyncML2
$ tox -e py27 networking_ovn.tests.unit.test_ovn_db_sync.TestOvnSbSyncML2
.test_ovn_sb_sync
Run Functional Tests
--------------------
You can run the functional tests with ``tox`` in your DevStack environment:
::
$ cd networking_ovn/tests/functional
$ tox -e dsvm-functional
$ tox -e dsvm-functional networking_ovn.tests.functional.test_mech_driver\
.TestPortBinding.test_port_binding_create_port
If you want to run functional tests in your local clean environment, you may
need a new working directory.
::
$ export BASE=/opt/stack
$ mkdir -p /opt/stack/new
$ cd /opt/stack/new
Next, get networking_ovn, neutron and devstack.
::
$ git clone https://git.openstack.org/openstack/networking-ovn.git
$ git clone https://git.openstack.org/openstack/neutron.git
$ git clone https://git.openstack.org/openstack-dev/devstack.git
Then execute the script to prepare the environment.
::
$ cd networking-ovn/
$ ./networking_ovn/tests/contrib/gate_hook.sh
Finally, run the functional tests with ``tox``
::
$ cd networking_ovn/tests/functional
$ tox -e dsvm-functional
$ tox -e dsvm-functional networking_ovn.tests.functional.test_mech_driver\
.TestPortBinding.test_port_binding_create_port
Skydive
-------
`Skydive `_ is an open source
real-time network topology and protocols analyzer. It aims to provide a
comprehensive way of understanding what is happening in the network
infrastructure. Skydive works by utilizing agents to collect host-local
information, and sending this information to a central agent for
further analysis. It utilizes elasticsearch to store the data.
To enable Skydive support with OVN and devstack, enable it on the control
and compute nodes.
On the control node, enable it as follows:
::
enable_plugin skydive https://github.com/skydive-project/skydive.git
enable_service skydive-analyzer
On the compute nodes, enable it as follows:
::
enable_plugin skydive https://github.com/skydive-project/skydive.git
enable_service skydive-agent
Troubleshooting
---------------
If you run into any problems, take a look at our :doc:`/admin/troubleshooting`
page.
Additional Resources
--------------------
See the documentation and other references linked
from the :doc:`/admin/ovn` page.
networking-ovn-4.0.0/doc/source/contributor/contributing.rst 0000666 0001751 0001751 00000000116 13245511145 024407 0 ustar zuul zuul 0000000 0000000 ============
Contributing
============
.. include:: ../../../CONTRIBUTING.rst
networking-ovn-4.0.0/doc/source/contributor/index.rst 0000666 0001751 0001751 00000000230 13245511164 023005 0 ustar zuul zuul 0000000 0000000 =========================
Contributor Documentation
=========================
.. toctree::
:maxdepth: 2
contributing
testing
design/index
networking-ovn-4.0.0/doc/source/index.rst 0000666 0001751 0001751 00000000752 13245511145 020443 0 ustar zuul zuul 0000000 0000000 .. networking-ovn documentation master file, created by
sphinx-quickstart on Tue Jul 9 22:26:36 2013.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
.. the main title comes from README.rst
.. include:: ../../README.rst
Contents
--------
.. toctree::
:maxdepth: 2
admin/index
install/index
configuration/index
contributor/index
.. rubric:: Indices and tables
* :ref:`genindex`
* :ref:`search`
networking-ovn-4.0.0/doc/source/install/ 0000775 0001751 0001751 00000000000 13245511554 020246 5 ustar zuul zuul 0000000 0000000 networking-ovn-4.0.0/doc/source/install/index.rst 0000666 0001751 0001751 00000024132 13245511145 022107 0 ustar zuul zuul 0000000 0000000 .. _installation:
Install & Configuration
=======================
The ``networking-ovn`` repository includes integration with DevStack that
enables creation of a simple Open Virtual Network (OVN) development and test
environment. This document discusses what is required for manual installation
or integration into a production OpenStack deployment tool of conventional
architectures that include the following types of nodes:
* Controller - Runs OpenStack control plane services such as REST APIs
and databases.
* Network - Runs the layer-2, layer-3 (routing), DHCP, and metadata agents
for the Networking service. Some agents are optional. Usually provides
connectivity between provider (public) and project (private) networks
via NAT and floating IP addresses.
.. note::
Some tools deploy these services on controller nodes.
* Compute - Runs the hypervisor and layer-2 agent for the Networking
service.
Packaging
---------
Open vSwitch (OVS) includes OVN beginning with version 2.5 and considers
it experimental. The Networking service integration for OVN uses an
independent package, typically ``networking-ovn``.
Building OVS from source automatically installs OVN. For deployment tools
using distribution packages, the ``openvswitch-ovn`` package for RHEL/CentOS
and compatible distributions automatically installs ``openvswitch`` as a
dependency. Ubuntu/Debian includes ``ovn-central``, ``ovn-host``,
``ovn-docker``, and ``ovn-common`` packages that pull in the appropriate Open
vSwitch dependencies as needed.
A ``python-networking-ovn`` RPM may be obtained for Fedora or CentOS from
the RDO project. A package based on the ``master`` branch of
``networking-ovn`` can be found at https://trunk.rdoproject.org/.
Fedora and CentOS RPM builds of OVS and OVN from the ``master`` branch of
``ovs`` can be found in this COPR repository:
https://copr.fedorainfracloud.org/coprs/leifmadsen/ovs-master/.
Controller nodes
----------------
Each controller node runs the OVS service (including dependent services such
as ``ovsdb-server``) and the ``ovn-northd`` service. However, only a single
instance of the ``ovsdb-server`` and ``ovn-northd`` services can operate in
a deployment. Deployment tools can implement active/passive
high-availability using a management tool that monitors service health
and automatically starts these services on another node after failure of the
primary node. See the :ref:`faq` for more information.
#. Install the ``openvswitch-ovn`` and ``networking-ovn`` packages.
#. Start the OVS service. The central OVS service starts the ``ovsdb-server``
service that manages OVN databases.
Using the *systemd* unit:
.. code-block:: console
# systemctl start openvswitch
Using the ``ovs-ctl`` script:
.. code-block:: console
# /usr/share/openvswitch/scripts/ovs-ctl start --system-id="random"
#. Configure the ``ovsdb-server`` component. By default, the ``ovsdb-server``
service only permits local access to databases via Unix socket. However,
OVN services on compute nodes require access to these databases.
* Permit remote database access.
.. code-block:: console
# ovs-appctl -t ovsdb-server ovsdb-server/add-remote ptcp:6640:IP_ADDRESS
Replace ``IP_ADDRESS`` with the IP address of the management network
interface on the controller node.
.. note::
Permit remote access to TCP port 6640 on any host firewall.
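For example, with ``firewalld`` this could look like the following (a sketch;
adapt the zone and persistence options to your environment):

.. code-block:: console

   # firewall-cmd --permanent --add-port=6640/tcp
   # firewall-cmd --reload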
#. Start the ``ovn-northd`` service.
Using the *systemd* unit:
.. code-block:: console
# systemctl start ovn-northd
Using the ``ovn-ctl`` script:
.. code-block:: console
# /usr/share/openvswitch/scripts/ovn-ctl start_northd
Options for *start_northd*:
.. code-block:: console
# /usr/share/openvswitch/scripts/ovn-ctl start_northd --help
# ...
# DB_NB_SOCK="/usr/local/etc/openvswitch/nb_db.sock"
# DB_NB_PID="/usr/local/etc/openvswitch/ovnnb_db.pid"
# DB_SB_SOCK="usr/local/etc/openvswitch/sb_db.sock"
# DB_SB_PID="/usr/local/etc/openvswitch/ovnsb_db.pid"
# ...
#. Configure the Networking server component. The Networking service
implements OVN as an ML2 driver. Edit the ``/etc/neutron/neutron.conf``
file:
* Enable the ML2 core plug-in.
.. code-block:: ini
[DEFAULT]
...
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
* Enable the OVN layer-3 service.
.. code-block:: ini
[DEFAULT]
...
service_plugins = networking_ovn.l3.l3_ovn.OVNL3RouterPlugin
#. Configure the ML2 plug-in. Edit the
``/etc/neutron/plugins/ml2/ml2_conf.ini`` file:
* Configure the OVN mechanism driver, network type drivers, self-service
(tenant) network types, and enable the port security extension.
.. code-block:: ini
[ml2]
...
mechanism_drivers = ovn
type_drivers = local,flat,vlan,geneve
tenant_network_types = geneve
extension_drivers = port_security
overlay_ip_version = 4
.. note::
To enable VLAN self-service networks, add ``vlan`` to the
``tenant_network_types`` option. The first network type
in the list becomes the default self-service network type.
To use IPv6 for all overlay (tunnel) network endpoints,
set the ``overlay_ip_version`` option to ``6``.
* Configure the Geneve ID range and maximum header size. The IP version
overhead (20 bytes for IPv4 (default) or 40 bytes for IPv6) is added
to the maximum header size based on the ML2 ``overlay_ip_version``
option.
.. code-block:: ini
[ml2_type_geneve]
...
vni_ranges = 1:65536
max_header_size = 38
.. note::
The Networking service uses the ``vni_ranges`` option to allocate
network segments. However, OVN ignores the actual values. Thus, the ID
range only determines the quantity of Geneve networks in the
environment. For example, a range of ``5001:6000`` defines a maximum
of 1000 Geneve networks.
* Optionally, enable support for VLAN provider and self-service
networks on one or more physical networks. If you specify only
the physical network, only administrative (privileged) users can
manage VLAN networks. Additionally specifying a VLAN ID range for
a physical network enables regular (non-privileged) users to
manage VLAN networks. The Networking service allocates the VLAN ID
for each self-service network using the VLAN ID range for the
physical network.
.. code-block:: ini
[ml2_type_vlan]
...
network_vlan_ranges = PHYSICAL_NETWORK:MIN_VLAN_ID:MAX_VLAN_ID
Replace ``PHYSICAL_NETWORK`` with the physical network name and
optionally define the minimum and maximum VLAN IDs. Use a comma
to separate each physical network.
For example, to enable support for administrative VLAN networks
on the ``physnet1`` network and self-service VLAN networks on
the ``physnet2`` network using VLAN IDs 1001 to 2000:
.. code-block:: ini
network_vlan_ranges = physnet1,physnet2:1001:2000
* Enable security groups.
.. code-block:: ini
[securitygroup]
...
enable_security_group = true
.. note::
The ``firewall_driver`` option under ``[securitygroup]`` is ignored
since the OVN ML2 driver itself handles security groups.
* Configure OVS database access and L3 scheduler
.. code-block:: ini
[ovn]
...
ovn_nb_connection = tcp:IP_ADDRESS:6641
ovn_sb_connection = tcp:IP_ADDRESS:6642
ovn_l3_scheduler = OVN_L3_SCHEDULER
.. note::
Replace ``IP_ADDRESS`` with the IP address of the controller node that
runs the ``ovsdb-server`` service. Replace ``OVN_L3_SCHEDULER`` with
``leastloaded`` if you want the scheduler to select a compute node with
the least number of gateway ports or ``chance`` if you want the
scheduler to randomly select a compute node from the available list of
compute nodes.
#. Start the ``neutron-server`` service.
Network nodes
-------------
Deployments using OVN native layer-3 and DHCP services do not require
conventional network nodes because connectivity to external networks
(including VTEP gateways) and routing occurs on compute nodes.
Compute nodes
-------------
Each compute node runs the OVS and ``ovn-controller`` services. The
``ovn-controller`` service replaces the conventional OVS layer-2 agent.
#. Install the ``openvswitch-ovn`` and ``networking-ovn`` packages.
#. Start the OVS service.
Using the *systemd* unit:
.. code-block:: console
# systemctl start openvswitch
Using the ``ovs-ctl`` script:
.. code-block:: console
# /usr/share/openvswitch/scripts/ovs-ctl start --system-id="random"
#. Configure the OVS service.
* Use OVS databases on the controller node.
.. code-block:: console
# ovs-vsctl set open . external-ids:ovn-remote=tcp:IP_ADDRESS:6642
Replace ``IP_ADDRESS`` with the IP address of the controller node
that runs the ``ovsdb-server`` service.
* Enable one or more overlay network protocols. At a minimum, OVN requires
enabling the ``geneve`` protocol. Deployments using VTEP gateways should
also enable the ``vxlan`` protocol.
.. code-block:: console
# ovs-vsctl set open . external-ids:ovn-encap-type=geneve,vxlan
.. note::
Deployments without VTEP gateways can safely enable both protocols.
* Configure the overlay network local endpoint IP address.
.. code-block:: console
# ovs-vsctl set open . external-ids:ovn-encap-ip=IP_ADDRESS
Replace ``IP_ADDRESS`` with the IP address of the overlay network
interface on the compute node.
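To double-check the values configured so far (a sketch; the addresses shown
are only illustrative):

.. code-block:: console

   # ovs-vsctl get open . external-ids
   {ovn-encap-ip="192.0.2.10", ovn-encap-type="geneve,vxlan", ovn-remote="tcp:192.0.2.1:6642"}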
#. Start the ``ovn-controller`` service.
Using the *systemd* unit:
.. code-block:: console
# systemctl start ovn-controller
Using the ``ovn-ctl`` script:
.. code-block:: console
# /usr/share/openvswitch/scripts/ovn-ctl start_controller
Verify operation
----------------
#. Each compute node should contain an ``ovn-controller`` instance.
.. code-block:: console
# ovn-sbctl show