os-xenapi-0.3.1/exclusion_py3.txt

# The XenAPI plugins run in a Python 2 environment, so avoid attempting
# to run their unit tests in a Python 3 environment
os_xenapi.tests.plugins

os-xenapi-0.3.1/LICENSE

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.

2. Grant of Copyright License.
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.

3. Grant of Patent License.

Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

4. Redistribution.
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

(a) You must give any other recipients of the Work or Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

5.
Submission of Contributions.

Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

6. Trademarks.

This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty.

Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

8. Limitation of Liability.
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability.

While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.

os-xenapi-0.3.1/.coveragerc

[run]
branch = True
source = os_xenapi

[report]
ignore_errors = True

os-xenapi-0.3.1/setup.py

# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools

# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
    import multiprocessing  # noqa
except ImportError:
    pass

setuptools.setup(
    setup_requires=['pbr>=2.0.0'],
    pbr=True)

os-xenapi-0.3.1/.mailmap

# Format is:
#
#

os-xenapi-0.3.1/HACKING.rst

os-xenapi Style Commandments
===============================================

Read the OpenStack Style Commandments http://docs.openstack.org/developer/hacking/

os-xenapi-0.3.1/setup.cfg

[metadata]
name = os-xenapi
summary = XenAPI library for OpenStack projects
description-file =
    README.rst
author = Citrix
author-email = openstack@citrix.com
home-page = http://www.citrix.com
classifier =
    Environment :: OpenStack
    Intended Audience :: Information Technology
    Intended Audience :: System Administrators
    License :: OSI Approved :: Apache Software License
    Operating System :: POSIX :: Linux
    Programming Language :: Python
    Programming Language :: Python :: 2
    Programming Language :: Python :: 2.7
    Programming Language :: Python :: 3
    Programming Language :: Python :: 3.3
    Programming Language :: Python :: 3.5

[files]
packages =
    os_xenapi

[build_sphinx]
source-dir = doc/source
build-dir = doc/build
all_files = 1

[upload_sphinx]
upload-dir = doc/build/html

[compile_catalog]
directory = os_xenapi/locale
domain = os_xenapi

[update_catalog]
domain = os_xenapi
output_dir = os_xenapi/locale
input_file = os_xenapi/locale/os_xenapi.pot

[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = os_xenapi/locale/os_xenapi.pot

[build_releasenotes]
all_files = 1
build-dir = releasenotes/build
source-dir = releasenotes/source

[egg_info]
tag_build =
tag_date = 0

os-xenapi-0.3.1/test-requirements.txt

# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
hacking<0.12,>=0.11.0 # Apache-2.0

coverage!=4.4,>=4.0 # Apache-2.0
python-subunit>=0.0.18 # Apache-2.0/BSD
sphinx>=1.6.2 # BSD
oslosphinx>=4.7.0 # Apache-2.0
oslotest>=1.10.0 # Apache-2.0
os-testr>=1.0.0 # Apache-2.0
testrepository>=0.0.18 # Apache-2.0/BSD
testscenarios>=0.4 # Apache-2.0/BSD
testtools>=1.4.0 # MIT

# releasenotes
reno>=2.5.0 # Apache-2.0

os-xenapi-0.3.1/requirements.txt

# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr!=2.1.0,>=2.0.0 # Apache-2.0
Babel!=2.4.0,>=2.3.4 # BSD
eventlet!=0.18.3,!=0.20.1,<0.21.0,>=0.18.2 # MIT
oslo.concurrency>=3.20.0 # Apache-2.0
oslo.log>=3.30.0 # Apache-2.0
oslo.utils>=3.28.0 # Apache-2.0
oslo.i18n>=3.15.3 # Apache-2.0
six>=1.9.0 # MIT

os-xenapi-0.3.1/babel.cfg

[python: **.py]

os-xenapi-0.3.1/doc/source/conf.py

# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import sys

sys.path.insert(0, os.path.abspath('../..'))

# -- General configuration ----------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
    'sphinx.ext.autodoc',
    #'sphinx.ext.intersphinx',
    'oslosphinx'
]

# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable

# The suffix of source filenames.
source_suffix = '.rst'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'os-xenapi'
copyright = u'2016, Citrix Systems'

# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# -- Options for HTML output --------------------------------------------------

# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
# html_static_path = ['static']

# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
    ('index',
     '%s.tex' % project,
     u'%s Documentation' % project,
     u'Citrix Systems', 'manual'),
]

# Example configuration for intersphinx: refer to the Python standard library.
#intersphinx_mapping = {'http://docs.python.org/': None}

os-xenapi-0.3.1/doc/source/installation.rst

============
Installation
============

At the command line::

    $ pip install os-xenapi

Or, if you have virtualenvwrapper installed::

    $ mkvirtualenv os-xenapi
    $ pip install os-xenapi

os-xenapi-0.3.1/doc/source/readme.rst

.. include:: ../../README.rst

os-xenapi-0.3.1/doc/source/contributing.rst

============
Contributing
============

.. include:: ../../CONTRIBUTING.rst

os-xenapi-0.3.1/doc/source/index.rst

..
   os-xenapi documentation master file, created by
   sphinx-quickstart on Tue Jul 9 22:26:36 2013.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to os-xenapi's documentation!
========================================================

Contents:

.. toctree::
   :maxdepth: 2

   readme
   installation
   usage
   contributing

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

os-xenapi-0.3.1/doc/source/usage.rst

========
Usage
========

To use os-xenapi in a project::

    import os_xenapi

os-xenapi-0.3.1/devstack/dom0_functions

#!/bin/bash

function dom0_plugin_location {
    for PLUGIN_DIR in "/etc/xapi.d/plugins" "/usr/lib/xcp/plugins" "/usr/lib/xapi/plugins" "/usr/lib64/xapi/plugins"; do
        if [ -d $PLUGIN_DIR ]; then
            echo $PLUGIN_DIR
            return 0
        fi
    done
    return 1
}

function get_default_sr {
    xe pool-list params=default-SR minimal=true
}

function get_local_sr_path {
    pbd_device_config_path=`xe pbd-list sr-uuid=$(get_default_sr) params=device-config | grep " path: "`
    if [ -n "$pbd_device_config_path" ]; then
        pbd_uuid=`xe pbd-list sr-uuid=$(get_default_sr) minimal=true`
        pbd_path=`xe pbd-param-get uuid=$pbd_uuid param-name=device-config param-key=path || echo ""`
    else
        pbd_path="/var/run/sr-mount/$(get_default_sr)"
    fi

    if [ -d "$pbd_path" ]; then
        echo $pbd_path
        return 0
    else
        return 1
    fi
}

function create_directory_for_images {
    if [ -d "/images" ]; then
        echo "INFO: /images directory already exists, using that" >&2
    else
        local local_path
        local_path="$(get_local_sr_path)/os-images"
        mkdir -p $local_path
        ln -s $local_path /images
    fi
}

function create_directory_for_kernels {
    if [ -d "/boot/guest" ]; then
        echo "INFO: /boot/guest directory already exists, using that" >&2
    else
        local local_path
        local_path="$(get_local_sr_path)/os-guest-kernels"
        mkdir -p $local_path
        ln -s $local_path /boot/guest
    fi
}

function install_conntrack_tools {
    local xs_host
    local xs_ver_major
    local centos_ver
    local conntrack_conf
    xs_host=$(xe host-list --minimal)
    xs_ver_major=$(xe host-param-get uuid=$xs_host param-name=software-version param-key=product_version_text_short | cut -d'.' -f 1)
    if [ $xs_ver_major -gt 6 ]; then
        # Only support conntrack-tools in Dom0 with XS7.0 and above
        if [ ! -f /usr/sbin/conntrackd ]; then
            sed -i s/#baseurl=/baseurl=/g /etc/yum.repos.d/CentOS-Base.repo
            centos_ver=$(yum version nogroups |grep Installed | cut -d' ' -f 2 | cut -d'/' -f 1 | cut -d'-' -f 1)
            yum install -y --enablerepo=base --releasever=$centos_ver conntrack-tools
            # Backup conntrackd.conf after install conntrack-tools, use the one with statistic mode
            mv /etc/conntrackd/conntrackd.conf /etc/conntrackd/conntrackd.conf.back
            conntrack_conf=$(find /usr/share/doc -name conntrackd.conf |grep stats)
            cp $conntrack_conf /etc/conntrackd/conntrackd.conf
        fi
        service conntrackd restart
    fi
}

os-xenapi-0.3.1/devstack/plugin.sh

#!/bin/bash
#
# Copyright 2016 Citrix Systems
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

MODE=$1
PHASE=$2

OS_XENAPI_DIR=$DEST/os-xenapi
XS_DOM0_IPTABLES_CHAIN="XenServerDevstack"
DOM0_OVSDB_PORT=${DOM0_OVSDB_PORT:-"6640"}
DOM0_VXLAN_PORT=${DOM0_VXLAN_PORT:-"4789"}

function get_dom0_ssh {
    local dom0_ip
    dom0_ip=$(echo "$XENAPI_CONNECTION_URL" | cut -d "/" -f 3)

    local ssh_dom0
    ssh_dom0="sudo -u $DOMZERO_USER ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@$dom0_ip"
    echo $ssh_dom0
    return 0
}

# Install Nova and Neutron Dom0 plugins
function install_dom0_plugins {
    local ssh_dom0
    ssh_dom0=$(get_dom0_ssh)

    local dom0_func
    dom0_func=`cat $OS_XENAPI_DIR/devstack/dom0_functions`
    local dom0_plugin_dir
    dom0_plugin_dir=`$ssh_dom0 "$dom0_func; set -eux; dom0_plugin_location"`

    # We've moved the plugins from neutron/nova to os-xenapi, but in some stable
    # branches the plugins are still located in neutron (Ocata and earlier) or
    # nova (Newton and earlier). In order to support both stable and master
    # branches, check whether the potential plugin directories exist, and copy
    # the plugins over if they do.
    local plugin_dir
    local need_install_xenapi=False
    # for neutron plugins
    plugin_dir=$DEST/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/xenapi/etc/xapi.d/plugins/
    if [ -d $plugin_dir ]; then
        need_install_xenapi=True
        tar -czf - -C $plugin_dir ./ | $ssh_dom0 "tar -xzf - -C $dom0_plugin_dir"
    fi
    # for nova plugins
    plugin_dir=$DEST/nova/plugins/xenserver/xenapi/etc/xapi.d/plugins/
    if [ -d $plugin_dir ]; then
        need_install_xenapi=True
        tar -czf - -C $plugin_dir ./ | $ssh_dom0 "tar -xzf - -C $dom0_plugin_dir"
    fi

    if [ "$need_install_xenapi" = "True" ]; then
        # Either neutron or nova need XenAPI, install XenAPI.
        pip_install_gr xenapi
    fi

    # Get the path where os-xenapi is installed, or the path where the nova
    # source code resides
    os_xenapi_dir=$(sudo -H pip show os-xenapi |grep "Location:"|cut -d " " -f 2-)
    if [ -n "$os_xenapi_dir" ]; then
        plugin_dir=$os_xenapi_dir/os_xenapi/dom0/etc/xapi.d/plugins/
        if [ -d $plugin_dir ]; then
            tar -czf - -C $plugin_dir ./ | $ssh_dom0 "tar -xzf - -C $dom0_plugin_dir"
        fi
    fi

    # change plugins to be executable
    $ssh_dom0 "chmod a+x $dom0_plugin_dir/*"
}

# Config iptables in Dom0
function config_dom0_iptables {
    local ssh_dom0=$(get_dom0_ssh)

    # Remove restriction on linux bridge in Dom0 so security groups
    # can be applied to the interim bridge-based network.
    $ssh_dom0 "rm -f /etc/modprobe.d/blacklist-bridge*"

    # Save errexit setting
    _ERREXIT_XENSERVER=$(set +o | grep errexit)
    set +o errexit

    # Check Dom0 internal chain for Neutron, add if not exist
    $ssh_dom0 "iptables -t filter -L $XS_DOM0_IPTABLES_CHAIN"
    local chain_result=$?
    if [ "$chain_result" != "0" ]; then
        $ssh_dom0 "iptables -t filter --new $XS_DOM0_IPTABLES_CHAIN"
        $ssh_dom0 "iptables -t filter -I INPUT -j $XS_DOM0_IPTABLES_CHAIN"
    fi

    # Check iptables for remote ovsdb connection, add if not exist
    $ssh_dom0 "iptables -t filter -C $XS_DOM0_IPTABLES_CHAIN -p tcp -m tcp --dport $DOM0_OVSDB_PORT -j ACCEPT"
    local remote_conn_result=$?
    if [ "$remote_conn_result" != "0" ]; then
        $ssh_dom0 "iptables -t filter -I $XS_DOM0_IPTABLES_CHAIN -p tcp --dport $DOM0_OVSDB_PORT -j ACCEPT"
    fi

    # Check iptables for VxLAN, add if not exist
    $ssh_dom0 "iptables -t filter -C $XS_DOM0_IPTABLES_CHAIN -p udp -m multiport --dports $DOM0_VXLAN_PORT -j ACCEPT"
    local vxlan_result=$?
    if [ "$vxlan_result" != "0" ]; then
        $ssh_dom0 "iptables -t filter -I $XS_DOM0_IPTABLES_CHAIN -p udp -m multiport --dport $DOM0_VXLAN_PORT -j ACCEPT"
    fi

    # Restore errexit setting
    $_ERREXIT_XENSERVER
}

# Configure ovs agent for compute node, i.e. q-domua
function config_ovs_agent {
    # TODO(huan): remove below line when https://review.openstack.org/#/c/435224/ merged
    sudo rm -f $NEUTRON_CORE_PLUGIN_CONF.domU

    # Make a copy of our config for domU
    sudo cp $NEUTRON_CORE_PLUGIN_CONF $NEUTRON_CORE_PLUGIN_CONF.domU
    # Change domU's config file to STACK_USER
    sudo chown $STACK_USER:$STACK_USER $NEUTRON_CORE_PLUGIN_CONF.domU

    # Configure xen configuration for neutron rootwrap.conf
    iniset $NEUTRON_ROOTWRAP_CONF_FILE xenapi xenapi_connection_url "$XENAPI_CONNECTION_URL"
    iniset $NEUTRON_ROOTWRAP_CONF_FILE xenapi xenapi_connection_username "$XENAPI_USER"
    iniset $NEUTRON_ROOTWRAP_CONF_FILE xenapi xenapi_connection_password "$XENAPI_PASSWORD"

    # Configure q-domua, use Dom0's hostname and concat suffix
    local ssh_dom0=$(get_dom0_ssh)
    local dom0_hostname=`$ssh_dom0 "hostname"`
    iniset $NEUTRON_CORE_PLUGIN_CONF.domU DEFAULT host "${dom0_hostname}"

    # Configure xenapi for q-domua to use its xenserver rootwrap daemon
    iniset $NEUTRON_CORE_PLUGIN_CONF.domU xenapi connection_url "$XENAPI_CONNECTION_URL"
    iniset $NEUTRON_CORE_PLUGIN_CONF.domU xenapi connection_username "$XENAPI_USER"
    iniset $NEUTRON_CORE_PLUGIN_CONF.domU xenapi connection_password "$XENAPI_PASSWORD"
    iniset $NEUTRON_CORE_PLUGIN_CONF.domU agent root_helper ""
    iniset $NEUTRON_CORE_PLUGIN_CONF.domU agent root_helper_daemon "xenapi_root_helper"

    # TODO(huanxie): Enable minimized polling now bug 1495423 is fixed
    iniset $NEUTRON_CORE_PLUGIN_CONF.domU agent minimize_polling False

    # Set integration bridge for ovs-agent in compute node (q-domua)
    iniset $NEUTRON_CORE_PLUGIN_CONF.domU ovs integration_bridge $OVS_BRIDGE

    # Set OVS native interface for ovs-agent in compute node (q-domua)
    local dom0_ip=$(echo "$XENAPI_CONNECTION_URL" | cut -d "/" -f 3)
    iniset $NEUTRON_CORE_PLUGIN_CONF.domU ovs ovsdb_connection tcp:$dom0_ip:$DOM0_OVSDB_PORT
    iniset $NEUTRON_CORE_PLUGIN_CONF.domU ovs of_listen_address $HOST_IP

    if [[ "$ENABLE_TENANT_VLANS" == "True" ]]; then
        # Create a bridge "br-$VLAN_INTERFACE" and add port
        _neutron_ovs_base_add_bridge "br-$VLAN_INTERFACE"
        sudo ovs-vsctl -- --may-exist add-port "br-$VLAN_INTERFACE" $VLAN_INTERFACE
        # Set bridge mapping for q-domua which is for compute node
        iniset $NEUTRON_CORE_PLUGIN_CONF.domU ovs bridge_mappings "physnet1:$FLAT_NETWORK_BRIDGE"
        # Set bridge mappings for q-agt as we have an extra bridge mapping physnet1 for domU and dom0
        iniset $NEUTRON_CORE_PLUGIN_CONF ovs bridge_mappings "physnet1:br-$VLAN_INTERFACE,$PHYSICAL_NETWORK:$OVS_PHYSICAL_BRIDGE"
    elif [[ "$OVS_ENABLE_TUNNELING" == "True" ]]; then
        # Set tunnel ip for openvswitch agent in compute node (q-domua).
        # All q-domua's OVS commands are executed in Dom0, so the tunnel
        # is established between Dom0 and DomU (where DevStack runs), and
        # we need to set local_ip in q-domua that is used for Dom0
        iniset $NEUTRON_CORE_PLUGIN_CONF.domU ovs bridge_mappings ""
        iniset $NEUTRON_CORE_PLUGIN_CONF.domU ovs local_ip $dom0_ip
        iniset $NEUTRON_CORE_PLUGIN_CONF.domU ovs tunnel_bridge $OVS_TUNNEL_BRIDGE
    fi
}

function config_nova_compute {
    iniset $NOVA_CONF xenserver vif_driver nova.virt.xenapi.vif.XenAPIOpenVswitchDriver
    iniset $NOVA_CONF xenserver ovs_integration_bridge $OVS_BRIDGE
    iniset $NOVA_CONF DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver

    # Configure nova-compute, use Dom0's hostname and concat suffix
    local ssh_dom0=$(get_dom0_ssh)
    local dom0_hostname=`$ssh_dom0 "hostname"`
    iniset $NOVA_CONF DEFAULT host "${dom0_hostname}"
}

function config_ceilometer {
    if is_service_enabled ceilometer-acompute; then
        local ssh_dom0=$(get_dom0_ssh)
        local dom0_hostname=`$ssh_dom0 "hostname"`
        iniset $CEILOMETER_CONF DEFAULT host "${dom0_hostname}"

        iniset $CEILOMETER_CONF DEFAULT hypervisor_inspector xenapi
        iniset $CEILOMETER_CONF xenapi connection_url "$XENAPI_CONNECTION_URL"
        iniset $CEILOMETER_CONF xenapi connection_username "$XENAPI_USER"
        iniset $CEILOMETER_CONF xenapi connection_password "$XENAPI_PASSWORD"
        # For the XenAPI driver, we cannot use the default value "libvirt_metadata"
        # https://github.com/openstack/ceilometer/blob/master/ceilometer/compute/discovery.py#L125
        iniset $CEILOMETER_CONF compute instance_discovery_method naive
    fi
}

# Start neutron-openvswitch-agent for Dom0 (q-domua)
function start_ovs_agent {
    local config_file="--config-file $NEUTRON_CONF --config-file $NEUTRON_CORE_PLUGIN_CONF.domU"

    # TODO(huanxie): neutron-legacy is deprecated, checking is_neutron_legacy_enabled
    # can make our code more compatible with devstack future changes, see link
    # https://github.com/openstack-dev/devstack/blob/master/lib/neutron-legacy#L62
    if is_neutron_legacy_enabled; then
        # TODO(huanxie): delete below when https://review.openstack.org/#/c/435224/ merged
        stop_process q-domua

        run_process q-domua "$AGENT_BINARY $config_file"
    else
        run_process neutron-agent-dom0 "$NEUTRON_BIN_DIR/$NEUTRON_AGENT_BINARY $config_file"
    fi
}

# Stop neutron-openvswitch-agent for Dom0 (q-domua)
function stop_ovs_agent {
    if is_neutron_legacy_enabled; then
        stop_process q-domua
    else
        stop_process neutron-agent-dom0
    fi
}

function start_ceilometer_acompute {
    if is_service_enabled ceilometer-acompute; then
        run_process ceilometer-acompute "$CEILOMETER_BIN_DIR/ceilometer-polling --polling-namespaces compute --config-file $CEILOMETER_CONF"
    fi
}

# Remove Dom0 firewall rules created by this plugin
function cleanup_dom0_iptables {
    local ssh_dom0=$(get_dom0_ssh)

    # Save errexit setting
    _ERREXIT_XENSERVER=$(set +o | grep errexit)
    set +o errexit

    $ssh_dom0 "iptables -t filter -L $XS_DOM0_IPTABLES_CHAIN"
    local chain_result=$?
    if [ "$chain_result" == "0" ]; then
        $ssh_dom0 "iptables -t filter -F $XS_DOM0_IPTABLES_CHAIN"
        $ssh_dom0 "iptables -t filter -D INPUT -j $XS_DOM0_IPTABLES_CHAIN"
        $ssh_dom0 "iptables -t filter -X $XS_DOM0_IPTABLES_CHAIN"
    fi

    # Restore errexit setting
    $_ERREXIT_XENSERVER
}

# Prepare directories for kernels and images in Dom0
function create_dom0_kernel_and_image_dir {
    local ssh_dom0=$(get_dom0_ssh)

    {
        echo "set -eux"
        cat $OS_XENAPI_DIR/devstack/dom0_functions
        echo "create_directory_for_images"
        echo "create_directory_for_kernels"
    } | $ssh_dom0
}

# Install conntrack-tools in Dom0
function install_dom0_conntrack {
    local ssh_dom0=$(get_dom0_ssh)

    {
        echo "set -eux"
        cat $OS_XENAPI_DIR/devstack/dom0_functions
        echo "install_conntrack_tools"
    } | $ssh_dom0
}

if [[ "$MODE" == "stack" ]]; then
    case "$PHASE" in
        pre-install)
            # Called after system (OS) setup is complete and before project source is installed
            ;;
        install)
            # Called after the layer 1 and 2 projects source and their dependencies have been installed
            install_dom0_plugins
            config_dom0_iptables
            install_dom0_conntrack
            create_dom0_kernel_and_image_dir

            # set image variables
            DEFAULT_IMAGE_NAME="cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-disk"
            DEFAULT_IMAGE_FILE_NAME="cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-disk.vhd.tgz"
            IMAGE_URLS="http://ca.downloads.xensource.com/OpenStack/cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-disk.vhd.tgz"
            IMAGE_URLS+=",http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-x86_64-uec.tar.gz"
            ;;
        post-config)
            # Called after the layer 1 and 2 services have been configured.
            # All configuration files for enabled services should exist at this point.
# Configure XenServer neutron specific items for q-domua and n-cpu config_nova_compute config_ovs_agent config_ceilometer ;; extra) # Called near the end after layer 1 and 2 services have been started start_ovs_agent start_ceilometer_acompute ;; test-config) # Called at the end of a DevStack run to configure tempest # or any other test environment iniset $TEMPEST_CONFIG compute hypervisor_type XenServer iniset $TEMPEST_CONFIG compute volume_device_name xvdb iniset $TEMPEST_CONFIG scenario img_file $DEFAULT_IMAGE_FILE_NAME # TODO(huanxie) Maybe we can set some conf here for CI? ;; esac elif [[ "$MODE" == "unstack" ]]; then # Called by unstack.sh before other services are shut down stop_ovs_agent cleanup_dom0_iptables elif [[ "$MODE" == "clean" ]]; then # Called by clean.sh before other services are cleaned, but after unstack.sh has been called cleanup_dom0_iptables # TODO(huanxie) # clean the OVS bridge created in Dom0? fi os-xenapi-0.3.1/devstack/settings0000664000175000017500000000000013160424533020122 0ustar jenkinsjenkins00000000000000os-xenapi-0.3.1/devstack/override-defaults0000664000175000017500000000007213160424533021717 0ustar jenkinsjenkins00000000000000export CIRROS_VERSION="0.3.5" export CIRROS_ARCH="x86_64" os-xenapi-0.3.1/devstack/README.rst0000664000175000017500000000066313160424533020045 0ustar jenkinsjenkins00000000000000====================== Enabling in Devstack ====================== This plugin installs the XenServer Dom0-specific scripts into Dom0 and sets up the configuration items needed by the Neutron Open vSwitch agent. 1. Download DevStack 2. Add this repo as an external repository in `local.conf`:: [[local|localrc]] enable_plugin os-xenapi https://github.com/openstack/os-xenapi.git [GITREF] 3.
run ``stack.sh`` os-xenapi-0.3.1/os_xenapi.egg-info/0000775000175000017500000000000013160424745020231 5ustar jenkinsjenkins00000000000000os-xenapi-0.3.1/os_xenapi.egg-info/dependency_links.txt0000664000175000017500000000000113160424744024276 0ustar jenkinsjenkins00000000000000 os-xenapi-0.3.1/os_xenapi.egg-info/requires.txt0000664000175000017500000000025513160424744022632 0ustar jenkinsjenkins00000000000000pbr!=2.1.0,>=2.0.0 Babel!=2.4.0,>=2.3.4 eventlet!=0.18.3,!=0.20.1,<0.21.0,>=0.18.2 oslo.concurrency>=3.20.0 oslo.log>=3.30.0 oslo.utils>=3.28.0 oslo.i18n>=3.15.3 six>=1.9.0 os-xenapi-0.3.1/os_xenapi.egg-info/SOURCES.txt0000664000175000017500000000721213160424745022117 0ustar jenkinsjenkins00000000000000.coveragerc .mailmap .testr.conf AUTHORS CONTRIBUTING.rst ChangeLog HACKING.rst LICENSE Makefile README.rst babel.cfg exclusion_py3.txt requirements.txt setup.cfg setup.py test-requirements.txt tox.ini devstack/README.rst devstack/dom0_functions devstack/override-defaults devstack/plugin.sh devstack/settings doc/source/conf.py doc/source/contributing.rst doc/source/index.rst doc/source/installation.rst doc/source/readme.rst doc/source/usage.rst os_xenapi/__init__.py os_xenapi.egg-info/PKG-INFO os_xenapi.egg-info/SOURCES.txt os_xenapi.egg-info/dependency_links.txt os_xenapi.egg-info/not-zip-safe os_xenapi.egg-info/pbr.json os_xenapi.egg-info/requires.txt os_xenapi.egg-info/top_level.txt os_xenapi/client/XenAPI.py os_xenapi/client/__init__.py os_xenapi/client/disk_management.py os_xenapi/client/exception.py os_xenapi/client/host_agent.py os_xenapi/client/host_glance.py os_xenapi/client/host_management.py os_xenapi/client/host_network.py os_xenapi/client/host_xenstore.py os_xenapi/client/i18n.py os_xenapi/client/objects.py os_xenapi/client/session.py os_xenapi/client/utils.py os_xenapi/client/vm_management.py os_xenapi/client/image/__init__.py os_xenapi/client/image/vdi_handler.py os_xenapi/client/image/vhd_utils.py os_xenapi/dom0/README 
os_xenapi/dom0/xenapi-plugins.spec os_xenapi/dom0/etc/xapi.d/plugins/agent.py os_xenapi/dom0/etc/xapi.d/plugins/bandwidth.py os_xenapi/dom0/etc/xapi.d/plugins/config_file.py os_xenapi/dom0/etc/xapi.d/plugins/console.py os_xenapi/dom0/etc/xapi.d/plugins/dom0_plugin_version.py os_xenapi/dom0/etc/xapi.d/plugins/dom0_pluginlib.py os_xenapi/dom0/etc/xapi.d/plugins/glance.py os_xenapi/dom0/etc/xapi.d/plugins/ipxe.py os_xenapi/dom0/etc/xapi.d/plugins/kernel.py os_xenapi/dom0/etc/xapi.d/plugins/migration.py os_xenapi/dom0/etc/xapi.d/plugins/netwrap.py os_xenapi/dom0/etc/xapi.d/plugins/partition_utils.py os_xenapi/dom0/etc/xapi.d/plugins/utils.py os_xenapi/dom0/etc/xapi.d/plugins/workarounds.py os_xenapi/dom0/etc/xapi.d/plugins/xenhost.py os_xenapi/dom0/etc/xapi.d/plugins/xenstore.py os_xenapi/tests/__init__.py os_xenapi/tests/base.py os_xenapi/tests/test_os_xenapi.py os_xenapi/tests/client/__init__.py os_xenapi/tests/client/test_host_glance.py os_xenapi/tests/client/test_objects.py os_xenapi/tests/client/test_session.py os_xenapi/tests/client/test_utils.py os_xenapi/tests/client/image/__init__.py os_xenapi/tests/client/image/test_init.py os_xenapi/tests/client/image/test_vdi_handler.py os_xenapi/tests/client/image/test_vhd_utils.py os_xenapi/tests/plugins/__init__.py os_xenapi/tests/plugins/plugin_test.py os_xenapi/tests/plugins/test_agent.py os_xenapi/tests/plugins/test_bandwidth.py os_xenapi/tests/plugins/test_dom0_plugin_version.py os_xenapi/tests/plugins/test_dom0_pluginlib.py os_xenapi/tests/plugins/test_glance.py os_xenapi/tests/plugins/test_partition_utils.py os_xenapi/tests/plugins/test_xenhost.py releasenotes/notes/.placeholder releasenotes/source/conf.py releasenotes/source/index.rst releasenotes/source/unreleased.rst releasenotes/source/_static/.placeholder releasenotes/source/_templates/.placeholder tools/install-devstack-xen.sh tools/install_on_xen_host.sh tools/tox_install.sh tools/install/create_ubuntu_template.sh tools/install/common/functions 
tools/install/conf/ubuntupreseed.cfg tools/install/conf/xenrc tools/install/devstack/install_devstack.sh tools/install/scripts/install-os-vpx.sh tools/install/scripts/install_ubuntu_template.sh tools/install/scripts/manage-vdi tools/install/scripts/on_exit.sh tools/install/scripts/persist_domU_interfaces.sh tools/install/scripts/prepare_guest.sh tools/install/scripts/prepare_guest_template.sh tools/install/scripts/ubuntu_latecommand.sh tools/install/scripts/uninstall-os-vpx.shos-xenapi-0.3.1/os_xenapi.egg-info/not-zip-safe0000664000175000017500000000000113160424731022452 0ustar jenkinsjenkins00000000000000 os-xenapi-0.3.1/os_xenapi.egg-info/pbr.json0000664000175000017500000000005613160424744021707 0ustar jenkinsjenkins00000000000000{"git_version": "7dce682", "is_release": true}os-xenapi-0.3.1/os_xenapi.egg-info/PKG-INFO0000664000175000017500000002746313160424744021341 0ustar jenkinsjenkins00000000000000Metadata-Version: 1.1 Name: os-xenapi Version: 0.3.1 Summary: XenAPI library for OpenStack projects Home-page: http://www.citrix.com Author: Citrix Author-email: openstack@citrix.com License: UNKNOWN Description-Content-Type: UNKNOWN Description: ========= os-xenapi ========= XenAPI library for OpenStack projects This library provides the support functions needed to connect to and manage a XenAPI-based hypervisor, such as Citrix's XenServer. * Free software: Apache license * Source: http://git.openstack.org/cgit/openstack/os-xenapi * Bugs: http://bugs.launchpad.net/os-xenapi Features -------- * TODO ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Install Devstack on XenServer ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Getting Started With XenServer and Devstack ___________________________________________ The purpose of the code in the install directory is to help developers bootstrap a XenServer(7.0 and above) + OpenStack development environment. This guide gives some pointers on how to get started. Xenserver is a Type 1 hypervisor, so it is best installed on bare metal. 
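Since the bootstrap tooling assumes XenServer 7.0 or above, a quick preflight check in Dom0 can save a failed install later. This is a hedged sketch, not part of the os-xenapi scripts: it uses the standard `xe host-param-get` CLI (the `software-version` map exposes `product_version`), and the host UUID argument is yours to supply.

```shell
# Query the host's product version via the XenServer CLI.  Isolated in its
# own function so it can be stubbed when there is no real host around.
host_version() {
    xe host-param-get uuid="$1" param-name=software-version \
        param-key=product_version
}

# Fail early when the host is older than the XenServer 7.0 requirement.
require_xs7() {
    local ver major
    ver=$(host_version "$1") || return 1
    major=${ver%%.*}            # keep only the major version number
    if [ "$major" -ge 7 ]; then
        echo "XenServer $ver is supported"
    else
        echo "XenServer $ver is too old; 7.0 or above is required" >&2
        return 1
    fi
}
```

Run `require_xs7 "$(xe host-list --minimal)"` in Dom0 before starting the installation.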
The OpenStack services are configured to run within a virtual machine on the XenServer host. The VM uses the XAPI toolstack to communicate with the host over a network connection (see `MGT_BRIDGE_OR_NET_NAME`). The provided local.conf helps to build a basic devstack environment. Introduction ............ Requirements ************ - A management network with access to the internet - A DHCP server to provide addresses on this management network - XenServer 7.0 or above installed with a local EXT SR (labelled "Optimised for XenDesktop" in the installer) or a remote NFS SR This network will be used as the OpenStack management network. The VM (Tenant) Network and the Public Network will not be connected to any physical interfaces, only new virtual networks which will be created by the `install_on_xen_host.sh` script. Steps to follow *************** You should install the XenServer host first, then launch the devstack installation in one of two ways, - From a remote linux client (Recommended) - Download install-devstack-xen.sh to the linux client - Configure the local.conf contents in install-devstack-xen.sh. - Generate passwordless ssh key using "ssh-keygen -t rsa -N "" -f devstack_key.priv" - Launch script using "install-devstack-xen.sh XENSERVER mypassword devstack_key.priv" with some optional arguments - On the XenServer host - Download os-xenapi to XenServer - Create and customise a `local.conf` - Start `install_on_xen_host.sh` script Brief explanation ***************** The `install-devstack-xen.sh` script will: - Verify some pre-requisites to installation - Download os-xenapi folder to XenServer host - Generate a standard local.conf file - Call install_on_xen_host.sh to do devstack installation - Run tempest test if required The 'install_on_xen_host.sh' script will: - Verify the host configuration - Create template for devstack DomU VM if needed. 
Including: - Creating the named networks, if they don't exist - Preseed-Netinstall an Ubuntu Virtual Machine, with one network interface: - `eth0` - Connected to `UBUNTU_INST_BRIDGE_OR_NET_NAME` (which defaults to `MGT_BRIDGE_OR_NET_NAME`) - After the Ubuntu install process has finished, the network configuration is modified to: - `eth0` - Management interface, connected to `MGT_BRIDGE_OR_NET_NAME`. Note that XAPI must be accessible through this network. - `eth1` - VM interface, connected to `VM_BRIDGE_OR_NET_NAME` - `eth2` - Public interface, connected to `PUB_BRIDGE_OR_NET_NAME` - Create a template of the VM and destroy the current VM - Create the DomU VM according to the template and ssh to the VM - Create a Linux service to start devstack after a VM reboot. The service will: - Download the devstack source code if needed - Call unstack.sh and stack.sh to install devstack - Reboot the DomU VM Step 1: Install XenServer ......................... Install XenServer on a clean box. You can download the latest XenServer for free from: http://www.xenserver.org/ The XenServer IP configuration depends on your local network setup. If you are using dhcp, make a reservation for XenServer, so its IP address won't change over time. Make a note of the XenServer's IP address, as it has to be specified in `local.conf`. The other option is to manually specify the IP setup for the XenServer box. Please make sure that a gateway and a nameserver are configured, as `install-devstack-xen.sh` will connect to github.com to get source-code snapshots. OpenStack currently only supports file-based (thin provisioned) SR types EXT and NFS. As such, the default SR should either be a local EXT SR or a remote NFS SR. To create a local EXT SR, use the "Optimised for XenDesktop" option in the XenServer host installer. Step 2: Download install-devstack-xen.sh ........................................
On your remote linux client, get the install script from https://raw.githubusercontent.com/openstack/os-xenapi/master/tools/install-devstack-xen.sh Step 3: local.conf overview ........................... Devstack uses a local.conf for user-specific configuration. install-devstack-xen provides a configuration file which is suitable for many simple use cases. In more advanced use cases, you may need to configure the local.conf file after installation - or use the second approach outlined above to bypass the install-devstack-xen script. local.conf sample:: [[local|localrc]] enable_plugin os-xenapi https://github.com/openstack/os-xenapi.git # Passwords MYSQL_PASSWORD=citrix SERVICE_TOKEN=citrix ADMIN_PASSWORD=citrix SERVICE_PASSWORD=citrix RABBIT_PASSWORD=citrix GUEST_PASSWORD=citrix XENAPI_PASSWORD="$XENSERVER_PASS" SWIFT_HASH="66a3d6b56c1f479c8b4e70ab5c2000f5" # Do not use secure delete CINDER_SECURE_DELETE=False # Compute settings VIRT_DRIVER=xenserver # Tempest settings TERMINATE_TIMEOUT=90 BUILD_TIMEOUT=600 # DevStack settings LOGDIR=${LOGDIR} LOGFILE=${LOGDIR}/stack.log # Turn on verbosity (password input does not work otherwise) VERBOSE=True # XenAPI specific XENAPI_CONNECTION_URL="http://$XENSERVER_IP" VNCSERVER_PROXYCLIENT_ADDRESS="$XENSERVER_IP" # Neutron specific part ENABLED_SERVICES+=neutron,q-domua Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan,flat Q_ML2_TENANT_NETWORK_TYPE=vxlan VLAN_INTERFACE=eth1 PUBLIC_INTERFACE=eth2 Step 4: Run `./install-devstack-xen.sh` on your remote linux client ................................................................... An example:: # Create a passwordless ssh key ssh-keygen -t rsa -N "" -f devstack_key.priv # Install devstack ./install-devstack-xen.sh XENSERVER mypassword devstack_key.priv If you don't select wait till launch (using "-w 0" option), once this script finishes executing, login the VM (DevstackOSDomU) that it installed and tail the /opt/stack/devstack_logs/stack.log file. 
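The tail-and-wait step can also be scripted. A minimal sketch, with one assumption flagged: the success marker below relies on stack.sh printing "This is your host IP address" when it completes, and the log path matches the one mentioned above.

```shell
# Block until the DevStack run recorded in a log file completes.  The
# default marker is the line stack.sh prints on success; pass your own
# marker as the second argument if your DevStack version differs.
wait_for_stack() {
    local log_file="$1"
    local marker="${2:-This is your host IP address}"
    until grep -q "$marker" "$log_file" 2>/dev/null; do
        sleep 30
    done
    echo "DevStack run logged in $log_file finished"
}
```

Inside DevstackOSDomU you would call it as `wait_for_stack /opt/stack/devstack_logs/stack.log`, or wrap the call in ssh from the remote client.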
You will need to wait until stack.sh has finished executing; stack.log will stop growing once it has. Appendix ________ This section contains useful information for using specific Ubuntu network mirrors, which may be required in some environments to resolve access or performance issues. As these are advanced options, the "install-devstack-xen" approach does not support them. If you wish to use these options, please follow the approach outlined above, which involves manually downloading os-xenapi and configuring local.conf (or xenrc in the cases below). Using a specific Ubuntu mirror for installation ............................................... To speed up the Ubuntu installation, you can use a specific mirror. To specify a mirror explicitly, include the following settings in your `xenrc` file: sample code:: UBUNTU_INST_HTTP_HOSTNAME="archive.ubuntu.com" UBUNTU_INST_HTTP_DIRECTORY="/ubuntu" These variables set the `mirror/http/hostname` and `mirror/http/directory` settings in the Ubuntu preseed file. The minimal Ubuntu VM will use the specified parameters. Use an http proxy to speed up Ubuntu installation ................................................. To further speed up the Ubuntu VM and package installation, an internal http proxy can be used. `squid-deb-proxy` has proven to be stable. To use an http proxy, specify the following in your `xenrc` file: sample code:: UBUNTU_INST_HTTP_PROXY="http://ubuntu-proxy.somedomain.com:8000" Exporting the Ubuntu VM to an XVA ********************************* Assuming you have an nfs export, `TEMPLATE_NFS_DIR`, the following sample code will export the jeos (just enough OS) template to an XVA that can be re-imported at a later date.
sample code:: TEMPLATE_FILENAME=devstack-jeos.xva TEMPLATE_NAME=jeos_template_for_ubuntu mountdir=$(mktemp -d) mount -t nfs "$TEMPLATE_NFS_DIR" "$mountdir" VM="$(xe template-list name-label="$TEMPLATE_NAME" --minimal)" xe template-export template-uuid=$VM filename="$mountdir/$TEMPLATE_FILENAME" umount "$mountdir" rm -rf "$mountdir" Import the Ubuntu VM ******************** Given you have an nfs export `TEMPLATE_NFS_DIR` where you exported the Ubuntu VM as `TEMPLATE_FILENAME`: sample code:: mountdir=$(mktemp -d) mount -t nfs "$TEMPLATE_NFS_DIR" "$mountdir" xe vm-import filename="$mountdir/$TEMPLATE_FILENAME" umount "$mountdir" rm -rf "$mountdir" Platform: UNKNOWN Classifier: Environment :: OpenStack Classifier: Intended Audience :: Information Technology Classifier: Intended Audience :: System Administrators Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.3 Classifier: Programming Language :: Python :: 3.5 os-xenapi-0.3.1/os_xenapi.egg-info/top_level.txt0000664000175000017500000000001213160424744022753 0ustar jenkinsjenkins00000000000000os_xenapi os-xenapi-0.3.1/os_xenapi/0000775000175000017500000000000013160424745016537 5ustar jenkinsjenkins00000000000000os-xenapi-0.3.1/os_xenapi/__init__.py0000664000175000017500000000123113160424533020640 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import pbr.version __version__ = pbr.version.VersionInfo( 'os_xenapi').version_string() os-xenapi-0.3.1/os_xenapi/dom0/0000775000175000017500000000000013160424745017376 5ustar jenkinsjenkins00000000000000os-xenapi-0.3.1/os_xenapi/dom0/xenapi-plugins.spec0000664000175000017500000000307313160424533023213 0ustar jenkinsjenkins00000000000000Name: xenapi-plugins Version: %{version} Release: 1 Summary: Files for XenAPI support. License: ASL 2.0 Group: Applications/Utilities Source0: xenapi-plugins-%{version}.tar.gz BuildArch: noarch BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n) %define debug_package %{nil} %description This package contains files that are required for XenAPI support for OpenStack. %prep %setup -q -n xenapi-plugins %install rm -rf $RPM_BUILD_ROOT mkdir -p $RPM_BUILD_ROOT/etc rsync -avz --exclude '*.pyc' --exclude '*.pyo' xapi.d $RPM_BUILD_ROOT/etc chmod a+x $RPM_BUILD_ROOT/etc/xapi.d/plugins/* %clean rm -rf $RPM_BUILD_ROOT %post set -eu default_sr="$(xe pool-list params=default-SR minimal=true)" if [ -z "$default_sr" ]; then echo "Failed to get the default SR" >&2 exit 1 fi sr_mount_dir="/var/run/sr-mount/$default_sr" if ! [ -d "$sr_mount_dir" ]; then echo "Cannot find the SR mount folder" >&2 exit 0 fi if ! [ -d /images ]; then os_images_dir="$sr_mount_dir/os-images" echo "Creating /images" >&2 if !
[ -d "$os_images_dir" ]; then echo "Creating $os_images_dir" >&2 mkdir -p "$os_images_dir" fi echo "Setting up symlink: /images -> $os_images_dir" >&2 ln -s "$os_images_dir" /images fi images_dev=$(stat -c %d "/images/") sr_dev=$(stat -c %d "$sr_mount_dir/") if [ "$images_dev" != "$sr_dev" ]; then echo "ERROR: /images/ and the default SR are on different devices" exit 1 fi %files %defattr(-,root,root,-) /etc/xapi.d/plugins/* os-xenapi-0.3.1/os_xenapi/dom0/README0000664000175000017500000000105213160424533020250 0ustar jenkinsjenkins00000000000000This directory contains files that are required for XenAPI support. They should be installed in the XenServer / Xen Cloud Platform dom0. If you install them manually, you will need to ensure that the newly added files are executable. You can do this by running the following command (from dom0): chmod a+x /etc/xapi.d/plugins/* Alternatively, you can build an RPM package by running the following command: cd $OS_XENAPI_ROOT make rpm and install the package by running the following command in dom0: rpm -i xenapi-plugins-*.noarch.rpm os-xenapi-0.3.1/os_xenapi/dom0/etc/0000775000175000017500000000000013160424745020151 5ustar jenkinsjenkins00000000000000os-xenapi-0.3.1/os_xenapi/dom0/etc/xapi.d/0000775000175000017500000000000013160424745021334 5ustar jenkinsjenkins00000000000000os-xenapi-0.3.1/os_xenapi/dom0/etc/xapi.d/plugins/0000775000175000017500000000000013160424745023015 5ustar jenkinsjenkins00000000000000os-xenapi-0.3.1/os_xenapi/dom0/etc/xapi.d/plugins/dom0_plugin_version.py0000664000175000017500000000332313160424533027345 0ustar jenkinsjenkins00000000000000#!/usr/bin/env python # Copyright (c) 2013 OpenStack Foundation # Copyright (c) 2013 Citrix Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # NOTE: XenServer still only supports Python 2.4 in its dom0 userspace # which means the Nova xenapi plugins must use only Python 2.4 features """Returns the version of the nova plugins""" import utils # MAJOR VERSION: Incompatible changes # MINOR VERSION: Compatible changes, new plugins, etc. # NOTE(sfinucan): 2.0 will be equivalent to the last in the 1.x stream # 1.0 - Initial version. # 1.1 - New call to check GC status # 1.2 - Added support for pci passthrough devices # 1.3 - Add vhd2 functions for doing glance operations by url # 1.4 - Add support of Glance v2 api # 1.5 - Added function for network configuration on ovs bridge # 1.6 - Add function for network configuration on Linux bridge # 1.7 - Add Partition utilities plugin # 1.8 - Add support for calling plug-ins with the .py suffix # 2.0 - Remove plugin files which don't have .py suffix # 2.1 - Add interface ovs_create_port in xenhost.py PLUGIN_VERSION = "2.1" def get_version(session): return PLUGIN_VERSION if __name__ == '__main__': utils.register_plugin_calls(get_version) os-xenapi-0.3.1/os_xenapi/dom0/etc/xapi.d/plugins/partition_utils.py0000664000175000017500000000754113160424533026622 0ustar jenkinsjenkins00000000000000#!/usr/bin/env python # Copyright (c) 2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # NOTE: XenServer still only supports Python 2.4 in its dom0 userspace # which means the Nova xenapi plugins must use only Python 2.4 features from distutils.version import StrictVersion import logging import os import re import time import dom0_pluginlib as pluginlib import utils pluginlib.configure_logging("disk_utils") def wait_for_dev(session, dev_path, max_seconds): for i in range(0, max_seconds): if os.path.exists(dev_path): return dev_path time.sleep(1) return "" def _get_sfdisk_version(): out = utils.run_command(['/sbin/sfdisk', '-v']) if out: # Return the first two numbers from the version. # In XS6.5, it's 2.13-pre7. Just return 2.13 for this case. pattern = re.compile(r"(\d+)\.(\d+)") match = pattern.search(out.split('\n')[0]) if match: return match.group(0) def make_partition(session, dev, partition_start, partition_end): # Since XS 7.0, which ships sfdisk v2.23, sfdisk has a bug: it wrongly # calculates cylinders when sectors are specified as the unit (-uS). # That bug makes the partition operation fail. It is fixed in sfdisk # 2.26, so as a workaround use the '--force' option for versions >=2.23 # and <=2.25; '--force' ignores the wrong cylinder value and works as # expected.
VER_FORCE_MIN = '2.23' VER_FORCE_MAX = '2.25' dev_path = utils.make_dev_path(dev) if partition_end != "-": raise pluginlib.PluginError("Can only create unbounded partitions") sfdisk_ver = _get_sfdisk_version() cmd_list = ['sfdisk', '-uS', dev_path] if sfdisk_ver: if StrictVersion(sfdisk_ver) >= StrictVersion(VER_FORCE_MIN) and \ StrictVersion(sfdisk_ver) <= StrictVersion(VER_FORCE_MAX): cmd_list = ['sfdisk', '--force', '-uS', dev_path] utils.run_command(cmd_list, '%s,;\n' % (partition_start)) def _mkfs(fs, path, label): """Format a file or block device :param fs: Filesystem type (only 'swap', 'ext3' supported) :param path: Path to file or block device to format :param label: Volume label to use """ if fs == 'swap': args = ['mkswap'] elif fs == 'ext3': args = ['mkfs', '-t', fs] # add -F to force no interactive execute on non-block device. args.extend(['-F']) if label: args.extend(['-L', label]) else: raise pluginlib.PluginError("Partition type %s not supported" % fs) args.append(path) utils.run_command(args) def mkfs(session, dev, partnum, fs_type, fs_label): dev_path = utils.make_dev_path(dev) out = utils.run_command(['kpartx', '-avspp', dev_path]) try: logging.info('kpartx output: %s' % out) mapperdir = os.path.join('/dev', 'mapper') dev_base = os.path.basename(dev) partition_path = os.path.join(mapperdir, "%sp%s" % (dev_base, partnum)) _mkfs(fs_type, partition_path, fs_label) finally: # Always remove partitions otherwise we can't unplug the VBD utils.run_command(['kpartx', '-dvspp', dev_path]) if __name__ == "__main__": utils.register_plugin_calls(wait_for_dev, make_partition, mkfs) os-xenapi-0.3.1/os_xenapi/dom0/etc/xapi.d/plugins/xenhost.py0000664000175000017500000005210213160424533025052 0ustar jenkinsjenkins00000000000000#!/usr/bin/env python # Copyright 2011 OpenStack Foundation # Copyright 2011 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # NOTE: XenServer still only supports Python 2.4 in it's dom0 userspace # which means the Nova xenapi plugins must use only Python 2.4 features # TODO(sfinucan): Resolve all 'noqa' items once the above is no longer true # # XenAPI plugin for host operations # try: import json except ImportError: import simplejson as json import logging import re import sys import time import utils import dom0_pluginlib as pluginlib import XenAPI import XenAPIPlugin try: import xmlrpclib except ImportError: import six.moves.xmlrpc_client as xmlrpclib pluginlib.configure_logging("xenhost") host_data_pattern = re.compile(r"\s*(\S+) \([^\)]+\) *: ?(.*)") config_file_path = "/usr/etc/xenhost.conf" DEFAULT_TRIES = 23 DEFAULT_SLEEP = 10 def jsonify(fnc): def wrapper(*args, **kwargs): return json.dumps(fnc(*args, **kwargs)) return wrapper class TimeoutError(StandardError): pass def _run_command(cmd, cmd_input=None): """Wrap utils.run_command to raise PluginError on failure""" try: return utils.run_command(cmd, cmd_input=cmd_input) except utils.SubprocessException, e: # noqa raise pluginlib.PluginError(e.err) def _resume_compute(session, compute_ref, compute_uuid): """Resume compute node on slave host after pool join. This has to happen regardless of the success or failure of the join operation. """ try: # session is valid if the join operation has failed session.xenapi.VM.start(compute_ref, False, True) except XenAPI.Failure: # if session is invalid, e.g. 
xapi has restarted, then the pool # join has been successful, wait for xapi to become alive again for c in range(0, DEFAULT_TRIES): try: _run_command(["xe", "vm-start", "uuid=%s" % compute_uuid]) return except pluginlib.PluginError: logging.exception('Waited %d seconds for the slave to ' 'become available.' % (c * DEFAULT_SLEEP)) time.sleep(DEFAULT_SLEEP) raise pluginlib.PluginError('Unrecoverable error: the host has ' 'not come back for more than %d seconds' % (DEFAULT_SLEEP * (DEFAULT_TRIES + 1))) @jsonify def set_host_enabled(self, arg_dict): """Sets this host's ability to accept new instances. It will otherwise continue to operate normally. """ enabled = arg_dict.get("enabled") if enabled is None: raise pluginlib.PluginError( "Missing 'enabled' argument to set_host_enabled") host_uuid = arg_dict['host_uuid'] if enabled == "true": result = _run_command(["xe", "host-enable", "uuid=%s" % host_uuid]) elif enabled == "false": result = _run_command(["xe", "host-disable", "uuid=%s" % host_uuid]) else: raise pluginlib.PluginError("Illegal enabled status: %s" % enabled) # Should be empty string if result: raise pluginlib.PluginError(result) # Return the current enabled status cmd = ["xe", "host-param-get", "uuid=%s" % host_uuid, "param-name=enabled"] host_enabled = _run_command(cmd) if host_enabled == "true": status = "enabled" else: status = "disabled" return {"status": status} def _write_config_dict(dct): conf_file = file(config_file_path, "w") json.dump(dct, conf_file) conf_file.close() def _get_config_dict(): """Returns a dict containing the key/values in the config file. If the file doesn't exist, it is created, and an empty dict is returned. 
""" try: conf_file = file(config_file_path) config_dct = json.load(conf_file) conf_file.close() except IOError: # File doesn't exist config_dct = {} # Create the file _write_config_dict(config_dct) return config_dct @jsonify def get_config(self, arg_dict): """Return the value stored for the specified key, or None if no match.""" conf = _get_config_dict() params = arg_dict["params"] try: dct = json.loads(params) except Exception: dct = params key = dct["key"] ret = conf.get(key) if ret is None: # Can't jsonify None return "None" return ret @jsonify def set_config(self, arg_dict): """Write the specified key/value pair, overwriting any existing value.""" conf = _get_config_dict() params = arg_dict["params"] try: dct = json.loads(params) except Exception: dct = params key = dct["key"] val = dct["value"] if val is None: # Delete the key, if present conf.pop(key, None) else: conf.update({key: val}) _write_config_dict(conf) def iptables_config(session, args): # command should be either save or restore logging.debug("iptables_config:enter") logging.debug("iptables_config: args=%s", args) cmd_args = pluginlib.exists(args, 'cmd_args') logging.debug("iptables_config: cmd_args=%s", cmd_args) process_input = pluginlib.optional(args, 'process_input') logging.debug("iptables_config: process_input=%s", process_input) cmd = json.loads(cmd_args) cmd = map(str, cmd) # either execute iptable-save or iptables-restore # command must be only one of these two # process_input must be used only with iptables-restore if len(cmd) > 0 and cmd[0] in ('iptables-save', 'iptables-restore', 'ip6tables-save', 'ip6tables-restore'): result = _run_command(cmd, process_input) ret_str = json.dumps(dict(out=result, err='')) logging.debug("iptables_config:exit") return ret_str # else don't do anything and return an error else: raise pluginlib.PluginError("Invalid iptables command") def _ovs_add_patch_port(args): bridge_name = pluginlib.exists(args, 'bridge_name') port_name = pluginlib.exists(args, 
'port_name') peer_port_name = pluginlib.exists(args, 'peer_port_name') cmd_args = ['ovs-vsctl', '--', '--if-exists', 'del-port', port_name, '--', 'add-port', bridge_name, port_name, '--', 'set', 'interface', port_name, 'type=patch', 'options:peer=%s' % peer_port_name] return _run_command(cmd_args) def _ovs_del_port(args): bridge_name = pluginlib.exists(args, 'bridge_name') port_name = pluginlib.exists(args, 'port_name') cmd_args = ['ovs-vsctl', '--', '--if-exists', 'del-port', bridge_name, port_name] return _run_command(cmd_args) def _ovs_del_br(args): bridge_name = pluginlib.exists(args, 'bridge_name') cmd_args = ['ovs-vsctl', '--', '--if-exists', 'del-br', bridge_name] return _run_command(cmd_args) def _ovs_set_if_external_id(args): interface = pluginlib.exists(args, 'interface') extneral_id = pluginlib.exists(args, 'extneral_id') value = pluginlib.exists(args, 'value') cmd_args = ['ovs-vsctl', 'set', 'Interface', interface, 'external-ids:%s=%s' % (extneral_id, value)] return _run_command(cmd_args) def _ovs_add_port(args): bridge_name = pluginlib.exists(args, 'bridge_name') port_name = pluginlib.exists(args, 'port_name') cmd_args = ['ovs-vsctl', '--', '--if-exists', 'del-port', port_name, '--', 'add-port', bridge_name, port_name] return _run_command(cmd_args) def _ovs_create_port(args): bridge = pluginlib.exists(args, 'bridge') port = pluginlib.exists(args, 'port') iface_id = pluginlib.exists(args, 'iface-id') mac = pluginlib.exists(args, 'mac') status = pluginlib.exists(args, 'status') cmd_args = ['ovs-vsctl', '--', '--if-exists', 'del-port', port, '--', 'add-port', bridge, port, '--', 'set', 'Interface', port, 'external_ids:iface-id=%s' % iface_id, 'external_ids:iface-status=%s' % status, 'external_ids:attached-mac=%s' % mac, 'external_ids:xs-vif-uuid=%s' % iface_id] return _run_command(cmd_args) def _ip_link_get_dev(args): device_name = pluginlib.exists(args, 'device_name') cmd_args = ['ip', 'link', 'show', device_name] return _run_command(cmd_args) def 
_ip_link_del_dev(args): device_name = pluginlib.exists(args, 'device_name') cmd_args = ['ip', 'link', 'delete', device_name] return _run_command(cmd_args) def _ip_link_add_veth_pair(args): dev1_name = pluginlib.exists(args, 'dev1_name') dev2_name = pluginlib.exists(args, 'dev2_name') cmd_args = ['ip', 'link', 'add', dev1_name, 'type', 'veth', 'peer', 'name', dev2_name] return _run_command(cmd_args) def _ip_link_set_dev(args): device_name = pluginlib.exists(args, 'device_name') option = pluginlib.exists(args, 'option') cmd_args = ['ip', 'link', 'set', device_name, option] return _run_command(cmd_args) def _ip_link_set_promisc(args): device_name = pluginlib.exists(args, 'device_name') option = pluginlib.exists(args, 'option') cmd_args = ['ip', 'link', 'set', device_name, 'promisc', option] return _run_command(cmd_args) def _brctl_add_br(args): bridge_name = pluginlib.exists(args, 'bridge_name') cmd_args = ['brctl', 'addbr', bridge_name] return _run_command(cmd_args) def _brctl_del_br(args): bridge_name = pluginlib.exists(args, 'bridge_name') cmd_args = ['brctl', 'delbr', bridge_name] return _run_command(cmd_args) def _brctl_set_fd(args): bridge_name = pluginlib.exists(args, 'bridge_name') fd = pluginlib.exists(args, 'fd') cmd_args = ['brctl', 'setfd', bridge_name, fd] return _run_command(cmd_args) def _brctl_set_stp(args): bridge_name = pluginlib.exists(args, 'bridge_name') option = pluginlib.exists(args, 'option') cmd_args = ['brctl', 'stp', bridge_name, option] return _run_command(cmd_args) def _brctl_add_if(args): bridge_name = pluginlib.exists(args, 'bridge_name') if_name = pluginlib.exists(args, 'interface_name') cmd_args = ['brctl', 'addif', bridge_name, if_name] return _run_command(cmd_args) def _brctl_del_if(args): bridge_name = pluginlib.exists(args, 'bridge_name') if_name = pluginlib.exists(args, 'interface_name') cmd_args = ['brctl', 'delif', bridge_name, if_name] return _run_command(cmd_args) ALLOWED_NETWORK_CMDS = { # allowed cmds to config OVS bridge 
'ovs_add_patch_port': _ovs_add_patch_port, 'ovs_add_port': _ovs_add_port, 'ovs_create_port': _ovs_create_port, 'ovs_del_port': _ovs_del_port, 'ovs_del_br': _ovs_del_br, 'ovs_set_if_external_id': _ovs_set_if_external_id, 'ip_link_add_veth_pair': _ip_link_add_veth_pair, 'ip_link_del_dev': _ip_link_del_dev, 'ip_link_get_dev': _ip_link_get_dev, 'ip_link_set_dev': _ip_link_set_dev, 'ip_link_set_promisc': _ip_link_set_promisc, 'brctl_add_br': _brctl_add_br, 'brctl_add_if': _brctl_add_if, 'brctl_del_br': _brctl_del_br, 'brctl_del_if': _brctl_del_if, 'brctl_set_fd': _brctl_set_fd, 'brctl_set_stp': _brctl_set_stp } def network_config(session, args): """network config functions""" cmd = pluginlib.exists(args, 'cmd') if not isinstance(cmd, basestring): msg = "invalid command '%s'" % str(cmd) raise pluginlib.PluginError(msg) return if cmd not in ALLOWED_NETWORK_CMDS: msg = "Dom0 execution of '%s' is not permitted" % cmd raise pluginlib.PluginError(msg) return cmd_args = pluginlib.exists(args, 'args') return ALLOWED_NETWORK_CMDS[cmd](cmd_args) def _power_action(action, arg_dict): # Host must be disabled first host_uuid = arg_dict['host_uuid'] result = _run_command(["xe", "host-disable", "uuid=%s" % host_uuid]) if result: raise pluginlib.PluginError(result) # All running VMs must be shutdown result = _run_command(["xe", "vm-shutdown", "--multiple", "resident-on=%s" % host_uuid]) if result: raise pluginlib.PluginError(result) cmds = {"reboot": "host-reboot", "startup": "host-power-on", "shutdown": "host-shutdown"} result = _run_command(["xe", cmds[action], "uuid=%s" % host_uuid]) # Should be empty string if result: raise pluginlib.PluginError(result) return {"power_action": action} @jsonify def host_reboot(self, arg_dict): """Reboots the host.""" return _power_action("reboot", arg_dict) @jsonify def host_shutdown(self, arg_dict): """Reboots the host.""" return _power_action("shutdown", arg_dict) @jsonify def host_start(self, arg_dict): """Starts the host. 
Currently not feasible, since the host runs on the same machine as Xen. """ return _power_action("startup", arg_dict) @jsonify def host_join(self, arg_dict): """Join a remote host into a pool. The pool's master is the host where the plugin is called from. The following constraints apply: - The host must have no VMs running, except nova-compute, which will be shut down (and restarted upon pool-join) automatically, - The host must have no shared storage currently set up, - The host must have the same license of the master, - The host must have the same supplemental packs as the master. """ session = XenAPI.Session(arg_dict.get("url")) session.login_with_password(arg_dict.get("user"), arg_dict.get("password")) compute_ref = session.xenapi.VM.get_by_uuid(arg_dict.get('compute_uuid')) session.xenapi.VM.clean_shutdown(compute_ref) try: if arg_dict.get("force", "false") == "false": session.xenapi.pool.join(arg_dict.get("master_addr"), arg_dict.get("master_user"), arg_dict.get("master_pass")) else: session.xenapi.pool.join_force(arg_dict.get("master_addr"), arg_dict.get("master_user"), arg_dict.get("master_pass")) finally: _resume_compute(session, compute_ref, arg_dict.get("compute_uuid")) @jsonify def host_data(self, arg_dict): # Runs the commands on the xenstore host to return the current status # information. host_uuid = arg_dict['host_uuid'] resp = _run_command(["xe", "host-param-list", "uuid=%s" % host_uuid]) parsed_data = parse_response(resp) # We have the raw dict of values. Extract those that we need, # and convert the data types as needed. 
    ret_dict = cleanup(parsed_data)
    # Add any config settings
    config = _get_config_dict()
    ret_dict.update(config)
    return ret_dict


def parse_response(resp):
    data = {}
    for ln in resp.splitlines():
        if not ln:
            continue
        mtch = host_data_pattern.match(ln.strip())
        try:
            k, v = mtch.groups()
            data[k] = v
        except AttributeError:
            # Not a valid line; skip it
            continue
    return data


@jsonify
def host_uptime(self, arg_dict):
    """Returns the result of the uptime command on the xenhost."""
    return {"uptime": _run_command(['uptime'])}


def cleanup(dct):
    # Take the raw KV pairs returned and translate them into the
    # appropriate types, discarding any we don't need.
    def safe_int(val):
        # Integer values will either be string versions of numbers,
        # or empty strings. Convert the latter to nulls.
        try:
            return int(val)
        except ValueError:
            return None

    def strip_kv(ln):
        return [val.strip() for val in ln.split(":", 1)]

    out = {}
    # sbs = dct.get("supported-bootloaders", "")
    # out["host_supported-bootloaders"] = sbs.split("; ")
    # out["host_suspend-image-sr-uuid"] = dct.get("suspend-image-sr-uuid", "")
    # out["host_crash-dump-sr-uuid"] = dct.get("crash-dump-sr-uuid", "")
    # out["host_local-cache-sr"] = dct.get("local-cache-sr", "")
    out["enabled"] = dct.get("enabled", "true") == "true"
    omm = {}
    omm["total"] = safe_int(dct.get("memory-total", ""))
    omm["overhead"] = safe_int(dct.get("memory-overhead", ""))
    omm["free"] = safe_int(dct.get("memory-free", ""))
    omm["free-computed"] = safe_int(dct.get("memory-free-computed", ""))
    out["host_memory"] = omm
    # out["host_API-version"] = avv = {}
    # avv["vendor"] = dct.get("API-version-vendor", "")
    # avv["major"] = safe_int(dct.get("API-version-major", ""))
    # avv["minor"] = safe_int(dct.get("API-version-minor", ""))
    out["enabled"] = dct.get("enabled", True)
    out["host_uuid"] = dct.get("uuid", None)
    out["host_name-label"] = dct.get("name-label", "")
    out["host_name-description"] = dct.get("name-description", "")
    # out["host_host-metrics-live"] = dct.get(
    #     "host-metrics-live", "false") == "true"
    out["host_hostname"] = dct.get("hostname", "")
    out["host_ip_address"] = dct.get("address", "")
    oc = dct.get("other-config", "")
    ocd = {}
    if oc:
        for oc_fld in oc.split("; "):
            ock, ocv = strip_kv(oc_fld)
            ocd[ock] = ocv
    out["host_other-config"] = ocd
    capabilities = dct.get("capabilities", "")
    out["host_capabilities"] = capabilities.replace(";", "").split()
    # out["host_allowed-operations"] = dct.get(
    #     "allowed-operations", "").split("; ")
    # lsrv = dct.get("license-server", "")
    # out["host_license-server"] = ols = {}
    # if lsrv:
    #     for lspart in lsrv.split("; "):
    #         lsk, lsv = lspart.split(": ")
    #         if lsk == "port":
    #             ols[lsk] = safe_int(lsv)
    #         else:
    #             ols[lsk] = lsv
    # sv = dct.get("software-version", "")
    # out["host_software-version"] = osv = {}
    # if sv:
    #     for svln in sv.split("; "):
    #         svk, svv = strip_kv(svln)
    #         osv[svk] = svv
    cpuinf = dct.get("cpu_info", "")
    ocp = {}
    if cpuinf:
        for cpln in cpuinf.split("; "):
            cpk, cpv = strip_kv(cpln)
            if cpk in ("cpu_count", "family", "model", "stepping"):
                ocp[cpk] = safe_int(cpv)
            else:
                ocp[cpk] = cpv
    out["host_cpu_info"] = ocp
    # out["host_edition"] = dct.get("edition", "")
    # out["host_external-auth-service-name"] = dct.get(
    #     "external-auth-service-name", "")
    return out


def query_gc(session, sr_uuid, vdi_uuid):
    result = _run_command(["/opt/xensource/sm/cleanup.py",
                           "-q", "-u", sr_uuid])
    # Example output: "Currently running: True"
    return result[19:].strip() == "True"


def get_pci_device_details(session):
    """Returns a string that is a list of pci devices with details.

    This string is obtained by running the command lspci. With -vmm option,
    it dumps PCI device data in machine readable form. This verbose format
    display a sequence of records separated by a blank line. We will also
    use option "-n" to get vendor_id and device_id as numeric values and
    the "-k" option to get the kernel driver used if any.
    """
    return _run_command(["lspci", "-vmmnk"])


def get_pci_type(session, pci_device):
    """Returns the type of the PCI device (type-PCI, type-VF or type-PF).

    pci-device -- The address of the pci device
    """
    # We need to add the domain if it is missing
    if pci_device.count(':') == 1:
        pci_device = "0000:" + pci_device
    output = _run_command(["ls", "/sys/bus/pci/devices/" + pci_device + "/"])

    if "physfn" in output:
        return "type-VF"
    if "virtfn" in output:
        return "type-PF"
    return "type-PCI"


if __name__ == "__main__":
    # Support both serialized and non-serialized plugin approaches
    _, methodname = xmlrpclib.loads(sys.argv[1])
    if methodname in ['query_gc', 'get_pci_device_details', 'get_pci_type',
                      'network_config']:
        utils.register_plugin_calls(query_gc,
                                    get_pci_device_details,
                                    get_pci_type,
                                    network_config)

    XenAPIPlugin.dispatch(
        {"host_data": host_data,
         "set_host_enabled": set_host_enabled,
         "host_shutdown": host_shutdown,
         "host_reboot": host_reboot,
         "host_start": host_start,
         "host_join": host_join,
         "get_config": get_config,
         "set_config": set_config,
         "iptables_config": iptables_config,
         "host_uptime": host_uptime})

os-xenapi-0.3.1/os_xenapi/dom0/etc/xapi.d/plugins/ipxe.py

#!/usr/bin/env python

# Copyright (c) 2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# NOTE: XenServer still only supports Python 2.4 in its dom0 userspace
# which means the Nova xenapi plugins must use only Python 2.4 features

# TODO(sfinucan): Resolve all 'noqa' items once the above is no longer true

"""Inject network configuration into iPXE ISO for boot."""

import logging
import os
import shutil

import utils

# FIXME(sirp): should this use pluginlib from 5.6?
import dom0_pluginlib

dom0_pluginlib.configure_logging('ipxe')


ISOLINUX_CFG = """SAY iPXE ISO boot image
TIMEOUT 30
DEFAULT ipxe.krn
LABEL ipxe.krn
  KERNEL ipxe.krn
  INITRD netcfg.ipxe
"""

NETCFG_IPXE = """#!ipxe
:start
imgfree
ifclose net0
set net0/ip %(ip_address)s
set net0/netmask %(netmask)s
set net0/gateway %(gateway)s
set dns %(dns)s
ifopen net0
goto menu

:menu
chain %(boot_menu_url)s
goto boot

:boot
sanboot --no-describe --drive 0x80
"""


def _write_file(filename, data):
    # If the ISO was tampered with such that the destination is a symlink,
    # that could allow a malicious user to write to protected areas of the
    # dom0 filesystem. /HT to comstud for pointing this out.
    #
    # Short-term, checking that the destination is not a symlink should be
    # sufficient.
    #
    # Long-term, we probably want to perform all file manipulations within a
    # chroot jail to be extra safe.
    if os.path.islink(filename):
        raise RuntimeError('SECURITY: Cannot write to symlinked destination')

    logging.debug("Writing to file '%s'" % filename)
    f = open(filename, 'w')
    try:
        f.write(data)
    finally:
        f.close()


def _unbundle_iso(sr_path, filename, path):
    logging.debug("Unbundling ISO '%s'" % filename)
    read_only_path = utils.make_staging_area(sr_path)
    try:
        utils.run_command(['mount', '-o', 'loop', filename, read_only_path])
        try:
            shutil.copytree(read_only_path, path)
        finally:
            utils.run_command(['umount', read_only_path])
    finally:
        utils.cleanup_staging_area(read_only_path)


def _create_iso(mkisofs_cmd, filename, path):
    logging.debug("Creating ISO '%s'..." % filename)
    orig_dir = os.getcwd()
    os.chdir(path)
    try:
        utils.run_command([mkisofs_cmd, '-quiet', '-l', '-o', filename,
                           '-c', 'boot.cat', '-b', 'isolinux.bin',
                           '-no-emul-boot', '-boot-load-size', '4',
                           '-boot-info-table', '.'])
    finally:
        os.chdir(orig_dir)


def inject(session, sr_path, vdi_uuid, boot_menu_url, ip_address, netmask,
           gateway, dns, mkisofs_cmd):

    iso_filename = '%s.img' % os.path.join(sr_path, 'iso', vdi_uuid)

    # Create staging area so we have a unique path but remove it since
    # shutil.copytree will recreate it
    staging_path = utils.make_staging_area(sr_path)
    utils.cleanup_staging_area(staging_path)

    try:
        _unbundle_iso(sr_path, iso_filename, staging_path)

        # Write Configs
        _write_file(os.path.join(staging_path, 'netcfg.ipxe'),
                    NETCFG_IPXE % {"ip_address": ip_address,
                                   "netmask": netmask,
                                   "gateway": gateway,
                                   "dns": dns,
                                   "boot_menu_url": boot_menu_url})
        _write_file(os.path.join(staging_path, 'isolinux.cfg'),
                    ISOLINUX_CFG)

        _create_iso(mkisofs_cmd, iso_filename, staging_path)
    finally:
        utils.cleanup_staging_area(staging_path)


if __name__ == "__main__":
    utils.register_plugin_calls(inject)

os-xenapi-0.3.1/os_xenapi/dom0/etc/xapi.d/plugins/xenstore.py

#!/usr/bin/env python

# Copyright (c) 2010 Citrix Systems, Inc.
# Copyright 2010 OpenStack Foundation
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# NOTE: XenServer still only supports Python 2.4 in its dom0 userspace
# which means the Nova xenapi plugins must use only Python 2.4 features

#
# XenAPI plugin for reading/writing information to xenstore
#

try:
    import json
except ImportError:
    import simplejson as json

import utils  # noqa

import XenAPIPlugin  # noqa

import dom0_pluginlib as pluginlib  # noqa
pluginlib.configure_logging("xenstore")


class XenstoreError(pluginlib.PluginError):
    """Errors that occur when calling xenstore-* through subprocesses."""

    def __init__(self, cmd, return_code, stderr, stdout):
        msg = "cmd: %s; returncode: %d; stderr: %s; stdout: %s"
        msg = msg % (cmd, return_code, stderr, stdout)
        self.cmd = cmd
        self.return_code = return_code
        self.stderr = stderr
        self.stdout = stdout
        pluginlib.PluginError.__init__(self, msg)


def jsonify(fnc):
    def wrapper(*args, **kwargs):
        ret = fnc(*args, **kwargs)
        try:
            json.loads(ret)
        except ValueError:
            # Value should already be JSON-encoded, but some operations
            # may write raw string values; this will catch those and
            # properly encode them.
            ret = json.dumps(ret)
        return ret
    return wrapper


def record_exists(arg_dict):
    """Returns whether or not the given record exists.

    The record path is determined from the given path and dom_id in the
    arg_dict.
    """
    cmd = ["xenstore-exists", "/local/domain/%(dom_id)s/%(path)s" % arg_dict]
    try:
        _run_command(cmd)
        return True
    except XenstoreError, e:  # noqa
        if e.stderr == '':
            # if stderr was empty, this just means the path did not exist
            return False
        # otherwise there was a real problem
        raise


@jsonify
def read_record(self, arg_dict):
    """Returns the value stored at the given path for the given dom_id.

    These must be encoded as key/value pairs in arg_dict. You can
    optionally include a key 'ignore_missing_path'; if this is present
    and boolean True, attempting to read a non-existent path will return
    the string 'None' instead of raising an exception.
    """
    cmd = ["xenstore-read", "/local/domain/%(dom_id)s/%(path)s" % arg_dict]
    try:
        result = _run_command(cmd)
        return result.strip()
    except XenstoreError, e:  # noqa
        if not arg_dict.get("ignore_missing_path", False):
            raise
        if not record_exists(arg_dict):
            return "None"
        # Just try again in case the agent write won the race against
        # the record_exists check. If this fails again, it will likely raise
        # an equally meaningful XenstoreError as the one we just caught
        result = _run_command(cmd)
        return result.strip()


@jsonify
def write_record(self, arg_dict):
    """Writes to xenstore at the specified path.

    If there is information already stored in that location, it is
    overwritten. As in read_record, the dom_id and path must be specified
    in the arg_dict; additionally, you must specify a 'value' key, whose
    value must be a string. Typically, you can json-ify more complex values
    and store the json output.
    """
    cmd = ["xenstore-write",
           "/local/domain/%(dom_id)s/%(path)s" % arg_dict,
           arg_dict["value"]]
    _run_command(cmd)
    return arg_dict["value"]


@jsonify
def list_records(self, arg_dict):
    """Returns all stored data at or below the given path for the given dom_id.

    The data is returned as a json-ified dict, with the path as the key
    and the stored value as the value. If the path doesn't exist, an empty
    dict is returned.
    """
    dirpath = "/local/domain/%(dom_id)s/%(path)s" % arg_dict
    cmd = ["xenstore-ls", dirpath.rstrip("/")]
    try:
        recs = _run_command(cmd)
    except XenstoreError, e:  # noqa
        if not record_exists(arg_dict):
            return {}
        # Just try again in case the path was created in between
        # the "ls" and the existence check. If this fails again, it will
        # likely raise an equally meaningful XenstoreError
        recs = _run_command(cmd)

    base_path = arg_dict["path"]
    paths = _paths_from_ls(recs)
    ret = {}
    for path in paths:
        if base_path:
            arg_dict["path"] = "%s/%s" % (base_path, path)
        else:
            arg_dict["path"] = path
        rec = read_record(self, arg_dict)
        try:
            val = json.loads(rec)
        except ValueError:
            val = rec
        ret[path] = val
    return ret


@jsonify
def delete_record(self, arg_dict):
    """Just like it sounds: it removes the record for the specified VM and
    the specified path from xenstore.
    """
    cmd = ["xenstore-rm", "/local/domain/%(dom_id)s/%(path)s" % arg_dict]
    try:
        return _run_command(cmd)
    except XenstoreError, e:  # noqa
        if 'could not remove path' in e.stderr:
            # Entry already gone. We're good to go.
            return ''
        raise


def _paths_from_ls(recs):
    """The xenstore-ls command returns a listing that isn't terribly
    useful. This method cleans that up into a dict with each path as the
    key, and the associated string as the value.
    """
    last_nm = ""
    level = 0
    path = []
    ret = []
    for ln in recs.splitlines():
        nm, val = ln.rstrip().split(" = ")
        barename = nm.lstrip()
        this_level = len(nm) - len(barename)
        if this_level == 0:
            ret.append(barename)
            level = 0
            path = []
        elif this_level == level:
            # child of same parent
            ret.append("%s/%s" % ("/".join(path), barename))
        elif this_level > level:
            path.append(last_nm)
            ret.append("%s/%s" % ("/".join(path), barename))
            level = this_level
        elif this_level < level:
            path = path[:this_level]
            ret.append("%s/%s" % ("/".join(path), barename))
            level = this_level
        last_nm = barename
    return ret


def _run_command(cmd):
    """Wrap utils.run_command to raise XenstoreError on failure"""
    try:
        return utils.run_command(cmd)
    except utils.SubprocessException, e:  # noqa
        raise XenstoreError(e.cmdline, e.ret, e.err, e.out)


if __name__ == "__main__":
    XenAPIPlugin.dispatch(
        {"read_record": read_record,
         "write_record": write_record,
         "list_records": list_records,
         "delete_record": delete_record})
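The `_paths_from_ls` helper above walks the indentation of `xenstore-ls` output (one leading space per nesting level, `name = "value"` per line) to rebuild full paths. A self-contained sketch of that algorithm (Python 3 for illustration; the `paths_from_ls` name and the sample listing are invented for this example):

```python
def paths_from_ls(recs):
    # Track the previous entry name, current depth, and the path components
    # leading to the current subtree, exactly as the plugin helper does.
    last_nm = ""
    level = 0
    path = []
    ret = []
    for ln in recs.splitlines():
        nm, _val = ln.rstrip().split(" = ")
        barename = nm.lstrip()
        this_level = len(nm) - len(barename)  # leading spaces = depth
        if this_level == 0:
            ret.append(barename)
            level = 0
            path = []
        elif this_level == level:
            # sibling: child of the same parent
            ret.append("%s/%s" % ("/".join(path), barename))
        elif this_level > level:
            # descend: the previous entry becomes part of the path
            path.append(last_nm)
            ret.append("%s/%s" % ("/".join(path), barename))
            level = this_level
        else:
            # ascend: truncate the path back to this depth
            path = path[:this_level]
            ret.append("%s/%s" % ("/".join(path), barename))
            level = this_level
        last_nm = barename
    return ret

# A hand-made listing in xenstore-ls style.
listing = 'vm = ""\n name = "instance-1"\n net = ""\n  ip = "10.0.0.2"'
```

Running it over the sample listing yields `["vm", "vm/name", "vm/net", "vm/net/ip"]`, which is then used as the set of keys for the per-path `xenstore-read` calls.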
os-xenapi-0.3.1/os_xenapi/dom0/etc/xapi.d/plugins/netwrap.py

#!/usr/bin/env python

# Copyright 2012 OpenStack Foundation
# Copyright 2012 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

#
# XenAPI plugin for executing network commands (ovs, iptables, etc) on dom0
#

import gettext
gettext.install('neutron', unicode=1)
try:
    import json
except ImportError:
    import simplejson as json
import subprocess

import XenAPIPlugin


ALLOWED_CMDS = [
    'ip',
    'ipset',
    'iptables-save',
    'iptables-restore',
    'ip6tables-save',
    'ip6tables-restore',
    'sysctl',
    # NOTE(yamamoto): of_interface=native doesn't use ovs-ofctl
    'ovs-ofctl',
    'ovs-vsctl',
    'ovsdb-client',
    'conntrack',
]


class PluginError(Exception):
    """Base Exception class for all plugin errors."""

    def __init__(self, *args):
        Exception.__init__(self, *args)


def _run_command(cmd, cmd_input):
    """Abstracts out the basics of issuing system commands.

    Runs the command with cmd_input fed to stdin, and returns a
    (returncode, stdout, stderr) tuple.
    """
    pipe = subprocess.PIPE
    proc = subprocess.Popen(cmd, shell=False, stdin=pipe, stdout=pipe,
                            stderr=pipe, close_fds=True)
    (out, err) = proc.communicate(cmd_input)
    return proc.returncode, out, err


def run_command(session, args):
    cmd = json.loads(args.get('cmd'))
    if cmd and cmd[0] not in ALLOWED_CMDS:
        msg = _("Dom0 execution of '%s' is not permitted") % cmd[0]
        raise PluginError(msg)
    returncode, out, err = _run_command(
        cmd, json.loads(args.get('cmd_input', 'null')))
    if not err:
        err = ""
    if not out:
        out = ""
    # This runs in Dom0, will return to neutron-ovs-agent in compute node
    result = {'returncode': returncode,
              'out': out,
              'err': err}
    return json.dumps(result)


if __name__ == "__main__":
    XenAPIPlugin.dispatch({"run_command": run_command})

os-xenapi-0.3.1/os_xenapi/dom0/etc/xapi.d/plugins/config_file.py

#!/usr/bin/env python

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
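The `run_command` entry point in netwrap.py above takes the command line and its stdin as JSON strings, vets `cmd[0]` against `ALLOWED_CMDS`, and returns a JSON-encoded result dict. A self-contained sketch of that request/response shape (Python 3 for illustration; the subprocess call is replaced by an injected `runner` so the sketch has no side effects, and there is no session argument — both deviations from the plugin):

```python
import json

ALLOWED = ['ip', 'ovs-vsctl', 'iptables-save']

def run_command(args, runner):
    # The caller JSON-encodes the argv list; only whitelisted argv[0] may run.
    cmd = json.loads(args.get('cmd'))
    if cmd and cmd[0] not in ALLOWED:
        raise ValueError("Dom0 execution of '%s' is not permitted" % cmd[0])
    # cmd_input defaults to JSON null, i.e. no stdin.
    returncode, out, err = runner(cmd, json.loads(args.get('cmd_input', 'null')))
    # Normalize empty/None streams so the agent side always sees strings.
    return json.dumps({'returncode': returncode,
                       'out': out or "",
                       'err': err or ""})
```

JSON on both sides keeps the dom0/agent boundary a plain string channel, which is all the XenAPI plugin transport offers.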
# NOTE: XenServer still only supports Python 2.4 in its dom0 userspace
# which means the Nova xenapi plugins must use only Python 2.4 features

import XenAPIPlugin


def get_val(session, args):
    config_key = args['key']
    config_file = open('/etc/xapi.conf')
    try:
        for line in config_file:
            split = line.split('=')
            if (len(split) == 2) and (split[0].strip() == config_key):
                return split[1].strip()
        return ""
    finally:
        config_file.close()


if __name__ == '__main__':
    XenAPIPlugin.dispatch({"get_val": get_val})

os-xenapi-0.3.1/os_xenapi/dom0/etc/xapi.d/plugins/glance.py

#!/usr/bin/env python

# Copyright (c) 2012 OpenStack Foundation
# Copyright (c) 2010 Citrix Systems, Inc.
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
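The `get_val` lookup in config_file.py above scans `/etc/xapi.conf` for a `key = value` line. A self-contained sketch of that parsing (Python 3 for illustration; it reads from an iterable of lines instead of the real file so it can run anywhere):

```python
def get_val(lines, config_key):
    # Accept only lines that split into exactly "key = value"; anything
    # else (comments, blanks, malformed lines) is skipped.
    for line in lines:
        split = line.split('=')
        if len(split) == 2 and split[0].strip() == config_key:
            return split[1].strip()
    # Mirror the plugin's behaviour: missing keys yield an empty string.
    return ""
```

Note that, like the plugin, this returns the first match and tolerates whitespace on either side of the `=`.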
# NOTE: XenServer still only supports Python 2.4 in its dom0 userspace
# which means the Nova xenapi plugins must use only Python 2.4 features

# TODO(sfinucan): Resolve all 'noqa' items once the above is no longer true

"""Handle the uploading and downloading of images via Glance."""

try:
    import httplib
except ImportError:
    from six.moves import http_client as httplib
try:
    import json
except ImportError:
    import simplejson as json
import md5  # noqa
import socket
import urllib2
from urlparse import urlparse

import dom0_pluginlib
import utils

import XenAPI


dom0_pluginlib.configure_logging('glance')
logging = dom0_pluginlib.logging
PluginError = dom0_pluginlib.PluginError

SOCKET_TIMEOUT_SECONDS = 90


class RetryableError(Exception):
    pass


def _create_connection(scheme, netloc):
    if scheme == 'https':
        conn = httplib.HTTPSConnection(netloc)
    else:
        conn = httplib.HTTPConnection(netloc)
    conn.connect()
    return conn


def _download_tarball_and_verify(request, staging_path):
    # NOTE(johngarbutt) By default, there is no timeout.
    # To ensure the script does not hang if we lose connection
    # to glance, we add this socket timeout.
    # This is here so there is no chance the timeout has
    # been adjusted by other library calls.
    socket.setdefaulttimeout(SOCKET_TIMEOUT_SECONDS)

    try:
        response = urllib2.urlopen(request)
    except urllib2.HTTPError, error:  # noqa
        raise RetryableError(error)
    except urllib2.URLError, error:  # noqa
        raise RetryableError(error)
    except httplib.HTTPException, error:  # noqa
        # httplib.HTTPException and derivatives (BadStatusLine in particular)
        # don't have a useful __repr__ or __str__
        raise RetryableError('%s: %s' % (error.__class__.__name__, error))

    url = request.get_full_url()
    logging.info("Reading image data from %s" % url)

    callback_data = {'bytes_read': 0}
    checksum = md5.new()

    def update_md5(chunk):
        callback_data['bytes_read'] += len(chunk)
        checksum.update(chunk)

    try:
        try:
            utils.extract_tarball(response, staging_path, callback=update_md5)
        except Exception, error:  # noqa
            raise RetryableError(error)
    finally:
        bytes_read = callback_data['bytes_read']
        logging.info("Read %d bytes from %s", bytes_read, url)

    # Use ETag if available, otherwise content-md5(v2) or
    # X-Image-Meta-Checksum(v1)
    etag = response.info().getheader('etag', None)
    if etag is None:
        etag = response.info().getheader('content-md5', None)
    if etag is None:
        etag = response.info().getheader('x-image-meta-checksum', None)

    # Verify checksum using ETag
    checksum = checksum.hexdigest()

    if etag is None:
        msg = "No ETag found for comparison to checksum %(checksum)s"
        logging.info(msg % {'checksum': checksum})
    elif checksum != etag:
        msg = 'ETag %(etag)s does not match computed md5sum %(checksum)s'
        raise RetryableError(msg % {'checksum': checksum, 'etag': etag})
    else:
        msg = "Verified image checksum %(checksum)s"
        logging.info(msg % {'checksum': checksum})


def _download_tarball_v1(sr_path, staging_path, image_id,
                         glance_host, glance_port, glance_use_ssl,
                         extra_headers):
    # Download the tarball image from Glance v1 and extract it into the
    # staging area. Retry if there is any failure.
    if glance_use_ssl:
        scheme = 'https'
    else:
        scheme = 'http'

    endpoint = "%(scheme)s://%(glance_host)s:%(glance_port)d" % {
        'scheme': scheme,
        'glance_host': glance_host,
        'glance_port': glance_port}
    _download_tarball_by_url_v1(sr_path, staging_path, image_id,
                                endpoint, extra_headers)


def _download_tarball_by_url_v1(sr_path, staging_path, image_id,
                                glance_endpoint, extra_headers):
    # Download the tarball image from Glance v1 and extract it into the
    # staging area.
    url = "%(glance_endpoint)s/v1/images/%(image_id)s" % {
        'glance_endpoint': glance_endpoint,
        'image_id': image_id}
    logging.info("Downloading %s with glance v1 api" % url)

    request = urllib2.Request(url, headers=extra_headers)
    try:
        _download_tarball_and_verify(request, staging_path)
    except Exception:
        logging.exception('Failed to retrieve %(url)s' % {'url': url})
        raise


def _download_tarball_by_url_v2(sr_path, staging_path, image_id,
                                glance_endpoint, extra_headers):
    # Download the tarball image from Glance v2 and extract it into the
    # staging area.
    url = "%(glance_endpoint)s/v2/images/%(image_id)s/file" % {
        'glance_endpoint': glance_endpoint,
        'image_id': image_id}
    logging.debug("Downloading %s with glance v2 api" % url)

    request = urllib2.Request(url, headers=extra_headers)
    try:
        _download_tarball_and_verify(request, staging_path)
    except Exception:
        logging.exception('Failed to retrieve %(url)s' % {'url': url})
        raise


def _upload_tarball_v1(staging_path, image_id, glance_host, glance_port,
                       glance_use_ssl, extra_headers, properties):
    if glance_use_ssl:
        scheme = 'https'
    else:
        scheme = 'http'

    url = '%s://%s:%s' % (scheme, glance_host, glance_port)
    _upload_tarball_by_url_v1(staging_path, image_id, url,
                              extra_headers, properties)


def _upload_tarball_by_url_v1(staging_path, image_id, glance_endpoint,
                              extra_headers, properties):
    """Create a tarball of the image and then stream that into Glance v1

    Using chunked-transfer-encoded HTTP.
    """
    # NOTE(johngarbutt) By default, there is no timeout.
    # To ensure the script does not hang if we lose connection
    # to glance, we add this socket timeout.
    # This is here so there is no chance the timeout has
    # been adjusted by other library calls.
    socket.setdefaulttimeout(SOCKET_TIMEOUT_SECONDS)

    logging.debug("Uploading image %s with glance v1 api" % image_id)

    url = "%(glance_endpoint)s/v1/images/%(image_id)s" % {
        'glance_endpoint': glance_endpoint,
        'image_id': image_id}
    logging.info("Writing image data to %s" % url)

    # NOTE(sdague): this is python 2.4, which means urlparse returns a
    # tuple, not a named tuple.
    # 0 - scheme
    # 1 - host:port (aka netloc)
    # 2 - path
    parts = urlparse(url)

    try:
        conn = _create_connection(parts[0], parts[1])
    except Exception, error:  # noqa
        logging.exception('Failed to connect %(url)s' % {'url': url})
        raise RetryableError(error)

    try:
        validate_image_status_before_upload_v1(conn, url, extra_headers)

        try:
            # NOTE(sirp): httplib under python2.4 won't accept
            # a file-like object to request
            conn.putrequest('PUT', parts[2])

            # NOTE(sirp): There is some confusion around OVF. Here's a summary
            # of where we currently stand:
            #   1. OVF as a container format is misnamed. We really should be
            #      using OVA since that is the name for the container format;
            #      OVF is the standard applied to the manifest file contained
            #      within.
            #   2. We're currently uploading a vanilla tarball. In order to be
            #      OVF/OVA compliant, we'll need to embed a minimal OVF
            #      manifest as the first file.

            # NOTE(dprince): In order to preserve existing Glance properties
            # we set X-Glance-Registry-Purge-Props on this request.
headers = { 'content-type': 'application/octet-stream', 'transfer-encoding': 'chunked', 'x-image-meta-is-public': 'False', 'x-image-meta-status': 'queued', 'x-image-meta-disk-format': 'vhd', 'x-image-meta-container-format': 'ovf', 'x-glance-registry-purge-props': 'False'} headers.update(**extra_headers) for key, value in properties.items(): header_key = "x-image-meta-property-%s" % key.replace('_', '-') headers[header_key] = str(value) for header, value in headers.items(): conn.putheader(header, value) conn.endheaders() except Exception, error: # noqa logging.exception('Failed to upload %(url)s' % {'url': url}) raise RetryableError(error) callback_data = {'bytes_written': 0} def send_chunked_transfer_encoded(chunk): chunk_len = len(chunk) callback_data['bytes_written'] += chunk_len try: conn.send("%x\r\n%s\r\n" % (chunk_len, chunk)) except Exception, error: # noqa logging.exception('Failed to upload when sending chunks') raise RetryableError(error) compression_level = properties.get('xenapi_image_compression_level') utils.create_tarball( None, staging_path, callback=send_chunked_transfer_encoded, compression_level=compression_level) send_chunked_transfer_encoded('') # Chunked-Transfer terminator bytes_written = callback_data['bytes_written'] logging.info("Wrote %d bytes to %s" % (bytes_written, url)) resp = conn.getresponse() if resp.status == httplib.OK: return logging.error("Unexpected response while writing image data to %s: " "Response Status: %i, Response body: %s" % (url, resp.status, resp.read())) check_resp_status_and_retry(resp, image_id, url) finally: conn.close() def _update_image_meta_v2(conn, extra_headers, properties, patch_path): # NOTE(sirp): There is some confusion around OVF. Here's a summary # of where we currently stand: # 1. OVF as a container format is misnamed. We really should be # using OVA since that is the name for the container format; # OVF is the standard applied to the manifest file contained # within. # 2. 
    #    We're currently uploading a vanilla tarball. In order to be
    #    OVF/OVA compliant, we'll need to embed a minimal OVF
    #    manifest as the first file.
    body = [
        {"path": "/container_format", "value": "ovf", "op": "add"},
        {"path": "/disk_format", "value": "vhd", "op": "add"},
        {"path": "/visibility", "value": "private", "op": "add"}]

    headers = {'Content-Type': 'application/openstack-images-v2.1-json-patch'}
    headers.update(**extra_headers)

    for key, value in properties.items():
        prop = {"path": "/%s" % key.replace('_', '-'),
                "value": str(value),
                "op": "add"}
        body.append(prop)

    body = json.dumps(body)
    conn.request('PATCH', patch_path, body=body, headers=headers)

    resp = conn.getresponse()
    resp.read()

    if resp.status == httplib.OK:
        return

    logging.error("Image meta was not updated. Status: %s, Reason: %s" % (
        resp.status, resp.reason))


def _upload_tarball_by_url_v2(staging_path, image_id, glance_endpoint,
                              extra_headers, properties):
    """Create a tarball of the image and then stream that into Glance v2

    Using chunked-transfer-encoded HTTP.
    """
    # NOTE(johngarbutt) By default, there is no timeout.
    # To ensure the script does not hang if we lose connection
    # to glance, we add this socket timeout.
    # This is here so there is no chance the timeout has
    # been adjusted by other library calls.
    socket.setdefaulttimeout(SOCKET_TIMEOUT_SECONDS)
    logging.debug("Uploading image %s with glance v2 api" % image_id)

    url = "%(glance_endpoint)s/v2/images/%(image_id)s/file" % {
        'glance_endpoint': glance_endpoint,
        'image_id': image_id}

    # NOTE(sdague): this is python 2.4, which means urlparse returns a
    # tuple, not a named tuple.
# 0 - scheme # 1 - host:port (aka netloc) # 2 - path parts = urlparse(url) try: conn = _create_connection(parts[0], parts[1]) except Exception, error: # noqa raise RetryableError(error) try: mgt_url = "%(glance_endpoint)s/v2/images/%(image_id)s" % { 'glance_endpoint': glance_endpoint, 'image_id': image_id} mgt_parts = urlparse(mgt_url) mgt_path = mgt_parts[2] _update_image_meta_v2(conn, extra_headers, properties, mgt_path) validate_image_status_before_upload_v2(conn, url, extra_headers, mgt_path) try: conn.connect() # NOTE(sirp): httplib under python2.4 won't accept # a file-like object to request conn.putrequest('PUT', parts[2]) headers = { 'content-type': 'application/octet-stream', 'transfer-encoding': 'chunked'} headers.update(**extra_headers) for header, value in headers.items(): conn.putheader(header, value) conn.endheaders() except Exception, error: # noqa logging.exception('Failed to upload %(url)s' % {'url': url}) raise RetryableError(error) callback_data = {'bytes_written': 0} def send_chunked_transfer_encoded(chunk): chunk_len = len(chunk) callback_data['bytes_written'] += chunk_len try: conn.send("%x\r\n%s\r\n" % (chunk_len, chunk)) except Exception, error: # noqa logging.exception('Failed to upload when sending chunks') raise RetryableError(error) compression_level = properties.get('xenapi_image_compression_level') utils.create_tarball( None, staging_path, callback=send_chunked_transfer_encoded, compression_level=compression_level) send_chunked_transfer_encoded('') # Chunked-Transfer terminator bytes_written = callback_data['bytes_written'] logging.info("Wrote %d bytes to %s" % (bytes_written, url)) resp = conn.getresponse() if resp.status == httplib.NO_CONTENT: return logging.error("Unexpected response while writing image data to %s: " "Response Status: %i, Response body: %s" % (url, resp.status, resp.read())) check_resp_status_and_retry(resp, image_id, url) finally: conn.close() def check_resp_status_and_retry(resp, image_id, url): # Note(Jesse): 
    # This branch sorts errors into those that are permanent,
    # those that are ephemeral, and those that are unexpected.
    if resp.status in (httplib.BAD_REQUEST,                      # 400
                       httplib.UNAUTHORIZED,                     # 401
                       httplib.PAYMENT_REQUIRED,                 # 402
                       httplib.FORBIDDEN,                        # 403
                       httplib.METHOD_NOT_ALLOWED,               # 405
                       httplib.NOT_ACCEPTABLE,                   # 406
                       httplib.PROXY_AUTHENTICATION_REQUIRED,    # 407
                       httplib.CONFLICT,                         # 409
                       httplib.GONE,                             # 410
                       httplib.LENGTH_REQUIRED,                  # 411
                       httplib.PRECONDITION_FAILED,              # 412
                       httplib.REQUEST_ENTITY_TOO_LARGE,         # 413
                       httplib.REQUEST_URI_TOO_LONG,             # 414
                       httplib.UNSUPPORTED_MEDIA_TYPE,           # 415
                       httplib.REQUESTED_RANGE_NOT_SATISFIABLE,  # 416
                       httplib.EXPECTATION_FAILED,               # 417
                       httplib.UNPROCESSABLE_ENTITY,             # 422
                       httplib.LOCKED,                           # 423
                       httplib.FAILED_DEPENDENCY,                # 424
                       httplib.UPGRADE_REQUIRED,                 # 426
                       httplib.NOT_IMPLEMENTED,                  # 501
                       httplib.HTTP_VERSION_NOT_SUPPORTED,       # 505
                       httplib.NOT_EXTENDED,                     # 510
                       ):
        raise PluginError("Got Permanent Error response [%i] while "
                          "uploading image [%s] to glance [%s]"
                          % (resp.status, image_id, url))
    # Nova service would process the exception
    elif resp.status == httplib.NOT_FOUND:  # 404
        exc = XenAPI.Failure('ImageNotFound')
        raise exc
    # NOTE(nikhil): Only a sub-set of the 500 errors are retryable. We
    # optimistically retry on 500 errors below.
    elif resp.status in (httplib.REQUEST_TIMEOUT,        # 408
                         httplib.INTERNAL_SERVER_ERROR,  # 500
                         httplib.BAD_GATEWAY,            # 502
                         httplib.SERVICE_UNAVAILABLE,    # 503
                         httplib.GATEWAY_TIMEOUT,        # 504
                         httplib.INSUFFICIENT_STORAGE,   # 507
                         ):
        raise RetryableError("Got Ephemeral Error response [%i] while "
                             "uploading image [%s] to glance [%s]"
                             % (resp.status, image_id, url))
    else:
        # Note(Jesse): Assume unexpected errors are retryable. If you are
        # seeing this error message, the error should probably be added
        # to either the ephemeral or permanent error list.
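The permanent/ephemeral/unexpected sorting just described can be summarized table-style. This Python 3 sketch is a hypothetical condensation for illustration only; the plugin itself raises `PluginError`, `XenAPI.Failure('ImageNotFound')` or `RetryableError` rather than returning a label:

```python
from http import client as httplib  # Python 3 home of the status constants

# Status sets mirror the if/elif chain above.
PERMANENT = {400, 401, 402, 403, 405, 406, 407, 409, 410, 411, 412,
             413, 414, 415, 416, 417, 422, 423, 424, 426, 501, 505, 510}
EPHEMERAL = {408, 500, 502, 503, 504, 507}

def classify(status):
    if status == httplib.NOT_FOUND:  # 404 is surfaced to Nova specially
        return 'not_found'
    if status in PERMANENT:
        return 'permanent'
    if status in EPHEMERAL:
        return 'ephemeral'
    return 'unexpected'              # retried optimistically

assert classify(403) == 'permanent'
assert classify(503) == 'ephemeral'
```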
raise RetryableError("Got Unexpected Error response [%i] while " "uploading image [%s] to glance [%s]" % (resp.status, image_id, url)) def validate_image_status_before_upload_v1(conn, url, extra_headers): try: parts = urlparse(url) path = parts[2] image_id = path.split('/')[-1] # NOTE(nikhil): Attempt to determine if the Image has a status # of 'queued'. Because data will continued to be sent to Glance # until it has a chance to check the Image state, discover that # it is not 'active' and send back a 409. Hence, the data will be # unnecessarily buffered by Glance. This wastes time and bandwidth. # LP bug #1202785 conn.request('HEAD', path, headers=extra_headers) head_resp = conn.getresponse() # NOTE(nikhil): read the response to re-use the conn object. body_data = head_resp.read(8192) if len(body_data) > 8: err_msg = ('Cannot upload data for image %(image_id)s as the ' 'HEAD call had more than 8192 bytes of data in ' 'the response body.' % {'image_id': image_id}) raise PluginError("Got Permanent Error while uploading image " "[%s] to glance [%s]. " "Message: %s" % (image_id, url, err_msg)) else: head_resp.read() except Exception, error: # noqa logging.exception('Failed to HEAD the image %(image_id)s while ' 'checking image status before attempting to ' 'upload %(url)s' % {'image_id': image_id, 'url': url}) raise RetryableError(error) if head_resp.status != httplib.OK: logging.error("Unexpected response while doing a HEAD call " "to image %s , url = %s , Response Status: " "%i" % (image_id, url, head_resp.status)) check_resp_status_and_retry(head_resp, image_id, url) else: image_status = head_resp.getheader('x-image-meta-status') if image_status not in ('queued', ): err_msg = ('Cannot upload data for image %(image_id)s as the ' 'image status is %(image_status)s' % {'image_id': image_id, 'image_status': image_status}) logging.exception(err_msg) raise PluginError("Got Permanent Error while uploading image " "[%s] to glance [%s]. 
" "Message: %s" % (image_id, url, err_msg)) else: logging.info('Found image %(image_id)s in status ' '%(image_status)s. Attempting to ' 'upload.' % {'image_id': image_id, 'image_status': image_status}) def validate_image_status_before_upload_v2(conn, url, extra_headers, get_path): try: parts = urlparse(url) path = parts[2] image_id = path.split('/')[-2] # NOTE(nikhil): Attempt to determine if the Image has a status # of 'queued'. Because data will continued to be sent to Glance # until it has a chance to check the Image state, discover that # it is not 'active' and send back a 409. Hence, the data will be # unnecessarily buffered by Glance. This wastes time and bandwidth. # LP bug #1202785 conn.request('GET', get_path, headers=extra_headers) get_resp = conn.getresponse() except Exception, error: # noqa logging.exception('Failed to GET the image %(image_id)s while ' 'checking image status before attempting to ' 'upload %(url)s' % {'image_id': image_id, 'url': url}) raise RetryableError(error) if get_resp.status != httplib.OK: logging.error("Unexpected response while doing a GET call " "to image %s , url = %s , Response Status: " "%i" % (image_id, url, get_resp.status)) check_resp_status_and_retry(get_resp, image_id, url) else: body = json.loads(get_resp.read()) image_status = body['status'] if image_status not in ('queued', ): err_msg = ('Cannot upload data for image %(image_id)s as the ' 'image status is %(image_status)s' % {'image_id': image_id, 'image_status': image_status}) logging.exception(err_msg) raise PluginError("Got Permanent Error while uploading image " "[%s] to glance [%s]. " "Message: %s" % (image_id, url, err_msg)) else: logging.info('Found image %(image_id)s in status ' '%(image_status)s. Attempting to ' 'upload.' 
% {'image_id': image_id, 'image_status': image_status}) get_resp.read() def download_vhd2(session, image_id, endpoint, uuid_stack, sr_path, extra_headers, api_version=1): # Download an image from Glance v2, unbundle it, and then deposit the # VHDs into the storage repository. staging_path = utils.make_staging_area(sr_path) try: # Download tarball into staging area and extract it # TODO(mfedosin): remove this check when v1 is deprecated. if api_version == 1: _download_tarball_by_url_v1( sr_path, staging_path, image_id, endpoint, extra_headers) else: _download_tarball_by_url_v2( sr_path, staging_path, image_id, endpoint, extra_headers) # Move the VHDs from the staging area into the storage repository return utils.import_vhds(sr_path, staging_path, uuid_stack) finally: utils.cleanup_staging_area(staging_path) def upload_vhd2(session, vdi_uuids, image_id, endpoint, sr_path, extra_headers, properties, api_version=1): """Bundle the VHDs comprising an image and then stream them into Glance""" staging_path = utils.make_staging_area(sr_path) try: utils.prepare_staging_area(sr_path, staging_path, vdi_uuids) # TODO(mfedosin): remove this check when v1 is deprecated. if api_version == 1: _upload_tarball_by_url_v1(staging_path, image_id, endpoint, extra_headers, properties) else: _upload_tarball_by_url_v2(staging_path, image_id, endpoint, extra_headers, properties) finally: utils.cleanup_staging_area(staging_path) if __name__ == '__main__': utils.register_plugin_calls(download_vhd2, upload_vhd2) os-xenapi-0.3.1/os_xenapi/dom0/etc/xapi.d/plugins/agent.py0000664000175000017500000002324113160424533024462 0ustar jenkinsjenkins00000000000000#!/usr/bin/env python # Copyright (c) 2011 Citrix Systems, Inc. # Copyright 2011 OpenStack Foundation # Copyright 2011 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. 
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# NOTE: XenServer still only supports Python 2.4 in its dom0 userspace
# which means the Nova xenapi plugins must use only Python 2.4 features

# TODO(sfinucan): Resolve all 'noqa' items once the above is no longer true
# TODO(sfinucan): Remove the symlinks in this folder once Ocata is released

#
# XenAPI plugin for reading/writing information to xenstore
#

import base64
import commands  # noqa
try:
    import json
except ImportError:
    import simplejson as json
import time

import XenAPIPlugin

import dom0_pluginlib
dom0_pluginlib.configure_logging("agent")
import xenstore

DEFAULT_TIMEOUT = 30
PluginError = dom0_pluginlib.PluginError


class TimeoutError(StandardError):
    pass


class RebootDetectedError(StandardError):
    pass


def version(self, arg_dict):
    """Get version of agent."""
    timeout = int(arg_dict.pop('timeout', DEFAULT_TIMEOUT))
    arg_dict["value"] = json.dumps({"name": "version", "value": "agent"})
    request_id = arg_dict["id"]
    arg_dict["path"] = "data/host/%s" % request_id
    xenstore.write_record(self, arg_dict)
    try:
        resp = _wait_for_agent(self, request_id, arg_dict, timeout)
    except TimeoutError, e:  # noqa
        raise PluginError(e)
    return resp


def key_init(self, arg_dict):
    """Handles the Diffie-Hellman key exchange with the agent to
    establish the shared secret key used to encrypt/decrypt sensitive
    info to be passed, such as passwords. Returns the shared secret
    key value.
""" timeout = int(arg_dict.pop('timeout', DEFAULT_TIMEOUT)) # WARNING: Some older Windows agents will crash if the public key isn't # a string pub = arg_dict["pub"] arg_dict["value"] = json.dumps({"name": "keyinit", "value": pub}) request_id = arg_dict["id"] arg_dict["path"] = "data/host/%s" % request_id xenstore.write_record(self, arg_dict) try: resp = _wait_for_agent(self, request_id, arg_dict, timeout) except TimeoutError, e: # noqa raise PluginError(e) return resp def password(self, arg_dict): """Writes a request to xenstore that tells the agent to set the root password for the given VM. The password should be encrypted using the shared secret key that was returned by a previous call to key_init. The encrypted password value should be passed as the value for the 'enc_pass' key in arg_dict. """ timeout = int(arg_dict.pop('timeout', DEFAULT_TIMEOUT)) enc_pass = arg_dict["enc_pass"] arg_dict["value"] = json.dumps({"name": "password", "value": enc_pass}) request_id = arg_dict["id"] arg_dict["path"] = "data/host/%s" % request_id xenstore.write_record(self, arg_dict) try: resp = _wait_for_agent(self, request_id, arg_dict, timeout) except TimeoutError, e: # noqa raise PluginError(e) return resp def resetnetwork(self, arg_dict): """Writes a request to xenstore that tells the agent to reset networking. """ timeout = int(arg_dict.pop('timeout', DEFAULT_TIMEOUT)) arg_dict['value'] = json.dumps({'name': 'resetnetwork', 'value': ''}) request_id = arg_dict['id'] arg_dict['path'] = "data/host/%s" % request_id xenstore.write_record(self, arg_dict) try: resp = _wait_for_agent(self, request_id, arg_dict, timeout) except TimeoutError, e: # noqa raise PluginError(e) return resp def inject_file(self, arg_dict): """Expects a file path and the contents of the file to be written. Should be base64-encoded in order to eliminate errors as they are passed through the stack. 
Writes that information to xenstore for the agent, which will decode the file and intended path, and create it on the instance. The original agent munged both of these into a single entry; the new agent keeps them separate. We will need to test for the new agent, and write the xenstore records to match the agent version. We will also need to test to determine if the file injection method on the agent has been disabled, and raise a NotImplemented error if that is the case. """ timeout = int(arg_dict.pop('timeout', DEFAULT_TIMEOUT)) b64_path = arg_dict["b64_path"] b64_file = arg_dict["b64_contents"] request_id = arg_dict["id"] agent_features = _get_agent_features(self, arg_dict) if "file_inject" in agent_features: # New version of the agent. Agent should receive a 'value' # key whose value is a dictionary containing 'b64_path' and # 'b64_file'. See old version below. arg_dict["value"] = json.dumps({"name": "file_inject", "value": {"b64_path": b64_path, "b64_file": b64_file}}) elif "injectfile" in agent_features: # Old agent requires file path and file contents to be # combined into one base64 value. raw_path = base64.b64decode(b64_path) raw_file = base64.b64decode(b64_file) new_b64 = base64.b64encode("%s,%s" % (raw_path, raw_file)) arg_dict["value"] = json.dumps({"name": "injectfile", "value": new_b64}) else: # Either the methods don't exist in the agent, or they # have been disabled. raise NotImplementedError("NOT IMPLEMENTED: Agent does not" " support file injection.") arg_dict["path"] = "data/host/%s" % request_id xenstore.write_record(self, arg_dict) try: resp = _wait_for_agent(self, request_id, arg_dict, timeout) except TimeoutError, e: # noqa raise PluginError(e) return resp def agent_update(self, arg_dict): """Expects an URL and md5sum of the contents Then directs the agent to update itself. 
""" timeout = int(arg_dict.pop('timeout', DEFAULT_TIMEOUT)) request_id = arg_dict["id"] url = arg_dict["url"] md5sum = arg_dict["md5sum"] arg_dict["value"] = json.dumps({"name": "agentupdate", "value": "%s,%s" % (url, md5sum)}) arg_dict["path"] = "data/host/%s" % request_id xenstore.write_record(self, arg_dict) try: resp = _wait_for_agent(self, request_id, arg_dict, timeout) except TimeoutError, e: # noqa raise PluginError(e) return resp def _get_agent_features(self, arg_dict): """Return an array of features that an agent supports.""" timeout = int(arg_dict.pop('timeout', DEFAULT_TIMEOUT)) tmp_id = commands.getoutput("uuidgen") dct = {} dct.update(arg_dict) dct["value"] = json.dumps({"name": "features", "value": ""}) dct["path"] = "data/host/%s" % tmp_id xenstore.write_record(self, dct) try: resp = _wait_for_agent(self, tmp_id, dct, timeout) except TimeoutError, e: # noqa raise PluginError(e) response = json.loads(resp) if response['returncode'] != 0: return response["message"].split(",") else: return {} def _wait_for_agent(self, request_id, arg_dict, timeout): """Periodically checks xenstore for a response from the agent. The request is always written to 'data/host/{id}', and the agent's response for that request will be in 'data/guest/{id}'. If no value appears from the agent within the timeout specified, the original request is deleted and a TimeoutError is raised. """ arg_dict["path"] = "data/guest/%s" % request_id arg_dict["ignore_missing_path"] = True start = time.time() reboot_detected = False while time.time() - start < timeout: ret = xenstore.read_record(self, arg_dict) # Note: the response for None with be a string that includes # double quotes. if ret != '"None"': # The agent responded return ret time.sleep(.5) # NOTE(johngarbutt) If we can't find this domid, then # the VM has rebooted, so we must trigger domid refresh. # Check after the sleep to give xenstore time to update # after the VM reboot. 
exists_args = { "dom_id": arg_dict["dom_id"], "path": "name", } dom_id_is_present = xenstore.record_exists(exists_args) if not dom_id_is_present: reboot_detected = True break # No response within the timeout period; bail out # First, delete the request record arg_dict["path"] = "data/host/%s" % request_id xenstore.delete_record(self, arg_dict) if reboot_detected: raise RebootDetectedError("REBOOT: dom_id %s no longer " "present" % arg_dict["dom_id"]) else: raise TimeoutError("TIMEOUT: No response from agent within" " %s seconds." % timeout) if __name__ == "__main__": XenAPIPlugin.dispatch( {"version": version, "key_init": key_init, "password": password, "resetnetwork": resetnetwork, "inject_file": inject_file, "agentupdate": agent_update}) os-xenapi-0.3.1/os_xenapi/dom0/etc/xapi.d/plugins/dom0_pluginlib.py0000664000175000017500000001201413160424533026264 0ustar jenkinsjenkins00000000000000# Copyright (c) 2010 Citrix Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # NOTE: XenServer still only supports Python 2.4 in it's dom0 userspace # which means the Nova xenapi plugins must use only Python 2.4 features # # Helper functions for the Nova xapi plugins. In time, this will merge # with the pluginlib.py shipped with xapi, but for now, that file is not # very stable, so it's easiest just to have a copy of all the functions # that we need. 
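The agent plugin above implements a simple request/response protocol over xenstore: the host writes a request under `data/host/{id}`, then polls `data/guest/{id}` until the agent answers or the timeout expires, deleting the request record on timeout. A toy Python 3 sketch of that loop, with a plain dict standing in for xenstore (all names here are illustrative, not the plugin's API):

```python
import time

store = {}  # stand-in for xenstore records

def send_request(request_id, payload):
    store["data/host/%s" % request_id] = payload

def wait_for_agent(request_id, timeout=2.0, poll=0.01):
    deadline = time.time() + timeout
    guest_path = "data/guest/%s" % request_id
    while time.time() < deadline:
        resp = store.get(guest_path, '"None"')
        if resp != '"None"':  # the agent has responded
            return resp
        time.sleep(poll)
    # No response within the timeout: delete the request record,
    # as the real _wait_for_agent does, then signal the caller.
    store.pop("data/host/%s" % request_id, None)
    raise TimeoutError("no response from agent")

send_request("42", '{"name": "version"}')
store["data/guest/42"] = '{"returncode": "0"}'  # simulated agent reply
assert wait_for_agent("42") == '{"returncode": "0"}'
```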
# import logging import logging.handlers import os import time import XenAPI # global variable definition MAX_VBD_UNPLUG_RETRIES = 30 # Logging setup def configure_logging(name): log = logging.getLogger() log.setLevel(logging.DEBUG) if os.path.exists('/dev/log'): sysh = logging.handlers.SysLogHandler('/dev/log') sysh.setLevel(logging.DEBUG) formatter = logging.Formatter( '%s: %%(levelname)-8s%%(message)s' % name) sysh.setFormatter(formatter) log.addHandler(sysh) # Exceptions class PluginError(Exception): """Base Exception class for all plugin errors.""" def __init__(self, *args): Exception.__init__(self, *args) class ArgumentError(PluginError): # Raised when required arguments are missing, argument values are invalid, # or incompatible arguments are given. def __init__(self, *args): PluginError.__init__(self, *args) # Argument validation def exists(args, key): # Validates that a freeform string argument to a RPC method call is given. # Returns the string. if key in args: return args[key] else: raise ArgumentError('Argument %s is required.' % key) def optional(args, key): # If the given key is in args, return the corresponding value, otherwise # return None return key in args and args[key] or None def _get_domain_0(session): this_host_ref = session.xenapi.session.get_this_host(session.handle) expr = 'field "is_control_domain" = "true" and field "resident_on" = "%s"' expr = expr % this_host_ref return list(session.xenapi.VM.get_all_records_where(expr).keys())[0] def with_vdi_in_dom0(session, vdi, read_only, f): dom0 = _get_domain_0(session) vbd_rec = {} vbd_rec['VM'] = dom0 vbd_rec['VDI'] = vdi vbd_rec['userdevice'] = 'autodetect' vbd_rec['bootable'] = False vbd_rec['mode'] = read_only and 'RO' or 'RW' vbd_rec['type'] = 'disk' vbd_rec['unpluggable'] = True vbd_rec['empty'] = False vbd_rec['other_config'] = {} vbd_rec['qos_algorithm_type'] = '' vbd_rec['qos_algorithm_params'] = {} vbd_rec['qos_supported_algorithms'] = [] logging.debug('Creating VBD for VDI %s ... 
', vdi) vbd = session.xenapi.VBD.create(vbd_rec) logging.debug('Creating VBD for VDI %s done.', vdi) try: logging.debug('Plugging VBD %s ... ', vbd) session.xenapi.VBD.plug(vbd) logging.debug('Plugging VBD %s done.', vbd) return f(session.xenapi.VBD.get_device(vbd)) finally: logging.debug('Destroying VBD for VDI %s ... ', vdi) _vbd_unplug_with_retry(session, vbd) try: session.xenapi.VBD.destroy(vbd) except XenAPI.Failure, e: # noqa logging.error('Ignoring XenAPI.Failure %s', e) logging.debug('Destroying VBD for VDI %s done.', vdi) def _vbd_unplug_with_retry(session, vbd): """Call VBD.unplug on the given VBD with a retry if we get DEVICE_DETACH_REJECTED. For reasons which I don't understand, we're seeing the device still in use, even when all processes using the device should be dead. """ retry_count = MAX_VBD_UNPLUG_RETRIES while True: try: session.xenapi.VBD.unplug(vbd) logging.debug('VBD.unplug successful first time.') return except XenAPI.Failure, e: # noqa if (len(e.details) > 0 and e.details[0] == 'DEVICE_DETACH_REJECTED'): retry_count -= 1 if (retry_count <= 0): raise PluginError('VBD.unplug failed after retry %s times.' % MAX_VBD_UNPLUG_RETRIES) logging.debug('VBD.unplug rejected: retrying...') time.sleep(1) elif (len(e.details) > 0 and e.details[0] == 'DEVICE_ALREADY_DETACHED'): logging.debug('VBD.unplug successful eventually.') return else: logging.error('Ignoring XenAPI.Failure in VBD.unplug: %s', e) return os-xenapi-0.3.1/os_xenapi/dom0/etc/xapi.d/plugins/utils.py0000664000175000017500000004122613160424533024527 0ustar jenkinsjenkins00000000000000# Copyright (c) 2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # NOTE: XenServer still only supports Python 2.4 in it's dom0 userspace # which means the Nova xenapi plugins must use only Python 2.4 features """Various utilities used by XenServer plugins.""" try: import cPickle as pickle except ImportError: import pickle import errno import logging import os import shutil import signal import subprocess import tempfile import XenAPIPlugin LOG = logging.getLogger(__name__) CHUNK_SIZE = 8192 class CommandNotFound(Exception): pass def delete_if_exists(path): try: os.unlink(path) except OSError, e: # noqa if e.errno == errno.ENOENT: LOG.warning("'%s' was already deleted, skipping delete", path) else: raise def _link(src, dst): LOG.info("Hard-linking file '%s' -> '%s'", src, dst) os.link(src, dst) def _rename(src, dst): LOG.info("Renaming file '%s' -> '%s'", src, dst) try: os.rename(src, dst) except OSError, e: # noqa if e.errno == errno.EXDEV: LOG.error("Invalid cross-device link. 
Perhaps %s and %s should " "be symlinked on the same filesystem?", src, dst) raise def make_subprocess(cmdline, stdout=False, stderr=False, stdin=False, universal_newlines=False, close_fds=True, env=None): """Make a subprocess according to the given command-line string""" LOG.info("Running cmd '%s'", " ".join(cmdline)) kwargs = {} kwargs['stdout'] = stdout and subprocess.PIPE or None kwargs['stderr'] = stderr and subprocess.PIPE or None kwargs['stdin'] = stdin and subprocess.PIPE or None kwargs['universal_newlines'] = universal_newlines kwargs['close_fds'] = close_fds kwargs['env'] = env try: proc = subprocess.Popen(cmdline, **kwargs) except OSError, e: # noqa if e.errno == errno.ENOENT: raise CommandNotFound else: raise return proc class SubprocessException(Exception): def __init__(self, cmdline, ret, out, err): Exception.__init__(self, "'%s' returned non-zero exit code: " "retcode=%i, out='%s', stderr='%s'" % (cmdline, ret, out, err)) self.cmdline = cmdline self.ret = ret self.out = out self.err = err def finish_subprocess(proc, cmdline, cmd_input=None, ok_exit_codes=None): """Ensure that the process returned a zero exit code indicating success""" if ok_exit_codes is None: ok_exit_codes = [0] out, err = proc.communicate(cmd_input) ret = proc.returncode if ret not in ok_exit_codes: LOG.error("Command '%(cmdline)s' with process id '%(pid)s' expected " "return code in '%(ok)s' but got '%(rc)s': %(err)s", {'cmdline': cmdline, 'pid': proc.pid, 'ok': ok_exit_codes, 'rc': ret, 'err': err}) raise SubprocessException(' '.join(cmdline), ret, out, err) return out def run_command(cmd, cmd_input=None, ok_exit_codes=None): """Abstracts out the basics of issuing system commands. If the command returns anything in stderr, an exception is raised with that information. Otherwise, the output from stdout is returned. cmd_input is passed to the process on standard input. 
""" proc = make_subprocess(cmd, stdout=True, stderr=True, stdin=True, close_fds=True) return finish_subprocess(proc, cmd, cmd_input=cmd_input, ok_exit_codes=ok_exit_codes) def try_kill_process(proc): """Sends the given process the SIGKILL signal.""" pid = proc.pid LOG.info("Killing process %s", pid) try: os.kill(pid, signal.SIGKILL) except Exception: LOG.exception("Failed to kill %s", pid) def make_staging_area(sr_path): """The staging area is a place we temporarily store and manipulate VHDs. The use of the staging area is different for upload and download: Download ======== When we download the tarball, the VHDs contained within will have names like "snap.vhd" and "image.vhd". We need to assign UUIDs to them before moving them into the SR. However, since 'image.vhd' may be a base_copy, we need to link it to 'snap.vhd' (using vhd-util modify) before moving both into the SR (otherwise the SR.scan will cause 'image.vhd' to be deleted). The staging area gives us a place to perform these operations before they are moved to the SR, scanned, and then registered with XenServer. Upload ====== On upload, we want to rename the VHDs to reflect what they are, 'snap.vhd' in the case of the snapshot VHD, and 'image.vhd' in the case of the base_copy. The staging area provides a directory in which we can create hard-links to rename the VHDs without affecting what's in the SR. NOTE ==== The staging area is created as a subdirectory within the SR in order to guarantee that it resides within the same filesystem and therefore permit hard-linking and cheap file moves. """ staging_path = tempfile.mkdtemp(dir=sr_path) return staging_path def cleanup_staging_area(staging_path): """Remove staging area directory On upload, the staging area contains hard-links to the VHDs in the SR; it's safe to remove the staging-area because the SR will keep the link count > 0 (so the VHDs in the SR will not be deleted). 
""" if os.path.exists(staging_path): shutil.rmtree(staging_path) def _handle_old_style_images(staging_path): """Rename files to conform to new image format, if needed. Old-Style: snap.vhd -> image.vhd -> base.vhd New-Style: 0.vhd -> 1.vhd -> ... (n-1).vhd The New-Style format has the benefit of being able to support a VDI chain of arbitrary length. """ file_num = 0 for filename in ('snap.vhd', 'image.vhd', 'base.vhd'): path = os.path.join(staging_path, filename) if os.path.exists(path): _rename(path, os.path.join(staging_path, "%d.vhd" % file_num)) file_num += 1 # Rename any format of name to 0.vhd when there is only single one contents = os.listdir(staging_path) if len(contents) == 1: filename = contents[0] if filename != '0.vhd' and filename.endswith('.vhd'): _rename( os.path.join(staging_path, filename), os.path.join(staging_path, '0.vhd')) def _assert_vhd_not_hidden(path): """Sanity check to ensure that only appropriate VHDs are marked as hidden. If this flag is incorrectly set, then when we move the VHD into the SR, it will be deleted out from under us. """ query_cmd = ["vhd-util", "query", "-n", path, "-f"] out = run_command(query_cmd) for line in out.splitlines(): if line.lower().startswith('hidden'): value = line.split(':')[1].strip() if value == "1": raise Exception( "VHD %s is marked as hidden without child" % path) def _vhd_util_check(vdi_path): check_cmd = ["vhd-util", "check", "-n", vdi_path, "-p"] out = run_command(check_cmd, ok_exit_codes=[0, 22]) first_line = out.splitlines()[0].strip() return out, first_line def _validate_vhd(vdi_path): """This checks for several errors in the VHD structure. Most notably, it checks that the timestamp in the footer is correct, but may pick up other errors also. This check ensures that the timestamps listed in the VHD footer aren't in the future. This can occur during a migration if the clocks on the two Dom0's are out-of-sync. This would corrupt the SR if it were imported, so generate an exception to bail. 
""" out, first_line = _vhd_util_check(vdi_path) if 'invalid' in first_line: LOG.warning("VHD invalid, attempting repair.") repair_cmd = ["vhd-util", "repair", "-n", vdi_path] run_command(repair_cmd) out, first_line = _vhd_util_check(vdi_path) if 'invalid' in first_line: if 'footer' in first_line: part = 'footer' elif 'header' in first_line: part = 'header' else: part = 'setting' details = first_line.split(':', 1) if len(details) == 2: details = details[1] else: details = first_line extra = '' if 'timestamp' in first_line: extra = (" ensure source and destination host machines have " "time set correctly") LOG.info("VDI Error details: %s", out) raise Exception( "VDI '%(vdi_path)s' has an invalid %(part)s: '%(details)s'" "%(extra)s" % {'vdi_path': vdi_path, 'part': part, 'details': details, 'extra': extra}) LOG.info("VDI is valid: %s", vdi_path) def _validate_vdi_chain(vdi_path): """Check VDI chain This check ensures that the parent pointers on the VHDs are valid before we move the VDI chain to the SR. This is *very* important because a bad parent pointer will corrupt the SR causing a cascade of failures. """ def get_parent_path(path): query_cmd = ["vhd-util", "query", "-n", path, "-p"] out = run_command(query_cmd, ok_exit_codes=[0, 22]) first_line = out.splitlines()[0].strip() if first_line.endswith(".vhd"): return first_line elif 'has no parent' in first_line: return None elif 'query failed' in first_line: raise Exception("VDI '%s' not present which breaks" " the VDI chain, bailing out" % path) else: raise Exception("Unexpected output '%s' from vhd-util" % out) cur_path = vdi_path while cur_path: _validate_vhd(cur_path) cur_path = get_parent_path(cur_path) def _validate_sequenced_vhds(staging_path): # This check ensures that the VHDs in the staging area are sequenced # properly from 0 to n-1 with no gaps. 
seq_num = 0 filenames = os.listdir(staging_path) for filename in filenames: if not filename.endswith('.vhd'): continue # Ignore legacy swap embedded in the image, generated on-the-fly now if filename == "swap.vhd": continue vhd_path = os.path.join(staging_path, "%d.vhd" % seq_num) if not os.path.exists(vhd_path): raise Exception("Corrupt image. Expected seq number: %d. Files: %s" % (seq_num, filenames)) seq_num += 1 def import_vhds(sr_path, staging_path, uuid_stack): """Move VHDs from staging area into the SR. The staging area is necessary because we need to perform some fixups (assigning UUIDs, relinking the VHD chain) before moving into the SR, otherwise the SR manager process could potentially delete the VHDs out from under us. Returns: A dict of imported VHDs: {'root': {'uuid': 'ffff-aaaa'}} """ _handle_old_style_images(staging_path) _validate_sequenced_vhds(staging_path) files_to_move = [] # Collect sequenced VHDs and assign UUIDs to them seq_num = 0 while True: orig_vhd_path = os.path.join(staging_path, "%d.vhd" % seq_num) if not os.path.exists(orig_vhd_path): break # Rename (0, 1 .. 
N).vhd -> aaaa-bbbb-cccc-dddd.vhd vhd_uuid = uuid_stack.pop() vhd_path = os.path.join(staging_path, "%s.vhd" % vhd_uuid) _rename(orig_vhd_path, vhd_path) if seq_num == 0: leaf_vhd_path = vhd_path leaf_vhd_uuid = vhd_uuid files_to_move.append(vhd_path) seq_num += 1 # Re-link VHDs, in reverse order, from base-copy -> leaf parent_path = None for vhd_path in reversed(files_to_move): if parent_path: # Link to parent modify_cmd = ["vhd-util", "modify", "-n", vhd_path, "-p", parent_path] run_command(modify_cmd) parent_path = vhd_path # Sanity check the leaf VHD _assert_vhd_not_hidden(leaf_vhd_path) _validate_vdi_chain(leaf_vhd_path) # Move files into SR for orig_path in files_to_move: new_path = os.path.join(sr_path, os.path.basename(orig_path)) _rename(orig_path, new_path) imported_vhds = dict(root=dict(uuid=leaf_vhd_uuid)) return imported_vhds def prepare_staging_area(sr_path, staging_path, vdi_uuids, seq_num=0): """Hard-link VHDs into staging area.""" for vdi_uuid in vdi_uuids: source = os.path.join(sr_path, "%s.vhd" % vdi_uuid) link_name = os.path.join(staging_path, "%d.vhd" % seq_num) _link(source, link_name) seq_num += 1 def create_tarball(fileobj, path, callback=None, compression_level=None): """Create a tarball from a given path. :param fileobj: a file-like object holding the tarball byte-stream. If None, then only the callback will be used. :param path: path to create tarball from :param callback: optional callback to call on each chunk written :param compression_level: compression level, e.g., 9 for gzip -9. 
""" tar_cmd = ["tar", "-zc", "--directory=%s" % path, "."] env = os.environ.copy() if compression_level and 1 <= compression_level <= 9: env["GZIP"] = "-%d" % compression_level tar_proc = make_subprocess(tar_cmd, stdout=True, stderr=True, env=env) try: while True: chunk = tar_proc.stdout.read(CHUNK_SIZE) if chunk == '': break if callback: callback(chunk) if fileobj: fileobj.write(chunk) except Exception: try_kill_process(tar_proc) raise finish_subprocess(tar_proc, tar_cmd) def extract_tarball(fileobj, path, callback=None): """Extract a tarball to a given path. :param fileobj: a file-like object holding the tarball byte-stream :param path: path to extract tarball into :param callback: optional callback to call on each chunk read """ tar_cmd = ["tar", "-zx", "--directory=%s" % path] tar_proc = make_subprocess(tar_cmd, stderr=True, stdin=True) try: while True: chunk = fileobj.read(CHUNK_SIZE) if chunk == '': break if callback: callback(chunk) tar_proc.stdin.write(chunk) # NOTE(tpownall): If we do not poll for the tar process exit # code when tar has exited pre maturely there is the chance # that tar will become a defunct zombie child under glance plugin # and re parented under init forever waiting on the stdin pipe to # close. Polling for the exit code allows us to break the pipe. returncode = tar_proc.poll() tar_pid = tar_proc.pid if returncode is not None: LOG.error("tar extract with process id '%(pid)s' " "exited early with '%(rc)s'", {'pid': tar_pid, 'rc': returncode}) raise SubprocessException( ' '.join(tar_cmd), returncode, "", "") except SubprocessException: # no need to kill already dead process raise except Exception: LOG.exception("Failed while sending data to tar pid: %s", tar_pid) try_kill_process(tar_proc) raise finish_subprocess(tar_proc, tar_cmd) def make_dev_path(dev, partition=None, base='/dev'): """Return a path to a particular device. 
>>> make_dev_path('xvdc') /dev/xvdc >>> make_dev_path('xvdc', 1) /dev/xvdc1 """ path = os.path.join(base, dev) if partition: path += str(partition) return path def _handle_serialization(func): def wrapped(session, params): params = pickle.loads(params['params']) rv = func(session, *params['args'], **params['kwargs']) return pickle.dumps(rv) return wrapped def register_plugin_calls(*funcs): """Wrapper around XenAPIPlugin.dispatch which handles pickle serialization. """ wrapped_dict = {} for func in funcs: wrapped_dict[func.__name__] = _handle_serialization(func) XenAPIPlugin.dispatch(wrapped_dict) os-xenapi-0.3.1/os_xenapi/dom0/etc/xapi.d/plugins/kernel.py0000664000175000017500000001100513160424533024637 0ustar jenkinsjenkins00000000000000#!/usr/bin/env python # Copyright (c) 2012 OpenStack Foundation # Copyright (c) 2010 Citrix Systems, Inc. # Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
# NOTE: XenServer still only supports Python 2.4 in it's dom0 userspace # which means the Nova xenapi plugins must use only Python 2.4 features # TODO(sfinucan): Resolve all 'noqa' items once the above is no longer true """Handle the manipulation of kernel images.""" import errno import os import shutil import XenAPIPlugin import dom0_pluginlib dom0_pluginlib.configure_logging('kernel') logging = dom0_pluginlib.logging exists = dom0_pluginlib.exists optional = dom0_pluginlib.optional with_vdi_in_dom0 = dom0_pluginlib.with_vdi_in_dom0 KERNEL_DIR = '/boot/guest' def _copy_vdi(dest, copy_args): vdi_uuid = copy_args['vdi_uuid'] vdi_size = copy_args['vdi_size'] cached_image = copy_args['cached-image'] logging.debug("copying kernel/ramdisk file from %s to /boot/guest/%s", dest, vdi_uuid) filename = KERNEL_DIR + '/' + vdi_uuid # Make sure KERNEL_DIR exists, otherwise create it if not os.path.isdir(KERNEL_DIR): logging.debug("Creating directory %s", KERNEL_DIR) os.makedirs(KERNEL_DIR) # Read data from /dev/ and write into a file on /boot/guest of = open(filename, 'wb') f = open(dest, 'rb') # Copy only vdi_size bytes data = f.read(vdi_size) of.write(data) if cached_image: # Create a cache file. If caching is enabled, kernel images do not have # to be fetched from glance. cached_image = KERNEL_DIR + '/' + cached_image logging.debug("copying kernel/ramdisk file from %s to /boot/guest/%s", dest, cached_image) cache_file = open(cached_image, 'wb') cache_file.write(data) cache_file.close() logging.debug("Done. Filename: %s", cached_image) f.close() of.close() logging.debug("Done. 
Filename: %s", filename) return filename def copy_vdi(session, args): vdi = exists(args, 'vdi-ref') size = exists(args, 'image-size') cached_image = optional(args, 'cached-image') # Use the uuid as a filename vdi_uuid = session.xenapi.VDI.get_uuid(vdi) copy_args = {'vdi_uuid': vdi_uuid, 'vdi_size': int(size), 'cached-image': cached_image} filename = with_vdi_in_dom0(session, vdi, False, lambda dev: _copy_vdi('/dev/%s' % dev, copy_args)) return filename def create_kernel_ramdisk(session, args): # Creates a copy of the kernel/ramdisk image if it is present in the # cache. If the image is not present in the cache, it does nothing. cached_image = exists(args, 'cached-image') image_uuid = exists(args, 'new-image-uuid') cached_image_filename = KERNEL_DIR + '/' + cached_image filename = KERNEL_DIR + '/' + image_uuid if os.path.isfile(cached_image_filename): shutil.copyfile(cached_image_filename, filename) logging.debug("Done. Filename: %s", filename) else: filename = "" logging.debug("Cached kernel/ramdisk image not found") return filename def _remove_file(filepath): try: os.remove(filepath) except OSError, exc: # noqa if exc.errno != errno.ENOENT: raise def remove_kernel_ramdisk(session, args): """Removes kernel and/or ramdisk from dom0's file system.""" kernel_file = optional(args, 'kernel-file') ramdisk_file = optional(args, 'ramdisk-file') if kernel_file: _remove_file(kernel_file) if ramdisk_file: _remove_file(ramdisk_file) return "ok" if __name__ == '__main__': XenAPIPlugin.dispatch({'copy_vdi': copy_vdi, 'create_kernel_ramdisk': create_kernel_ramdisk, 'remove_kernel_ramdisk': remove_kernel_ramdisk}) os-xenapi-0.3.1/os_xenapi/dom0/etc/xapi.d/plugins/bandwidth.py0000664000175000017500000000355013160424533025331 0ustar jenkinsjenkins00000000000000#!/usr/bin/env python # Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # NOTE: XenServer still only supports Python 2.4 in its dom0 userspace # which means the Nova xenapi plugins must use only Python 2.4 features """Fetch bandwidth data from VIF network devices.""" import utils import dom0_pluginlib import re dom0_pluginlib.configure_logging('bandwidth') def _read_proc_net(): f = open('/proc/net/dev', 'r') try: return f.readlines() finally: f.close() def _get_bandwidth_from_proc(): devs = [l.strip() for l in _read_proc_net()] # ignore headers devs = devs[2:] vif_pattern = re.compile("^vif(\d+)\.(\d+)") dlist = [d.split(':', 1) for d in devs if vif_pattern.match(d)] devmap = dict() for name, stats in dlist: slist = stats.split() dom, vifnum = name[3:].split('.', 1) dev = devmap.get(dom, {}) # Note, we deliberately swap in and out, as instance traffic # shows up inverted due to going through the bridge. (mdragon) dev[vifnum] = dict(bw_in=int(slist[8]), bw_out=int(slist[0])) devmap[dom] = dev return devmap def fetch_all_bandwidth(session): return _get_bandwidth_from_proc() if __name__ == '__main__': utils.register_plugin_calls(fetch_all_bandwidth) os-xenapi-0.3.1/os_xenapi/dom0/etc/xapi.d/plugins/workarounds.py0000664000175000017500000000315113160424533025740 0ustar jenkinsjenkins00000000000000#!/usr/bin/env python # Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # NOTE: XenServer still only supports Python 2.4 in it's dom0 userspace # which means the Nova xenapi plugins must use only Python 2.4 features """Handle the uploading and downloading of images via Glance.""" import os import shutil import utils import dom0_pluginlib dom0_pluginlib.configure_logging('workarounds') def _copy_vdis(sr_path, staging_path, vdi_uuids): seq_num = 0 for vdi_uuid in vdi_uuids: src = os.path.join(sr_path, "%s.vhd" % vdi_uuid) dst = os.path.join(staging_path, "%d.vhd" % seq_num) shutil.copyfile(src, dst) seq_num += 1 def safe_copy_vdis(session, sr_path, vdi_uuids, uuid_stack): staging_path = utils.make_staging_area(sr_path) try: _copy_vdis(sr_path, staging_path, vdi_uuids) return utils.import_vhds(sr_path, staging_path, uuid_stack) finally: utils.cleanup_staging_area(staging_path) if __name__ == '__main__': utils.register_plugin_calls(safe_copy_vdis) os-xenapi-0.3.1/os_xenapi/dom0/etc/xapi.d/plugins/migration.py0000664000175000017500000000566613160424533025370 0ustar jenkinsjenkins00000000000000#!/usr/bin/env python # Copyright 2010 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. # NOTE: XenServer still only supports Python 2.4 in it's dom0 userspace # which means the Nova xenapi plugins must use only Python 2.4 features """ XenAPI Plugin for transferring data between host nodes """ import utils import dom0_pluginlib dom0_pluginlib.configure_logging('migration') logging = dom0_pluginlib.logging def move_vhds_into_sr(session, instance_uuid, sr_path, uuid_stack): """Moves the VHDs from their copied location to the SR.""" staging_path = "/images/instance%s" % instance_uuid imported_vhds = utils.import_vhds(sr_path, staging_path, uuid_stack) utils.cleanup_staging_area(staging_path) return imported_vhds def _rsync_vhds(instance_uuid, host, staging_path, user="root"): if not staging_path.endswith('/'): staging_path += '/' dest_path = '/images/instance%s/' % (instance_uuid) ip_cmd = ["/sbin/ip", "addr", "show"] output = utils.run_command(ip_cmd) if ' %s/' % host in output: # If copying to localhost, don't use SSH rsync_cmd = ["/usr/bin/rsync", "-av", "--progress", staging_path, dest_path] else: ssh_cmd = 'ssh -o StrictHostKeyChecking=no' rsync_cmd = ["/usr/bin/rsync", "-av", "--progress", "-e", ssh_cmd, staging_path, '%s@%s:%s' % (user, host, dest_path)] # NOTE(hillad): rsync's progress is carriage returned, requiring # universal_newlines for real-time output. 
rsync_proc = utils.make_subprocess(rsync_cmd, stdout=True, stderr=True, universal_newlines=True) while True: rsync_progress = rsync_proc.stdout.readline() if not rsync_progress: break logging.debug("[%s] %s" % (instance_uuid, rsync_progress)) utils.finish_subprocess(rsync_proc, rsync_cmd) def transfer_vhd(session, instance_uuid, host, vdi_uuid, sr_path, seq_num): """Rsyncs a VHD to an adjacent host.""" staging_path = utils.make_staging_area(sr_path) try: utils.prepare_staging_area(sr_path, staging_path, [vdi_uuid], seq_num=seq_num) _rsync_vhds(instance_uuid, host, staging_path) finally: utils.cleanup_staging_area(staging_path) if __name__ == '__main__': utils.register_plugin_calls(move_vhds_into_sr, transfer_vhd) os-xenapi-0.3.1/os_xenapi/dom0/etc/xapi.d/plugins/console.py0000664000175000017500000000521313160424533025025 0ustar jenkinsjenkins00000000000000#!/usr/bin/python # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
# NOTE: XenServer still only supports Python 2.4 in it's dom0 userspace # which means the Nova xenapi plugins must use only Python 2.4 features # TODO(sfinucan): Resolve all 'noqa' items once the above is no longer true """ To configure this plugin, you must set the following xenstore key: /local/logconsole/@ = "/var/log/xen/guest/console.%d" This can be done by running: xenstore-write /local/logconsole/@ "/var/log/xen/guest/console.%d" WARNING: You should ensure appropriate log rotation to ensure guests are not able to consume too much Dom0 disk space, and equally should not be able to stop other guests from logging. Adding and removing the following xenstore key will reopen the log, as will be required after a log rotate: /local/logconsole/ """ import base64 import logging import zlib import XenAPIPlugin import dom0_pluginlib dom0_pluginlib.configure_logging("console") CONSOLE_LOG_DIR = '/var/log/xen/guest' CONSOLE_LOG_FILE_PATTERN = CONSOLE_LOG_DIR + '/console.%d' MAX_CONSOLE_BYTES = 102400 SEEK_SET = 0 SEEK_END = 2 def _last_bytes(file_like_object): try: file_like_object.seek(-MAX_CONSOLE_BYTES, SEEK_END) except IOError, e: # noqa if e.errno == 22: file_like_object.seek(0, SEEK_SET) else: raise return file_like_object.read() def get_console_log(session, arg_dict): try: raw_dom_id = arg_dict['dom_id'] except KeyError: raise dom0_pluginlib.PluginError("Missing dom_id") try: dom_id = int(raw_dom_id) except ValueError: raise dom0_pluginlib.PluginError("Invalid dom_id") logfile = open(CONSOLE_LOG_FILE_PATTERN % dom_id, 'rb') try: try: log_content = _last_bytes(logfile) except IOError, e: # noqa msg = "Error reading console: %s" % e logging.debug(msg) raise dom0_pluginlib.PluginError(msg) finally: logfile.close() return base64.b64encode(zlib.compress(log_content)) if __name__ == "__main__": XenAPIPlugin.dispatch({"get_console_log": get_console_log}) os-xenapi-0.3.1/os_xenapi/client/0000775000175000017500000000000013160424745020015 5ustar 
jenkinsjenkins00000000000000os-xenapi-0.3.1/os_xenapi/client/i18n.py0000664000175000017500000000250113160424533021137 0ustar jenkinsjenkins00000000000000# Copyright 2016 Citrix. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """oslo.i18n integration module. See http://docs.openstack.org/developer/oslo.i18n/usage.html . """ import oslo_i18n DOMAIN = 'os-xenapi' _translators = oslo_i18n.TranslatorFactory(domain=DOMAIN) # The primary translation function using the well-known name "_" _ = _translators.primary # Translators for log levels. # # The abbreviated names are meant to reflect the usual use of a short # name like '_'. The "L" is for "log" and the other letter comes from # the level. _LI = _translators.log_info _LW = _translators.log_warning _LE = _translators.log_error _LC = _translators.log_critical def translate(value, user_locale): return oslo_i18n.translate(value, user_locale) def get_available_languages(): return oslo_i18n.get_available_languages(DOMAIN) os-xenapi-0.3.1/os_xenapi/client/__init__.py0000664000175000017500000000000013160424533022107 0ustar jenkinsjenkins00000000000000os-xenapi-0.3.1/os_xenapi/client/disk_management.py0000664000175000017500000000472413160424533023517 0ustar jenkinsjenkins00000000000000# Copyright 2017 Citrix Systems # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. def inject_ipxe_config(session, sr_path, vdi_uuid, boot_menu_url, ip_address, netmask, gateway, dns, mkisofs_cmd): session.call_plugin_serialized('ipxe.py', 'inject', sr_path, vdi_uuid, boot_menu_url, ip_address, netmask, gateway, dns, mkisofs_cmd) def copy_vdi(session, vdi_ref, vdi_size, image_id=None): args = {} args['vdi-ref'] = vdi_ref args['image-size'] = str(vdi_size) if image_id: args['cached-image'] = image_id session.call_plugin('kernel.py', 'copy_vdi', args) def create_kernel_ramdisk(session, image_id, new_image_uuid): args = {} args['cached-image'] = image_id args['new-image-uuid'] = new_image_uuid session.call_plugin('kernel.py', 'create_kernel_ramdisk', args) def remove_kernel_ramdisk(session, kernel_file=None, ramdisk_file=None): args = {} if kernel_file: args['kernel-file'] = kernel_file if ramdisk_file: args['ramdisk-file'] = ramdisk_file if args: session.call_plugin('kernel.py', 'remove_kernel_ramdisk', args) def safe_copy_vdis(session, sr_path, vdi_uuids, uuid_stack): return session.call_plugin_serialized( 'workarounds.py', 'safe_copy_vdis', sr_path, vdi_uuids, uuid_stack) def make_partition(session, dev, partition_start, partition_end): session.call_plugin_serialized('partition_utils.py', 'make_partition', dev, partition_start, partition_end) def mkfs(session, dev, partnum, fs_type, fs_label): session.call_plugin_serialized('partition_utils.py', 'mkfs', dev, partnum, fs_type, fs_label) def wait_for_dev(session, dev_path, max_seconds): return session.call_plugin_serialized('partition_utils.py', 'wait_for_dev', dev_path, max_seconds) 
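The wrappers in disk_management.py are deliberately thin: each packs its keyword data into a flat args dict (sizes as strings, optional keys only when supplied) and hands it to the session's plugin-dispatch method. A minimal sketch of that contract, using a stub session that records calls instead of reaching a real XenServer — `StubSession` is a hypothetical test double, not part of os-xenapi; only the args-dict shape mirrors the `copy_vdi` wrapper above:

```python
class StubSession(object):
    """Hypothetical test double: records plugin calls instead of
    talking to a XenServer host."""

    def __init__(self):
        self.calls = []

    def call_plugin(self, plugin, fn, args):
        self.calls.append((plugin, fn, args))


def copy_vdi(session, vdi_ref, vdi_size, image_id=None):
    # Mirrors the wrapper above: the size travels as a string, and the
    # 'cached-image' key is only present when caching is requested.
    args = {'vdi-ref': vdi_ref, 'image-size': str(vdi_size)}
    if image_id:
        args['cached-image'] = image_id
    session.call_plugin('kernel.py', 'copy_vdi', args)


session = StubSession()
copy_vdi(session, 'OpaqueRef:1234', 4096)
copy_vdi(session, 'OpaqueRef:1234', 4096, image_id='img-1')

# First call carries no cache key; the second does.
print(session.calls[0])
print('cached-image' in session.calls[1][2])
```

The same pattern — a stub standing in for `XenAPISession` — is how these wrappers are typically unit-tested, since the dict shape is the whole contract with the dom0 plugin.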
os-xenapi-0.3.1/os_xenapi/client/objects.py0000664000175000017500000001151213160424533022013 0ustar jenkinsjenkins00000000000000# Copyright 2013 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_concurrency import lockutils synchronized = lockutils.synchronized_with_prefix('os-xenapi-') class XenAPISessionObject(object): """Wrapper to make calling and mocking the session easier The XenAPI protocol is an XML-RPC API that is based around the XenAPI database, and operations you can do on each of the objects stored in the database, such as VM, SR, VDI, etc. For more details see the XenAPI docs: http://docs.vmd.citrix.com/XenServer/6.2.0/1.0/en_gb/api/ Most objects, like VM, SR, VDI, etc., share a common set of methods: * vm_ref = session.VM.create(vm_rec) * vm_ref = session.VM.get_by_uuid(uuid) * session.VM.destroy(vm_ref) * vm_refs = session.VM.get_all() Each object also has specific messages, or functions, such as: * session.VM.clean_reboot(vm_ref) Each object has fields, like "VBDs", that can be fetched like this: * vbd_refs = session.VM.get_VBDs(vm_ref) You can get all the fields by fetching the full record. However, please note this is much more expensive than just fetching the field you require: * vm_rec = session.VM.get_record(vm_ref) When searching for particular objects, you may be tempted to use get_all(), but this often leads to races as objects get deleted under your feet.
It is preferable to use the undocumented: * vms = session.VM.get_all_records_where( 'field "is_control_domain"="true"') """ def __init__(self, session, name): self.session = session self.name = name def _call_method(self, method_name, *args): call = "%s.%s" % (self.name, method_name) return self.session.call_xenapi(call, *args) def __getattr__(self, method_name): return lambda *params: self._call_method(method_name, *params) class VM(XenAPISessionObject): """Virtual Machine.""" def __init__(self, session): super(VM, self).__init__(session, "VM") class VBD(XenAPISessionObject): """Virtual block device.""" def __init__(self, session): super(VBD, self).__init__(session, "VBD") def plug(self, vbd_ref, vm_ref): @synchronized('vbd-' + vm_ref) def synchronized_plug(): self._call_method("plug", vbd_ref) # NOTE(johngarbutt) we need to ensure there is only ever one # VBD.unplug or VBD.plug happening at once per VM # due to a bug in XenServer 6.1 and 6.2 synchronized_plug() def unplug(self, vbd_ref, vm_ref): @synchronized('vbd-' + vm_ref) def synchronized_unplug(): self._call_method("unplug", vbd_ref) # NOTE(johngarbutt) we need to ensure there is only ever one # VBD.unplug or VBD.plug happening at once per VM # due to a bug in XenServer 6.1 and 6.2 synchronized_unplug() class VDI(XenAPISessionObject): """Virtual disk image.""" def __init__(self, session): super(VDI, self).__init__(session, "VDI") class VIF(XenAPISessionObject): """Virtual Network Interface.""" def __init__(self, session): super(VIF, self).__init__(session, "VIF") class SR(XenAPISessionObject): """Storage Repository.""" def __init__(self, session): super(SR, self).__init__(session, "SR") class PBD(XenAPISessionObject): """Physical block device.""" def __init__(self, session): super(PBD, self).__init__(session, "PBD") class PIF(XenAPISessionObject): """Physical Network Interface.""" def __init__(self, session): super(PIF, self).__init__(session, "PIF") class VLAN(XenAPISessionObject): """VLAN.""" def 
__init__(self, session): super(VLAN, self).__init__(session, "VLAN") class Host(XenAPISessionObject): """XenServer hosts.""" def __init__(self, session): super(Host, self).__init__(session, "host") class Network(XenAPISessionObject): """Networks that VIFs are attached to.""" def __init__(self, session): super(Network, self).__init__(session, "network") class Pool(XenAPISessionObject): """Pool of hosts.""" def __init__(self, session): super(Pool, self).__init__(session, "pool") class Task(XenAPISessionObject): """XAPI task.""" def __init__(self, session): super(Task, self).__init__(session, "task") os-xenapi-0.3.1/os_xenapi/client/utils.py0000664000175000017500000000607013160424533021525 0ustar jenkinsjenkins00000000000000# Copyright 2017 Citrix Systems. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from eventlet import greenio import os from oslo_log import log as logging from os_xenapi.client import exception LOG = logging.getLogger(__name__) def get_default_sr(session): pool_ref = session.pool.get_all()[0] sr_ref = session.pool.get_default_SR(pool_ref) if sr_ref: return sr_ref else: raise exception.NotFound('Cannot find default SR') def create_vdi(session, sr_ref, instance, name_label, disk_type, virtual_size, read_only=False): """Create a VDI record and returns its reference.""" vdi_ref = session.VDI.create( {'name_label': name_label, 'name_description': '', 'SR': sr_ref, 'virtual_size': str(virtual_size), 'type': 'User', 'sharable': False, 'read_only': read_only, 'xenstore_data': {}, 'other_config': _get_vdi_other_config(disk_type, instance=instance), 'sm_config': {}, 'tags': []} ) LOG.debug('Created VDI %(vdi_ref)s (%(name_label)s,' ' %(virtual_size)s, %(read_only)s) on %(sr_ref)s.', {'vdi_ref': vdi_ref, 'name_label': name_label, 'virtual_size': virtual_size, 'read_only': read_only, 'sr_ref': sr_ref}) return vdi_ref def _get_vdi_other_config(disk_type, instance=None): """Return metadata to store in VDI's other_config attribute. `nova_instance_uuid` is used to associate a VDI with a particular instance so that, if it becomes orphaned from an unclean shutdown of a compute-worker, we can safely detach it. 
""" other_config = {'nova_disk_type': disk_type} # create_vdi may be called simply while creating a volume # hence information about instance may or may not be present if instance: other_config['nova_instance_uuid'] = instance['uuid'] return other_config def create_pipe(): rpipe, wpipe = os.pipe() rfile = greenio.GreenPipe(rpipe, 'rb', 0) wfile = greenio.GreenPipe(wpipe, 'wb', 0) return rfile, wfile def get_vdi_import_path(session, task_ref, vdi_ref): session_id = session.get_session_id() str_fmt = '/import_raw_vdi?session_id={}&task_id={}&vdi={}&format=vhd' return str_fmt.format(session_id, task_ref, vdi_ref) def get_vdi_export_path(session, task_ref, vdi_ref): session_id = session.get_session_id() str_fmt = '/export_raw_vdi?session_id={}&task_id={}&vdi={}&format=vhd' return str_fmt.format(session_id, task_ref, vdi_ref) os-xenapi-0.3.1/os_xenapi/client/host_network.py0000664000175000017500000001071213160424533023111 0ustar jenkinsjenkins00000000000000# Copyright 2013 Citrix Systems # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
def ovs_create_port(session, bridge, port, iface_id, mac, status):
    args = {'cmd': 'ovs_create_port',
            'args': {'bridge': bridge,
                     'port': port,
                     'iface-id': iface_id,
                     'mac': mac,
                     'status': status}
            }
    session.call_plugin_serialized('xenhost.py', 'network_config', args)


def ovs_add_port(session, bridge, port):
    args = {'cmd': 'ovs_add_port',
            'args': {'bridge_name': bridge, 'port_name': port}
            }
    session.call_plugin_serialized('xenhost.py', 'network_config', args)


def ovs_del_port(session, bridge, port):
    args = {'cmd': 'ovs_del_port',
            'args': {'bridge_name': bridge, 'port_name': port}
            }
    session.call_plugin_serialized('xenhost.py', 'network_config', args)


def ovs_del_br(session, bridge_name):
    args = {'cmd': 'ovs_del_br',
            'args': {'bridge_name': bridge_name}
            }
    session.call_plugin_serialized('xenhost.py', 'network_config', args)


def brctl_add_if(session, bridge_name, interface_name):
    args = {'cmd': 'brctl_add_if',
            'args': {'bridge_name': bridge_name,
                     'interface_name': interface_name}
            }
    session.call_plugin_serialized('xenhost.py', 'network_config', args)


def brctl_del_if(session, bridge_name, interface_name):
    args = {'cmd': 'brctl_del_if',
            'args': {'bridge_name': bridge_name,
                     'interface_name': interface_name}
            }
    session.call_plugin_serialized('xenhost.py', 'network_config', args)


def brctl_del_br(session, bridge_name):
    args = {'cmd': 'brctl_del_br',
            'args': {'bridge_name': bridge_name}
            }
    session.call_plugin_serialized('xenhost.py', 'network_config', args)


def brctl_add_br(session, bridge_name):
    args = {'cmd': 'brctl_add_br',
            'args': {'bridge_name': bridge_name}
            }
    session.call_plugin_serialized('xenhost.py', 'network_config', args)


def brctl_set_fd(session, bridge_name, fd):
    args = {'cmd': 'brctl_set_fd',
            'args': {'bridge_name': bridge_name, 'fd': fd}
            }
    session.call_plugin_serialized('xenhost.py', 'network_config', args)


def brctl_set_stp(session, bridge_name, stp_opt):
    args = {'cmd': 'brctl_set_stp',
            'args': {'bridge_name': bridge_name, 'option': stp_opt}
            }
    session.call_plugin_serialized('xenhost.py', 'network_config', args)


def ip_link_add_veth_pair(session, dev1_name, dev2_name):
    args = {'cmd': 'ip_link_add_veth_pair',
            'args': {'dev1_name': dev1_name, 'dev2_name': dev2_name}
            }
    session.call_plugin_serialized('xenhost.py', 'network_config', args)


def ip_link_del_dev(session, device):
    args = {'cmd': 'ip_link_del_dev',
            'args': {'device_name': device}
            }
    session.call_plugin_serialized('xenhost.py', 'network_config', args)


def ip_link_get_dev(session, device):
    args = {'cmd': 'ip_link_get_dev',
            'args': {'device_name': device}
            }
    session.call_plugin_serialized('xenhost.py', 'network_config', args)


def ip_link_set_dev(session, device, option):
    args = {'cmd': 'ip_link_set_dev',
            'args': {'device_name': device, 'option': option}
            }
    session.call_plugin_serialized('xenhost.py', 'network_config', args)


def ip_link_set_promisc(session, device, promisc_option):
    args = {'cmd': 'ip_link_set_promisc',
            'args': {'device_name': device, 'option': promisc_option}
            }
    session.call_plugin_serialized('xenhost.py', 'network_config', args)


def fetch_all_bandwidth(session):
    return session.call_plugin_serialized('bandwidth.py',
                                          'fetch_all_bandwidth')
os-xenapi-0.3.1/os_xenapi/client/host_agent.py
# Copyright 2017 Citrix Systems
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
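Every network helper above funnels into a single dom0 `network_config` plugin call, differing only in the `cmd`/`args` payload. A sketch with a fake session (an assumption used only to capture the call) shows the exact payload that would cross XenAPI:

```python
# Fake session that records plugin calls instead of contacting a host.
class FakeSession(object):
    def __init__(self):
        self.calls = []

    def call_plugin_serialized(self, plugin, fn, args):
        self.calls.append((plugin, fn, args))


def ovs_add_port(session, bridge, port):
    # Same payload shape as the helper in host_network.py.
    args = {'cmd': 'ovs_add_port',
            'args': {'bridge_name': bridge, 'port_name': port}}
    session.call_plugin_serialized('xenhost.py', 'network_config', args)


s = FakeSession()
ovs_add_port(s, 'xapi1', 'vif1.0')   # bridge/port names are illustrative
print(s.calls[0])
```

The dispatch-by-`cmd` shape keeps the dom0 plugin's entry point stable while new commands are added on both sides.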
def version(session, uuid, dom_id, timeout):
    args = {'id': uuid, 'dom_id': dom_id, 'timeout': timeout}
    return session.call_plugin('agent.py', 'version', args)


def key_init(session, uuid, dom_id, timeout, pub=''):
    args = {'id': uuid, 'dom_id': dom_id, 'timeout': timeout, 'pub': pub}
    return session.call_plugin('agent.py', 'key_init', args)


def agent_update(session, uuid, dom_id, timeout, url='', md5sum=''):
    args = {'id': uuid, 'dom_id': dom_id, 'timeout': timeout,
            'url': url, 'md5sum': md5sum}
    return session.call_plugin('agent.py', 'agentupdate', args)


def password(session, uuid, dom_id, timeout, enc_pass=''):
    args = {'id': uuid, 'dom_id': dom_id, 'timeout': timeout,
            'enc_pass': enc_pass}
    return session.call_plugin('agent.py', 'password', args)


def inject_file(session, uuid, dom_id, timeout, b64_path='',
                b64_contents=''):
    args = {'id': uuid, 'dom_id': dom_id, 'timeout': timeout,
            'b64_path': b64_path, 'b64_contents': b64_contents}
    return session.call_plugin('agent.py', 'inject_file', args)


def reset_network(session, uuid, dom_id, timeout):
    args = {'id': uuid, 'dom_id': dom_id, 'timeout': timeout}
    return session.call_plugin('agent.py', 'resetnetwork', args)
os-xenapi-0.3.1/os_xenapi/client/host_management.py
# Copyright 2017 Citrix Systems
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
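The `b64_path`/`b64_contents` parameter names of `inject_file` above imply the caller base64-encodes both values before the plugin call; the encoding step shown here (and the example path/key material) is an assumption for illustration, not taken from the plugin itself.

```python
import base64

# Encode a guest file path and its contents the way the b64_* parameter
# names suggest the agent plugin expects them.
path = base64.b64encode(b'/root/.ssh/authorized_keys').decode()
contents = base64.b64encode(b'ssh-rsa AAAAB3Nza... user@host\n').decode()

# Payload shape mirrors inject_file() above; id/dom_id/timeout are dummies.
args = {'id': 'instance-uuid', 'dom_id': '42', 'timeout': '30',
        'b64_path': path, 'b64_contents': contents}

# Round-trip check: the dom0 side would decode these back to raw bytes.
print(base64.b64decode(args['b64_path']).decode())
```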
def set_host_enabled(session, enabled):
    args = {"enabled": enabled}
    return session.call_plugin('xenhost.py', 'set_host_enabled', args)


def get_host_uptime(session):
    return session.call_plugin('xenhost.py', 'host_uptime', {})


def get_host_data(session):
    return session.call_plugin('xenhost.py', 'host_data', {})


def get_pci_type(session, pci_device):
    return session.call_plugin_serialized('xenhost.py', 'get_pci_type',
                                          pci_device)


def get_pci_device_details(session):
    return session.call_plugin_serialized('xenhost.py',
                                          'get_pci_device_details')
os-xenapi-0.3.1/os_xenapi/client/XenAPI.py
# Copyright 2013 Citrix Systems
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# import gettext import socket import sys if sys.version_info[0] == 2: import httplib as httpclient import xmlrpclib as xmlrpcclient else: import http.client as httpclient import xmlrpc.client as xmlrpcclient translation = gettext.translation('xen-xm', fallback=True) API_VERSION_1_1 = '1.1' API_VERSION_1_2 = '1.2' def below_python27(): if sys.version_info[0] <= 2 and sys.version_info[1] < 7: return True else: return False class Failure(Exception): def __init__(self, details): self.details = details def __str__(self): try: return str(self.details) except Exception: # To support py2.4/py2.7/py3 together, extract exception via sys # py2.4: except Exception, exn # py2.7/py3: except Exception as exn type, value = sys.exc_info()[:2] sys.stderr.write("%s, %s" % (type, value)) return "Xen-API failure: %s, %s" % (type, value) def _details_map(self): return dict([(str(i), self.details[i]) for i in range(len(self.details))]) class UDSHTTPConnection(httpclient.HTTPConnection): """HTTPConnection subclass to allow HTTP over Unix domain sockets. 
""" def connect(self): path = self.host.replace("_", "/") self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) self.sock.connect(path) class UDSTransport(xmlrpcclient.Transport): def __init__(self, use_datetime=0): if not below_python27(): xmlrpcclient.Transport.__init__(self, use_datetime) self._use_datetime = use_datetime self._connection = (None, None) self._extra_headers = [] def add_extra_header(self, key, value): self._extra_headers += [(key, value)] def make_connection(self, host): if below_python27(): # Python 2.4 compatibility class UDSHTTP(httpclient.HTTP): _connection_class = UDSHTTPConnection return UDSHTTP(host) else: return UDSHTTPConnection(host) def send_request(self, connection, handler, request_body): connection.putrequest("POST", handler) for key, value in self._extra_headers: connection.putheader(key, value) # Just a "constant" that we use to decide whether to retry the RPC _RECONNECT_AND_RETRY = object() class Session(xmlrpcclient.ServerProxy): """A server proxy and session manager for communicating with xapi. 
Example: session = Session('http://localhost/') session.login_with_password('me', 'password') session.xenapi.VM.start(vm_uuid) session.xenapi.session.logout() """ def __init__(self, uri, transport=None, encoding=None, verbose=0, allow_none=0): xmlrpcclient.ServerProxy.__init__(self, uri, transport=transport, encoding=encoding, verbose=verbose, allow_none=allow_none) self.transport = transport self._session = None self.last_login_method = None self.last_login_params = None self.API_version = API_VERSION_1_1 def xenapi_request(self, methodname, params): if methodname.startswith('login'): self._login(methodname, params) return None if methodname == 'logout' or methodname == 'session.logout': self._logout() return None retry_count = 0 while retry_count < 3: full_params = (self._session,) + params result = _parse_result(getattr(self, methodname)(*full_params)) if result is _RECONNECT_AND_RETRY: retry_count += 1 if self.last_login_method: self._login(self.last_login_method, self.last_login_params) else: raise xmlrpcclient.Fault(401, 'You must log in') else: return result raise xmlrpcclient.Fault( 500, 'Tried 3 times to get a valid session, but failed') def _login(self, method, params): try: result = _parse_result( getattr(self, 'session.%s' % method)(*params)) if result is _RECONNECT_AND_RETRY: raise xmlrpcclient.Fault( 500, 'Received SESSION_INVALID when logging in') self._session = result self.last_login_method = method self.last_login_params = params self.API_version = self._get_api_version() except socket.error: e = sys.exc_info()[1] if e.errno == socket.errno.ETIMEDOUT: raise xmlrpcclient.Fault(504, 'The connection timed out') else: raise e def _logout(self): try: if self.last_login_method.startswith("slave_local"): return _parse_result(self.session.local_logout(self._session)) else: return _parse_result(self.session.logout(self._session)) finally: self._session = None self.last_login_method = None self.last_login_params = None self.API_version = API_VERSION_1_1 def 
_get_api_version(self): pool = self.xenapi.pool.get_all()[0] host = self.xenapi.pool.get_master(pool) major = self.xenapi.host.get_API_version_major(host) minor = self.xenapi.host.get_API_version_minor(host) return "%s.%s" % (major, minor) def __getattr__(self, name): if name == 'handle': return self._session elif name == 'xenapi': return _Dispatcher(self.API_version, self.xenapi_request, None) elif name.startswith('login') or name.startswith('slave_local'): return lambda *params: self._login(name, params) elif name == 'logout': return _Dispatcher(self.API_version, self.xenapi_request, "logout") else: return xmlrpcclient.ServerProxy.__getattr__(self, name) def xapi_local(): return Session("http://_var_xapi_xapi/", transport=UDSTransport()) def _parse_result(result): if type(result) != dict or 'Status' not in result: raise xmlrpcclient.Fault( 500, 'Missing Status in response from server' + result) if result['Status'] == 'Success': if 'Value' in result: return result['Value'] else: raise xmlrpcclient.Fault( 500, 'Missing Value in response from server') else: if 'ErrorDescription' in result: if result['ErrorDescription'][0] == 'SESSION_INVALID': return _RECONNECT_AND_RETRY else: raise Failure(result['ErrorDescription']) else: raise xmlrpcclient.Fault( 500, 'Missing ErrorDescription in response from server') # Based upon _Method from xmlrpclib. 
class _Dispatcher(object): def __init__(self, API_version, send, name): self.__API_version = API_version self.__send = send self.__name = name def __repr__(self): if self.__name: return '' % self.__name else: return '' def __getattr__(self, name): if self.__name is None: return _Dispatcher(self.__API_version, self.__send, name) else: return _Dispatcher(self.__API_version, self.__send, "%s.%s" % (self.__name, name)) def __call__(self, *args): return self.__send(self.__name, args) os-xenapi-0.3.1/os_xenapi/client/session.py0000664000175000017500000003625213160424533022055 0ustar jenkinsjenkins00000000000000# Copyright 2013 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
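The `_Dispatcher` above turns attribute access into dotted XenAPI method names, which is why `session.xenapi.VM.start(...)` works without `VM` or `start` being defined anywhere. A stripped-down sketch of the same idea (names here are illustrative, not the real class):

```python
# Minimal dispatcher: each attribute access extends the dotted name; the
# final call hands (method_name, args) to a send function.
class Dispatcher(object):
    def __init__(self, send, name=None):
        self._send = send
        self._name = name

    def __getattr__(self, name):
        full = name if self._name is None else '%s.%s' % (self._name, name)
        return Dispatcher(self._send, full)

    def __call__(self, *args):
        return self._send(self._name, args)


sent = []
d = Dispatcher(lambda name, args: sent.append((name, args)) or 'ok')
result = d.VM.start('OpaqueRef:vm', False)  # becomes ('VM.start', (...))
print(sent[0])
```

Each attribute access returns a fresh dispatcher, so partially built names like `d.VM` can be reused safely.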
import ast
import contextlib
try:
    import cPickle as pickle
except ImportError:
    import pickle
import errno
import socket
import time

from eventlet import queue
from eventlet import timeout
from oslo_log import log as logging
from oslo_utils import versionutils
from six.moves import http_client
from six.moves import urllib
try:
    import xmlrpclib
except ImportError:
    import six.moves.xmlrpc_client as xmlrpclib

from os_xenapi.client import exception
from os_xenapi.client.i18n import _
from os_xenapi.client.i18n import _LW
from os_xenapi.client import objects as cli_objects
from os_xenapi.client import XenAPI

LOG = logging.getLogger(__name__)


def apply_session_helpers(session):
    session.VM = cli_objects.VM(session)
    session.SR = cli_objects.SR(session)
    session.VDI = cli_objects.VDI(session)
    session.VIF = cli_objects.VIF(session)
    session.VBD = cli_objects.VBD(session)
    session.PBD = cli_objects.PBD(session)
    session.PIF = cli_objects.PIF(session)
    session.VLAN = cli_objects.VLAN(session)
    session.host = cli_objects.Host(session)
    session.network = cli_objects.Network(session)
    session.pool = cli_objects.Pool(session)
    session.task = cli_objects.Task(session)


class XenAPISession(object):
    """The session to invoke XenAPI SDK calls."""

    # This is not a config option as it should only ever be
    # changed in development environments.
    # MAJOR VERSION: Incompatible changes with the plugins
    # MINOR VERSION: Compatible changes, new plugins, etc
    PLUGIN_REQUIRED_VERSION = '2.1'

    def __init__(self, url, user, pw, originator="os-xenapi", timeout=10,
                 concurrent=5):
        """Initialize session for connection with XenServer/Xen Cloud Platform

        :param url: URL for connection to XenServer/Xen Cloud Platform
        :param user: Username for connection to XenServer/Xen Cloud Platform
        :param pw: Password for connection to XenServer/Xen Cloud Platform
        :param originator: Specify the caller for this API
        :param timeout: Timeout in seconds for XenAPI login
        :param concurrent: Maximum concurrent XenAPI connections
        """
        self.XenAPI = XenAPI
        self.originator = originator
        self.timeout = timeout
        self.concurrent = concurrent
        self._sessions = queue.Queue()
        self.host_checked = False
        self.is_slave = False
        self.ip = self._get_ip_from_url(url)
        self.url = url
        self.master_url = self._create_first_session(url, user, pw)
        self._populate_session_pool(self.master_url, user, pw)
        self.host_ref = self._get_host_ref(self.ip)
        self.host_uuid = self._get_host_uuid()
        self.product_version, self.product_brand = \
            self._get_product_version_and_brand()
        self._verify_plugin_version()
        self.platform_version = self._get_platform_version()
        self._cached_xsm_sr_relaxed = None
        apply_session_helpers(self)

    def _login_with_password(self, user, pw, session):
        login_exception = XenAPI.Failure(_("Unable to log in to XenAPI "
                                           "(is the Dom0 disk full?)"))
        with timeout.Timeout(self.timeout, login_exception):
            session.login_with_password(user, pw,
                                        self.PLUGIN_REQUIRED_VERSION,
                                        self.originator)

    def _verify_plugin_version(self):
        requested_version = self.PLUGIN_REQUIRED_VERSION
        current_version = self.call_plugin_serialized(
            'dom0_plugin_version.py', 'get_version')
        if not versionutils.is_compatible(requested_version,
                                          current_version):
            raise XenAPI.Failure(
                _("Plugin version mismatch (Expected %(exp)s, got %(got)s)")
                % {'exp': requested_version, 'got': current_version})

    def _create_first_session(self, url, user, pw):
        try:
            session = self._create_session_and_login(url, user, pw)
        except XenAPI.Failure as e:
            # if user and pw of the master are different, we're doomed!
            if e.details[0] == 'HOST_IS_SLAVE':
                master = e.details[1]
                url = self.swap_xapi_host(url, master)
                session = self._create_session_and_login(url, user, pw)
                self.is_slave = True
            else:
                raise
        self._sessions.put(session)
        return url

    def _get_ip_from_url(self, url):
        url_parts = urllib.parse.urlparse(url)
        return socket.gethostbyname(url_parts.netloc)

    def swap_xapi_host(self, url, host_addr):
        """Replace the XenServer address present in 'url' with 'host_addr'."""
        temp_url = urllib.parse.urlparse(url)
        return url.replace(temp_url.hostname, '%s' % host_addr)

    def _populate_session_pool(self, url, user, pw):
        for i in range(self.concurrent - 1):
            session = self._create_session_and_login(url, user, pw)
            self._sessions.put(session)

    def _get_host_uuid(self):
        with self._get_session() as session:
            return session.xenapi.host.get_uuid(self.host_ref)

    def _get_product_version_and_brand(self):
        """Return tuple of (major, minor, rev)

        This tuple is for host version and product brand.
        """
        software_version = self._get_software_version()
        product_version_str = software_version.get('product_version')
        # Product version is only set in some cases (e.g. XCP, XenServer) and
        # not in others (e.g. xenserver-core, XAPI-XCP).
        # In these cases, the platform version is the best number to use.
        if product_version_str is None:
            product_version_str = software_version.get('platform_version',
                                                       '0.0.0')
        product_brand = software_version.get('product_brand')
        product_version = \
            versionutils.convert_version_to_tuple(product_version_str)
        return product_version, product_brand

    def _get_platform_version(self):
        """Return a tuple of (major, minor, rev) for the host version"""
        software_version = self._get_software_version()
        platform_version_str = software_version.get('platform_version',
                                                    '0.0.0')
        platform_version = versionutils.convert_version_to_tuple(
            platform_version_str)
        return platform_version

    def _get_software_version(self):
        return self.call_xenapi('host.get_software_version', self.host_ref)

    def get_session_id(self):
        """Return a string session_id.  Used for vnc consoles."""
        with self._get_session() as session:
            return str(session._session)

    @contextlib.contextmanager
    def _get_session(self):
        """Return exclusive session for scope of with statement."""
        session = self._sessions.get()
        try:
            yield session
        finally:
            self._sessions.put(session)

    def _get_host_ref(self, host_ip):
        with self._get_session() as session:
            if self.is_slave:
                rec_dict = session.xenapi.PIF.get_all_records_where(
                    'field "IP"="%s"' % host_ip)
                if not rec_dict:
                    raise XenAPI.Failure(
                        ("ERROR, couldn't find host ref with ip "
                         "%(slave_ip)s ") % {'slave_ip': host_ip})
                if len(rec_dict) > 1:
                    raise XenAPI.Failure(
                        ("ERROR, find more than one host ref with ip "
                         "%(slave_ip)s ") % {'slave_ip': host_ip})
                value = list(rec_dict.values())[0]
                return value['host']
            else:
                return session.xenapi.session.get_this_host(session.handle)

    def call_xenapi(self, method, *args):
        """Call the specified XenAPI method on a background thread."""
        with self._get_session() as session:
            return session.xenapi_request(method, args)

    def call_plugin(self, plugin, fn, args):
        """Call host.call_plugin on a background thread."""
        # NOTE(armando): pass the host uuid along with the args so that
        # the plugin gets executed on the right host when using XS pools
        args['host_uuid'] = self.host_uuid

        if not plugin.endswith('.py'):
            plugin = '%s.py' % plugin

        with self._get_session() as session:
            return self._unwrap_plugin_exceptions(
                session.xenapi.host.call_plugin,
                self.host_ref, plugin, fn, args)

    def call_plugin_serialized(self, plugin, fn, *args, **kwargs):
        params = {'params': pickle.dumps(dict(args=args, kwargs=kwargs))}
        rv = self.call_plugin(plugin, fn, params)
        return pickle.loads(rv)

    def call_plugin_serialized_with_retry(self, plugin, fn, num_retries,
                                          callback, retry_cb=None, *args,
                                          **kwargs):
        """Allows a plugin to raise RetryableError so we can try again."""
        attempts = num_retries + 1
        sleep_time = 0.5
        for attempt in range(1, attempts + 1):
            try:
                if attempt > 1:
                    time.sleep(sleep_time)
                    sleep_time = min(2 * sleep_time, 15)

                callback_result = None
                if callback:
                    callback_result = callback(kwargs)

                msg = ('%(plugin)s.%(fn)s attempt %(attempt)d/%(attempts)d, '
                       'callback_result: %(callback_result)s')
                LOG.debug(msg,
                          {'plugin': plugin, 'fn': fn, 'attempt': attempt,
                           'attempts': attempts,
                           'callback_result': callback_result})
                return self.call_plugin_serialized(plugin, fn, *args,
                                                   **kwargs)
            except XenAPI.Failure as exc:
                if self._is_retryable_exception(exc, fn):
                    LOG.warning(_LW('%(plugin)s.%(fn)s failed. '
                                    'Retrying call.'),
                                {'plugin': plugin, 'fn': fn})
                    if retry_cb:
                        retry_cb(exc=exc)
                else:
                    raise
            except socket.error as exc:
                if exc.errno == errno.ECONNRESET:
                    LOG.warning(_LW('Lost connection to XenAPI during call to '
                                    '%(plugin)s.%(fn)s.  Retrying call.'),
                                {'plugin': plugin, 'fn': fn})
                    if retry_cb:
                        retry_cb(exc=exc)
                else:
                    raise

        raise exception.PluginRetriesExceeded(num_retries=num_retries)

    def _is_retryable_exception(self, exc, fn):
        _type, method, error = exc.details[:3]
        if error == 'RetryableError':
            LOG.debug("RetryableError, so retrying %(fn)s", {'fn': fn},
                      exc_info=True)
            return True
        elif "signal" in method:
            LOG.debug("Error due to a signal, retrying %(fn)s", {'fn': fn},
                      exc_info=True)
            return True
        else:
            return False

    def _create_session(self, url):
        """Stubout point. This can be replaced with a mock session."""
        self.is_local_connection = url == "unix://local"
        if self.is_local_connection:
            return XenAPI.xapi_local()
        return XenAPI.Session(url)

    def _create_session_and_login(self, url, user, pw):
        session = self._create_session(url)
        self._login_with_password(user, pw, session)
        return session

    def _unwrap_plugin_exceptions(self, func, *args, **kwargs):
        """Parse exception details."""
        try:
            return func(*args, **kwargs)
        except XenAPI.Failure as exc:
            LOG.debug("Got exception: %s", exc)
            if (len(exc.details) == 4 and
                    exc.details[0] == 'XENAPI_PLUGIN_EXCEPTION' and
                    exc.details[2] == 'Failure'):
                params = None
                try:
                    params = ast.literal_eval(exc.details[3])
                except Exception:
                    raise exc
                raise XenAPI.Failure(params)
            else:
                raise
        except xmlrpclib.ProtocolError as exc:
            LOG.debug("Got exception: %s", exc)
            raise

    def get_rec(self, record_type, ref):
        try:
            return self.call_xenapi('%s.get_record' % record_type, ref)
        except XenAPI.Failure as e:
            if e.details[0] != 'HANDLE_INVALID':
                raise
        return None

    def get_all_refs_and_recs(self, record_type):
        """Retrieve all refs and recs for a Xen record type.

        Handles race-conditions where the record may be deleted between
        the `get_all` call and the `get_record` call.
        """
        return self.call_xenapi('%s.get_all_records' % record_type).items()

    @contextlib.contextmanager
    def custom_task(self, label, desc=''):
        """Return exclusive session for scope of with statement."""
        name = '%s-%s' % (self.originator, label)
        task_ref = self.call_xenapi("task.create", name, desc)
        try:
            LOG.debug('Created task %s with ref %s', name, task_ref)
            yield task_ref
        finally:
            self.call_xenapi("task.destroy", task_ref)
            LOG.debug('Destroyed task ref %s', task_ref)

    @contextlib.contextmanager
    def http_connection(self):
        conn = None

        xs_url = urllib.parse.urlparse(self.url)
        LOG.debug("Creating http(s) connection to %s", self.url)
        if xs_url.scheme == 'http':
            conn = http_client.HTTPConnection(xs_url.netloc)
        elif xs_url.scheme == 'https':
            conn = http_client.HTTPSConnection(xs_url.netloc)

        conn.connect()
        try:
            yield conn
        finally:
            conn.close()

    def is_xsm_sr_check_relaxed(self):
        if self._cached_xsm_sr_relaxed is None:
            config_value = self.call_plugin('config_file', 'get_val',
                                            dict(key='relax-xsm-sr-check'))
            if not config_value:
                version_str = '.'.join(str(v) for v in self.platform_version)
                if versionutils.is_compatible('2.1.0', version_str,
                                              same_major=False):
                    self._cached_xsm_sr_relaxed = True
                else:
                    self._cached_xsm_sr_relaxed = False
            else:
                self._cached_xsm_sr_relaxed = config_value.lower() == 'true'

        return self._cached_xsm_sr_relaxed
os-xenapi-0.3.1/os_xenapi/client/host_glance.py
# Copyright 2017 Citrix Systems
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the
# License for the specific language governing permissions and limitations
# under the License.

from os_xenapi.client import exception
from os_xenapi.client import XenAPI


def download_vhd(session, num_retries, callback, retry_cb, image_id,
                 sr_path, extra_headers, uuid_stack=''):
    args = {'image_id': image_id,
            'sr_path': sr_path,
            'extra_headers': extra_headers,
            'uuid_stack': uuid_stack}
    return session.call_plugin_serialized_with_retry(
        'glance.py', 'download_vhd2', num_retries, callback, retry_cb,
        **args)


def upload_vhd(session, num_retries, callback, retry_cb, image_id, sr_path,
               extra_headers, vdi_uuids='', properties={}):
    args = {'image_id': image_id,
            'sr_path': sr_path,
            'extra_headers': extra_headers,
            'vdi_uuids': vdi_uuids,
            'properties': properties}
    try:
        session.call_plugin_serialized_with_retry(
            'glance.py', 'upload_vhd2', num_retries, callback, retry_cb,
            **args)
    except XenAPI.Failure as exc:
        if (len(exc.details) == 4 and
                exc.details[3] == 'ImageNotFound'):
            raise exception.PluginImageNotFound(image_id=image_id)
        else:
            raise
os-xenapi-0.3.1/os_xenapi/client/exception.py
# Copyright 2016 Citrix Systems
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from os_xenapi.client.i18n import _


class OsXenApiException(Exception):
    """Base OsXenapi Exception

    To correctly use this class, inherit from it and define a 'msg_fmt'
    property. That msg_fmt will get printf'd with the keyword arguments
    provided to the constructor.
    """
    msg_fmt = _("An unknown exception occurred.")
    code = 500

    def __init__(self, message=None, **kwargs):
        self.kwargs = kwargs
        if 'code' not in self.kwargs:
            try:
                self.kwargs['code'] = self.code
            except AttributeError:
                pass

        if not message:
            message = self.msg_fmt % kwargs

        self.message = message
        super(OsXenApiException, self).__init__(message)

    def format_message(self):
        # NOTE(mrodden): use the first argument to the python Exception object
        # which should be our full NovaException message, (see __init__)
        return self.args[0]


class PluginRetriesExceeded(OsXenApiException):
    msg_fmt = _("Number of retries to plugin (%(num_retries)d) exceeded.")


class PluginImageNotFound(OsXenApiException):
    msg_fmt = _("Image (%(image_id)s) not found.")


class SessionLoginTimeout(OsXenApiException):
    msg_fmt = _("Unable to log in to XenAPI (is the Dom0 disk full?)")


class InvalidImage(OsXenApiException):
    msg_fmt = _("Image is invalid: details is - (%(details)s)")


class HostConnectionFailure(OsXenApiException):
    msg_fmt = _("Failed connecting to host %(host_netloc)s")


class NotFound(OsXenApiException):
    msg_fmt = _("Not found error: %s")


class VdiImportFailure(OsXenApiException):
    msg_fmt = _("Failed importing VDI from VHD stream: vdi_ref=(%(vdi_ref)s)")


class VhdDiskTypeNotSupported(OsXenApiException):
    msg_fmt = _("Not supported VHD disk type: type=(%(disk_type)s)")
os-xenapi-0.3.1/os_xenapi/client/vm_management.py
# Copyright 2017 Citrix Systems
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


def get_console_log(session, dom_id):
    return session.call_plugin('console.py', 'get_console_log',
                               {'dom_id': dom_id})


def transfer_vhd(session, instance_uuid, host, vdi_uuid, sr_path, seq_num):
    session.call_plugin_serialized('migration.py', 'transfer_vhd',
                                   instance_uuid, host, vdi_uuid, sr_path,
                                   seq_num)


def receive_vhd(session, instance_uuid, sr_path, uuid_stack):
    return session.call_plugin_serialized('migration.py', 'move_vhds_into_sr',
                                          instance_uuid, sr_path, uuid_stack)
os-xenapi-0.3.1/os_xenapi/client/host_xenstore.py
# Copyright 2017 Citrix Systems
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
def read_record(session, dom_id, path, ignore_missing_path=True):
    args = {'dom_id': dom_id, 'path': path,
            'ignore_missing_path': 'True' if ignore_missing_path else 'False'}
    return session.call_plugin('xenstore.py', 'read_record', args)


def delete_record(session, dom_id, path):
    args = {'dom_id': dom_id, 'path': path}
    return session.call_plugin('xenstore.py', 'delete_record', args)


def write_record(session, dom_id, path, value):
    args = {'dom_id': dom_id, 'path': path, 'value': value}
    return session.call_plugin('xenstore.py', 'write_record', args)
os-xenapi-0.3.1/os_xenapi/client/image/
os-xenapi-0.3.1/os_xenapi/client/image/__init__.py
# Copyright 2017 Citrix Systems
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from os_xenapi.client.image import vdi_handler


def stream_to_vdis(context, session, instance, host_url, data):
    handler = vdi_handler.ImageStreamToVDIs(context, session, instance,
                                            host_url, data)
    handler.start()
    return handler.vdis


def stream_from_vdis(context, session, instance, host_url, vdi_uuids):
    handler = vdi_handler.GenerateImageStream(context, session, instance,
                                              host_url, vdi_uuids)
    return handler.get_image_data()
os-xenapi-0.3.1/os_xenapi/client/image/vhd_utils.py
# Copyright 2017 Citrix Systems
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import logging
import struct

from os_xenapi.client import exception as xenapi_except


LOG = logging.getLogger(__name__)

FMT_TO_LEN = {
    '!B': 1,
    '!H': 2,
    '!I': 4,
    '!Q': 8,
}

DISK_TYPE = {'None': 0,
             'Reserved_1': 1,
             'Fixed hard disk': 2,
             'Dynamic hard disk': 3,
             'Differencing hard disk': 4,
             'Reserved_5': 5,
             'Reserved_6': 6,
             }


class VHDFileParser(object):
    # This class supplies utils to parse different parts of a VHD file.
    # It follows the following VHD spec:
    # https://www.microsoft.com/en-us/download/confirmation.aspx?id=23850
    def __init__(self, file_obj):
        self.src_file = file_obj
        self.cached_buff = b''

    def get_disk_type_name(self, type_val):
        for type_name in DISK_TYPE:
            if (DISK_TYPE[type_name] == type_val):
                return type_name

    def cached_read(self, read_size):
        # the data will be cached in the buffer.
data = self.src_file.read(read_size) if data: self.cached_buff += data return data def parse_vhd_footer(self): footer_raw_data = self.cached_read(VHDFooter.VHD_HDF_SIZE) return VHDFooter(footer_raw_data) class VHDDynDiskParser(VHDFileParser): """This class represents a Dynamic Disk file: The Dynamic Hard Disk Image format is as below: +-----------------------------------------------+ |Mirror Image of Hard drive footer (512 bytes) | +-----------------------------------------------+ |Dynamic Disk Header (1024 bytes) | +-----------------------------------------------+ | padding bytes | |(Table Offset in Dynamic Disk Header determines| | where the BAT starts from) | +-----------------------------------------------+ |BAT (Block Allocation Table) | +-----------------------------------------------+ |Padding bytes to ensure the bitmap+Data blocks | |start from 512-byte sector boundary. | +-----------------------------------------------+ | bitmap 1 (512 bytes) | | Data Block 1 | +-----------------------------------------------+ | bitmap 2 (512 bytes) | | Data Block 2 | +-----------------------------------------------+ | ...
| +-----------------------------------------------+ | bitmap n (512 bytes) | | Data Block n | +-----------------------------------------------+ | Hard drive footer (512 bytes) | +-----------------------------------------------+ """ SIZE_OF_BITMAP = 512 def __init__(self, file_obj): self.src_file = file_obj self.cached_buff = b'' self.footer = self.parse_vhd_footer() dyn_disk_type = DISK_TYPE['Dynamic hard disk'] if self.footer.disk_type != dyn_disk_type: disk_type_name = self.get_disk_type_name( self.footer.disk_type) raise xenapi_except.VhdDiskTypeNotSupported( disk_type=disk_type_name) self.DynDiskHdr = self._get_dynamic_disk_header() self.BatPaddingData = self._get_bat_padding() self.Bat = self._get_block_allocation_table() def _get_dynamic_disk_header(self): ddh_raw_data = self.cached_read(VHDDynDiskHdr.VHD_DDH_SIZE) return VHDDynDiskHdr(ddh_raw_data) def _get_bat_padding(self): PaddingData = None len_padding = (self.DynDiskHdr.bat_offset - VHDFooter.VHD_HDF_SIZE - VHDDynDiskHdr.VHD_DDH_SIZE) if len_padding > 0: PaddingData = self.cached_read(len_padding) return PaddingData def _get_block_allocation_table(self): bat_ent_size = FMT_TO_LEN[VHDBlockAllocTable.FMT_BAT_ENT] bat_size = bat_ent_size * self.DynDiskHdr.bat_max_entries raw_data = self.cached_read(bat_size) return VHDBlockAllocTable(raw_data) def get_vhd_file_size(self): # It calculates the VHD file's size based on the sections which # precede the data blocks. This is useful when the VHD file's # data is passed via streaming: the file size can be calculated # before all of the data has arrived. Note that it only works # when the data blocks are placed contiguously in the VHD file # (no holes); the VHD files exported by invoking XenAPI meet # this prerequisite. # The "bitmap+Data blocks" section starts after the Block # Allocation Table, aligned to a 512-byte boundary.
bat_offset = self.DynDiskHdr.bat_offset bat_size = len(self.Bat.raw_data) data_offset = bat_offset + bat_size if data_offset % 512 != 0: data_offset = (data_offset // 512 + 1) * 512 bitmap_size = VHDDynDiskParser.SIZE_OF_BITMAP block_size = self.DynDiskHdr.block_size valid_blocks = self.Bat.num_valid_bat_entries data_size = (bitmap_size + block_size) * valid_blocks file_size = data_offset + data_size + VHDFooter.VHD_HDF_SIZE LOG.debug("Calculated file_size = {}: bat_offset = {}; " "bat_size = {}; data_offset = {}; data_size = {}; " "footer_size = {}".format(file_size, bat_offset, bat_size, data_offset, data_size, VHDFooter.VHD_HDF_SIZE)) return file_size class VHDFooter(object): # VHD Hard Disk Footer VHD_HDF_SIZE = 512 HDF_LAYOUT = { 'current_size': { 'offset': 48, 'format': '!Q'}, 'disk_type': { 'offset': 60, 'format': '!I'}, } def __init__(self, raw_data): self.raw_data = raw_data self._parse_data() def _parse_data(self): hdf_layout = VHDFooter.HDF_LAYOUT for field in hdf_layout: format = hdf_layout[field]['format'] pos_start = hdf_layout[field]['offset'] pos_end = pos_start + FMT_TO_LEN[format] (value, ) = struct.unpack(format, self.raw_data[pos_start: pos_end]) setattr(self, field, value) class VHDDynDiskHdr(object): """VHD Dynamic Disk Header: The Dynamic Disk Header (DDH) layout is as below: |**fields** | **size**| |Cookie | 8 | |Data Offset | 8 | |*Table Offset* | 8 | |Header Version | 4 | |*Max Table Entries* | 4 | |*Block Size* | 4 | |Checksum | 4 | |Parent Unique ID | 16 | |Parent Time Stamp | 4 | |Reserved | 4 | |Parent Unicode Name | 512 | |Parent Locator Entry 1 | 24 | |Parent Locator Entry 2 | 24 | |Parent Locator Entry 3 | 24 | |Parent Locator Entry 4 | 24 | |Parent Locator Entry 5 | 24 | |Parent Locator Entry 6 | 24 | |Parent Locator Entry 7 | 24 | |Parent Locator Entry 8 | 24 | |Reserved | 256 | """ VHD_DDH_SIZE = 1024 DDH_LAYOUT = { 'bat_offset': { 'offset': 16, 'format': '!Q'}, 'bat_max_entries': {'offset': 28, 'format': '!I'}, 'block_size': {
'offset': 32, 'format': '!I'}, } def __init__(self, raw_data): self.raw_data = raw_data self._parse_data() def _parse_data(self): ddh_layout = VHDDynDiskHdr.DDH_LAYOUT for field in ddh_layout: format = ddh_layout[field]['format'] pos_start = ddh_layout[field]['offset'] pos_end = pos_start + FMT_TO_LEN[format] (value,) = struct.unpack(format, self.raw_data[pos_start: pos_end]) setattr(self, field, value) class VHDBlockAllocTable(object): # VHD Block Allocation Table FMT_BAT_ENT = '!I' def __init__(self, raw_data): self.raw_data = raw_data self._parse_data() def _parse_data(self): self.num_valid_bat_entries = self.get_valid_bat_entries() def get_valid_bat_entries(self): # Calculate the number of valid BAT entries. # It will go through all BAT entries. Those entries whose value is not # the default value - 0xFFFFFFFF will be treated as valid. num_of_valid_bat_ent = 0 size_of_bat_entry = FMT_TO_LEN[VHDBlockAllocTable.FMT_BAT_ENT] for i in range(0, len(self.raw_data), size_of_bat_entry): (value, ) = struct.unpack(VHDBlockAllocTable.FMT_BAT_ENT, self.raw_data[i: i + size_of_bat_entry]) if value != 0xFFFFFFFF: num_of_valid_bat_ent += 1 return num_of_valid_bat_ent os-xenapi-0.3.1/os_xenapi/client/image/vdi_handler.py0000664000175000017500000002534313160424533023732 0ustar jenkinsjenkins00000000000000# Copyright 2017 Citrix Systems # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
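The fixed-offset field parsing used by VHDFooter and VHDDynDiskHdr above can be exercised against a synthetic buffer. The sketch below (illustrative only, not part of the package) packs current_size and disk_type into a zeroed 512-byte footer at the offsets given in HDF_LAYOUT, then unpacks them the same way _parse_data does; the packed values are made up:

```python
# Sketch: exercise the fixed-offset footer parsing from vhd_utils.py
# against a synthetic 512-byte buffer. Offsets and struct formats
# mirror VHDFooter.HDF_LAYOUT; the values written in are invented.
import struct

raw = bytearray(512)
struct.pack_into('!Q', raw, 48, 40 * 1024 * 1024)  # current_size: 40 MiB
struct.pack_into('!I', raw, 60, 3)                 # disk_type: dynamic

# Unpack exactly as VHDFooter._parse_data does: slice at the field's
# offset for the format's length, then struct.unpack.
(current_size,) = struct.unpack('!Q', bytes(raw[48:48 + 8]))
(disk_type,) = struct.unpack('!I', bytes(raw[60:60 + 4]))
print(current_size, disk_type)  # 41943040 3
```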
import eventlet import logging from six.moves import http_client as httplib import six.moves.urllib.parse as urlparse import tarfile from os_xenapi.client import exception from os_xenapi.client.image import vhd_utils from os_xenapi.client import utils LOG = logging.getLogger(__name__) CHUNK_SIZE = 4 * 1024 * 1024 class ImageStreamToVDIs(object): def __init__(self, context, session, instance, host_url, image_stream_in): self.context = context self.session = session self.instance = instance self.host_url = urlparse.urlparse(host_url) self.image_stream = image_stream_in self.task_ref = None self.vdis = {} def _clean(self): if self.task_ref: self.session.task.destroy(self.task_ref) def start(self): label = 'VDI_IMPORT_for_' + self.instance['name'] desc = 'Importing VDI for instance: %s' % self.instance['name'] self.task_ref = self.session.task.create(label, desc) try: with tarfile.open(mode="r|gz", fileobj=self.image_stream) as tar: for vhd in tar: file_size = vhd.size LOG.debug("file_name:file_size is %(n)s:%(s)d", {'n': vhd.name, 's': vhd.size}) vhd_file = tar.extractfile(vhd) vhd_file_parser = vhd_utils.VHDFileParser(vhd_file) vhd_footer = vhd_file_parser.parse_vhd_footer() virtual_size = vhd_footer.current_size sr_ref, vdi_ref = self._createVDI(self.session, self.instance, virtual_size) self._vhd_stream_to_vdi(vhd_file_parser, vdi_ref, file_size) vdi_uuid = self.session.VDI.get_uuid(vdi_ref) if 'root' in self.vdis.keys(): # we only support single vdi. If 'root' already exists # in the dict, should raise exception. msg = "Only support single VDI; but there are " + \ "multiple VDIs in the image." 
raise exception.InvalidImage(details=msg) self.vdis['root'] = dict(uuid=vdi_uuid) finally: self._clean() def _createVDI(self, session, instance, virtual_size): sr_ref = utils.get_default_sr(session) vdi_ref = utils.create_vdi(session, sr_ref, instance, instance['name'], 'root', virtual_size) vdi_uuid = session.VDI.get_uuid(vdi_ref) LOG.debug("Created a new VDI: uuid=%s" % vdi_uuid) return sr_ref, vdi_ref def _vhd_stream_to_vdi(self, vhd_file_parser, vdi_ref, file_size): headers = {'Content-Type': 'application/octet-stream', 'Content-Length': '%s' % file_size} if self.host_url.scheme == 'http': conn = httplib.HTTPConnection(self.host_url.netloc) elif self.host_url.scheme == 'https': conn = httplib.HTTPSConnection(self.host_url.netloc) vdi_import_path = utils.get_vdi_import_path( self.session, self.task_ref, vdi_ref) try: conn.connect() except Exception: LOG.error('Failed connecting to host: %s', self.host_url.netloc) raise exception.HostConnectionFailure( host_netloc=self.host_url.netloc) try: conn.request('PUT', vdi_import_path, headers=headers) # Send the data already processed by vhd file parser firstly; # then send the remaining data from the stream. 
conn.send(vhd_file_parser.cached_buff) remain_size = file_size - len(vhd_file_parser.cached_buff) file_obj = vhd_file_parser.src_file while remain_size >= CHUNK_SIZE: chunk = file_obj.read(CHUNK_SIZE) remain_size -= CHUNK_SIZE conn.send(chunk) if remain_size != 0: chunk = file_obj.read(remain_size) conn.send(chunk) except Exception: LOG.error('Failed importing VDI from VHD stream - vdi_ref:%s', vdi_ref) raise exception.VdiImportFailure(vdi_ref=vdi_ref) finally: resp = conn.getresponse() LOG.debug("Connection response status/reason is " "%(status)s:%(reason)s", {'status': resp.status, 'reason': resp.reason}) conn.close() class GenerateImageStream(object): def __init__(self, context, session, instance, host_url, vdi_uuids): self.context = context self.session = session self.instance = instance self.host_url = host_url self.vdi_uuids = vdi_uuids def get_image_data(self): """This function will: 1). export VDI as VHD stream; 2). make gzipped tarball from the VHD stream; 3). read from the tarball stream and return the iterable data.
""" tarpipe_out, tarpipe_in = utils.create_pipe() pool = eventlet.GreenPool() pool.spawn(self.start_image_stream_generator, tarpipe_in) try: while True: data = tarpipe_out.read(CHUNK_SIZE) if not data: break yield data except Exception: LOG.debug("Failed to read chunks from the tarfile " "stream.") raise finally: tarpipe_out.close() pool.waitall() def start_image_stream_generator(self, tarpipe_in): tar_generator = VdisToTarStream( self.context, self.session, self.instance, self.host_url, self.vdi_uuids, tarpipe_in) try: tar_generator.start() finally: tarpipe_in.close() class VdisToTarStream(object): def __init__(self, context, session, instance, host_url, vdi_uuids, tarpipe_in): self.context = context self.session = session self.instance = instance self.host_url = host_url self.vdi_uuids = vdi_uuids self.tarpipe_in = tarpipe_in self.conn = None self.task_ref = None def start(self): # Start thread to generate tgz and write tgz data into tarpipe_in. with tarfile.open(fileobj=self.tarpipe_in, mode='w|gz') as tar_file: # only need export the leaf vdi. vdi_uuid = self.vdi_uuids[0] vdi_ref = self.session.VDI.get_by_uuid(vdi_uuid) vhd_stream = self._connect_request(vdi_ref) tar_info = tarfile.TarInfo('0.vhd') try: # the VHD must be dynamical hard disk, otherwise it will raise # VhdDiskTypeNotSupported exception when parsing VDH file. 
vhd_DynDisk = vhd_utils.VHDDynDiskParser(vhd_stream) tar_info.size = vhd_DynDisk.get_vhd_file_size() LOG.debug("VHD size for tarfile is %d" % tar_info.size) vhdpipe_out, vhdpipe_in = utils.create_pipe() pool = eventlet.GreenPool() pool.spawn(self.convert_vhd_to_tar, vhdpipe_out, tar_file, tar_info) try: self._vhd_to_pipe(vhd_DynDisk, vhdpipe_in) finally: vhdpipe_in.close() pool.waitall() finally: self._clean() def convert_vhd_to_tar(self, vhdpipe_out, tar_file, tar_info): tarGenerator = AddVhdToTar(tar_file, tar_info, vhdpipe_out) try: tarGenerator.start() finally: vhdpipe_out.close() def _connect_request(self, vdi_ref): # request connection to xapi url service for VDI export try: # create task for VDI export label = 'VDI_EXPORT_for_' + self.instance['name'] desc = 'Exporting VDI for instance: %s' % self.instance['name'] self.task_ref = self.session.task.create(label, desc) LOG.debug("task_ref is %s" % self.task_ref) # connect to XS xs_url = urlparse.urlparse(self.host_url) if xs_url.scheme == 'http': conn = httplib.HTTPConnection(xs_url.netloc) LOG.debug("using http") elif xs_url.scheme == 'https': conn = httplib.HTTPSConnection(xs_url.netloc) LOG.debug("using https") vdi_export_path = utils.get_vdi_export_path( self.session, self.task_ref, vdi_ref) conn.request('GET', vdi_export_path) conn_resp = conn.getresponse() except Exception: LOG.debug('request connect for vdi export failed') raise return conn_resp def _vhd_to_pipe(self, vhd_dynDisk, vhdpipe_in): # Firstly write the data already parsed by vhd_dynDisk obj; # then write all of the remaining data to the pipe also. 
vhdpipe_in.write(vhd_dynDisk.cached_buff) remain_data = vhd_dynDisk.src_file while True: data = remain_data.read(CHUNK_SIZE) if not data: break try: vhdpipe_in.write(data) except Exception: LOG.debug("Failed when writing data to VHD stream.") raise def _clean(self): if self.conn: self.conn.close() if self.task_ref: self.session.task.destroy(self.task_ref) class AddVhdToTar(object): def __init__(self, tar_file, tar_info, vhdpipe_out): self.tar_file = tar_file self.tar_info = tar_info self.stream = vhdpipe_out def start(self): self._add_stream_to_tar() def _add_stream_to_tar(self): try: LOG.debug('self.tar_info.size=%d' % self.tar_info.size) self.tar_file.addfile(self.tar_info, fileobj=self.stream) LOG.debug('added file %s' % self.tar_info.name) except IOError: LOG.debug('IOError when streaming vhd to tarball') raise os-xenapi-0.3.1/os_xenapi/tests/0000775000175000017500000000000013160424745017701 5ustar jenkinsjenkins00000000000000os-xenapi-0.3.1/os_xenapi/tests/__init__.py0000664000175000017500000000000013160424533021773 0ustar jenkinsjenkins00000000000000os-xenapi-0.3.1/os_xenapi/tests/client/0000775000175000017500000000000013160424745021157 5ustar jenkinsjenkins00000000000000os-xenapi-0.3.1/os_xenapi/tests/client/__init__.py0000664000175000017500000000000013160424533023251 0ustar jenkinsjenkins00000000000000os-xenapi-0.3.1/os_xenapi/tests/client/test_utils.py0000664000175000017500000001121313160424533023721 0ustar jenkinsjenkins00000000000000# Copyright 2017 Citrix Systems # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import mock from eventlet import greenio import os from os_xenapi.client import exception from os_xenapi.client import utils from os_xenapi.tests import base class UtilsTestCase(base.TestCase): def setUp(self): super(UtilsTestCase, self).setUp() self.session = mock.Mock() def test_get_default_sr(self): FAKE_POOL_REF = 'fake-pool-ref' FAKE_SR_REF = 'fake-sr-ref' pool = self.session.pool pool.get_all.return_value = [FAKE_POOL_REF] pool.get_default_SR.return_value = FAKE_SR_REF default_sr_ref = utils.get_default_sr(self.session) pool.get_all.assert_called_once_with() pool.get_default_SR.assert_called_once_with(FAKE_POOL_REF) self.assertEqual(default_sr_ref, FAKE_SR_REF) def test_get_default_sr_except(self): FAKE_POOL_REF = 'fake-pool-ref' FAKE_SR_REF = None mock_pool = self.session.pool mock_pool.get_all.return_value = [FAKE_POOL_REF] mock_pool.get_default_SR.return_value = FAKE_SR_REF self.assertRaises(exception.NotFound, utils.get_default_sr, self.session) def test_create_vdi(self): mock_create = self.session.VDI.create mock_create.return_value = 'fake-vdi-ref' fake_instance = {'uuid': 'fake-uuid'} expect_other_conf = {'nova_disk_type': 'fake-disk-type', 'nova_instance_uuid': 'fake-uuid'} fake_virtual_size = 1 create_param = { 'name_label': 'fake-name-label', 'name_description': '', 'SR': 'fake-sr-ref', 'virtual_size': str(fake_virtual_size), 'type': 'User', 'sharable': False, 'read_only': False, 'xenstore_data': {}, 'other_config': expect_other_conf, 'sm_config': {}, 'tags': [], } vdi_ref = utils.create_vdi(self.session, 'fake-sr-ref', fake_instance, 'fake-name-label', 'fake-disk-type', fake_virtual_size) self.session.VDI.create.assert_called_once_with(create_param) self.assertEqual(vdi_ref, 'fake-vdi-ref') @mock.patch.object(os, 'pipe') @mock.patch.object(greenio, 'GreenPipe') def test_create_pipe(self, mock_green_pipe, mock_pipe): mock_pipe.return_value = 
('fake-rpipe', 'fake-wpipe') mock_green_pipe.side_effect = ['fake-rfile', 'fake-wfile'] rfile, wfile = utils.create_pipe() mock_pipe.assert_called_once_with() real_calls = mock_green_pipe.call_args_list expect_calls = [mock.call('fake-rpipe', 'rb', 0), mock.call('fake-wpipe', 'wb', 0)] self.assertEqual(expect_calls, real_calls) self.assertEqual('fake-rfile', rfile) self.assertEqual('fake-wfile', wfile) def test_get_vdi_import_path(self): self.session.get_session_id.return_value = 'fake-id' task_ref = 'fake-task-ref' vdi_ref = 'fake-vdi-ref' expected_path = '/import_raw_vdi?session_id=fake-id&' expected_path += 'task_id=fake-task-ref&vdi=fake-vdi-ref&format=vhd' export_path = utils.get_vdi_import_path(self.session, task_ref, vdi_ref) self.session.get_session_id.assert_called_once_with() self.assertEqual(expected_path, export_path) def test_get_vdi_export_path(self): self.session.get_session_id.return_value = 'fake-id' task_ref = 'fake-task-ref' vdi_ref = 'fake-vdi-ref' expected_path = '/export_raw_vdi?session_id=fake-id&' expected_path += 'task_id=fake-task-ref&vdi=fake-vdi-ref&format=vhd' export_path = utils.get_vdi_export_path(self.session, task_ref, vdi_ref) self.session.get_session_id.assert_called_once_with() self.assertEqual(expected_path, export_path) os-xenapi-0.3.1/os_xenapi/tests/client/test_session.py0000664000175000017500000006167413160424533024264 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import errno import os import socket import mock from os_xenapi.client import exception from os_xenapi.client import session from os_xenapi.client import XenAPI from os_xenapi.tests import base class SessionTestCase(base.TestCase): @mock.patch.object(session.XenAPISession, '_verify_plugin_version') @mock.patch.object(session.XenAPISession, '_get_platform_version') @mock.patch.object(session.XenAPISession, '_create_session') @mock.patch.object(session.XenAPISession, '_get_product_version_and_brand') @mock.patch.object(socket, 'gethostbyname') def test_session_nova_originator(self, mock_gethostbyname, mock_version_and_brand, mock_create_session, mock_platform_version, mock_verify_plugin_version): concurrent = 2 originator = 'os-xenapi-nova' version = '2.1' timeout = 10 sess = mock.Mock() mock_create_session.return_value = sess mock_version_and_brand.return_value = ('6.5', 'XenServer') mock_platform_version.return_value = (2, 1, 0) sess.xenapi.host.get_uuid.return_value = 'fake_host_uuid' sess.xenapi.session.get_this_host.return_value = 'fake_host_ref' fake_url = 'http://someserver' fake_host_name = 'someserver' xenapi_sess = session.XenAPISession(fake_url, 'username', 'password', originator=originator, concurrent=concurrent, timeout=timeout) sess.login_with_password.assert_called_with('username', 'password', version, originator) self.assertFalse(xenapi_sess.is_slave) mock_gethostbyname.assert_called_with(fake_host_name) sess.xenapi.session.get_this_host.assert_called_once_with(sess.handle) sess.xenapi.PIF.get_all_records_where.assert_not_called() self.assertEqual('fake_host_ref', xenapi_sess.host_ref) self.assertEqual('fake_host_uuid', xenapi_sess.host_uuid) self.assertEqual(fake_url, xenapi_sess.url) @mock.patch.object(session.XenAPISession, '_verify_plugin_version') @mock.patch.object(session.XenAPISession, '_get_platform_version') @mock.patch.object(session.XenAPISession, '_create_session') @mock.patch.object(session.XenAPISession, 
'_get_product_version_and_brand') @mock.patch.object(session.XenAPISession, '_create_session_and_login') @mock.patch.object(socket, 'gethostbyname') def test_session_on_slave_node_using_host_ip(self, mock_gethostbyname, mock_login, mock_version_and_brand, mock_create_session, mock_platform_version, mock_verify_plugin_version): sess = mock.Mock() fake_records = {'fake_PIF_ref': {'host': 'fake_host_ref'}} sess.xenapi.PIF.get_all_records_where.return_value = fake_records sess.xenapi.host.get_uuid.return_value = 'fake_host_uuid' side_effects = [XenAPI.Failure(['HOST_IS_SLAVE', 'fake_master_url']), sess, sess, sess] mock_login.side_effect = side_effects concurrent = 2 originator = 'os-xenapi-nova' timeout = 10 mock_version_and_brand.return_value = ('6.5', 'XenServer') mock_platform_version.return_value = (2, 1, 0) fake_url = 'http://0.0.0.0' fake_ip = '0.0.0.0' xenapi_sess = session.XenAPISession(fake_url, 'username', 'password', originator=originator, concurrent=concurrent, timeout=timeout) self.assertTrue(xenapi_sess.is_slave) mock_gethostbyname.assert_called_with(fake_ip) self.assertEqual('fake_host_ref', xenapi_sess.host_ref) self.assertEqual('fake_host_uuid', xenapi_sess.host_uuid) self.assertEqual('http://fake_master_url', xenapi_sess.master_url) self.assertEqual(fake_url, xenapi_sess.url) @mock.patch.object(session.XenAPISession, '_verify_plugin_version') @mock.patch.object(session.XenAPISession, '_get_platform_version') @mock.patch.object(session.XenAPISession, '_create_session') @mock.patch.object(session.XenAPISession, '_get_product_version_and_brand') @mock.patch.object(session.XenAPISession, '_create_session_and_login') @mock.patch.object(socket, 'gethostbyname') def test_session_on_slave_node_using_host_name(self, mock_gethostbyname, mock_login, mock_version_and_brand, mock_create_session, mock_platform_version, mock_verify_plugin_version): sess = mock.Mock() fake_records = {'fake_PIF_ref': {'host': 'fake_host_ref'}} 
sess.xenapi.PIF.get_all_records_where.return_value = fake_records sess.xenapi.host.get_uuid.return_value = 'fake_host_uuid' side_effects = [XenAPI.Failure(['HOST_IS_SLAVE', 'fake_master_url']), sess, sess, sess] mock_login.side_effect = side_effects concurrent = 2 originator = 'os-xenapi-nova' timeout = 10 mock_version_and_brand.return_value = ('6.5', 'XenServer') mock_platform_version.return_value = (2, 1, 0) fake_url = 'http://someserver' fake_host_name = 'someserver' fake_ip = '0.0.0.0' mock_gethostbyname.return_value = fake_ip xenapi_sess = session.XenAPISession(fake_url, 'username', 'password', originator=originator, concurrent=concurrent, timeout=timeout) self.assertTrue(xenapi_sess.is_slave) mock_gethostbyname.assert_called_with(fake_host_name) self.assertEqual('fake_host_ref', xenapi_sess.host_ref) self.assertEqual('fake_host_uuid', xenapi_sess.host_uuid) self.assertEqual('http://fake_master_url', xenapi_sess.master_url) self.assertEqual(fake_url, xenapi_sess.url) @mock.patch.object(session.XenAPISession, '_verify_plugin_version') @mock.patch.object(session.XenAPISession, '_get_platform_version') @mock.patch.object(session.XenAPISession, '_create_session') @mock.patch.object(session.XenAPISession, '_get_product_version_and_brand') @mock.patch.object(session.XenAPISession, '_create_session_and_login') @mock.patch.object(socket, 'gethostbyname') def test_session_on_slave_node_exc_no_host_ref(self, mock_gethostbyname, mock_login, mock_version_and_brand, mock_create_session, mock_platform_version, mock_verify_plugin_version): sess = mock.Mock() fake_records = {} sess.xenapi.PIF.get_all_records_where.return_value = fake_records sess.xenapi.host.get_uuid.return_value = 'fake_host_uuid' side_effects = [XenAPI.Failure(['HOST_IS_SLAVE', 'fake_master_url']), sess, sess, sess] mock_login.side_effect = side_effects concurrent = 2 originator = 'os-xenapi-nova' timeout = 10 mock_version_and_brand.return_value = ('6.5', 'XenServer') mock_platform_version.return_value = 
(2, 1, 0) fake_url = 'http://someserver' fake_host_name = 'someserver' fake_ip = '0.0.0.0' mock_gethostbyname.return_value = fake_ip self.assertRaises( XenAPI.Failure, session.XenAPISession, fake_url, 'username', 'password', originator=originator, concurrent=concurrent, timeout=timeout) mock_gethostbyname.assert_called_with(fake_host_name) @mock.patch.object(session.XenAPISession, '_verify_plugin_version') @mock.patch.object(session.XenAPISession, '_get_platform_version') @mock.patch.object(session.XenAPISession, '_create_session') @mock.patch.object(session.XenAPISession, '_get_product_version_and_brand') @mock.patch.object(session.XenAPISession, '_create_session_and_login') @mock.patch.object(socket, 'gethostbyname') def test_session_on_slave_node_exc_more_than_one_host_ref( self, mock_gethostbyname, mock_login, mock_version_and_brand, mock_create_session, mock_platform_version, mock_verify_plugin_version): sess = mock.Mock() fake_records = {'fake_PIF_ref_a': {'host': 'fake_host_ref_a'}, 'fake_PIF_ref_b': {'host': 'fake_host_ref_b'}} sess.xenapi.PIF.get_all_records_where.return_value = fake_records sess.xenapi.host.get_uuid.return_value = 'fake_host_uuid' side_effects = [XenAPI.Failure(['HOST_IS_SLAVE', 'fake_master_url']), sess, sess, sess] mock_login.side_effect = side_effects concurrent = 2 originator = 'os-xenapi-nova' timeout = 10 mock_version_and_brand.return_value = ('6.5', 'XenServer') mock_platform_version.return_value = (2, 1, 0) fake_url = 'http://someserver' fake_host_name = 'someserver' fake_ip = '0.0.0.0' mock_gethostbyname.return_value = fake_ip self.assertRaises( XenAPI.Failure, session.XenAPISession, fake_url, 'username', 'password', originator=originator, concurrent=concurrent, timeout=timeout) mock_gethostbyname.assert_called_with(fake_host_name) @mock.patch.object(session.XenAPISession, '_verify_plugin_version') @mock.patch.object(session.XenAPISession, '_get_platform_version') @mock.patch('eventlet.timeout.Timeout') 
@mock.patch.object(session.XenAPISession, '_create_session') @mock.patch.object(session.XenAPISession, '_get_product_version_and_brand') @mock.patch.object(socket, 'gethostbyname') @mock.patch.object(session.XenAPISession, '_get_host_ref') def test_session_login_with_timeout(self, mock_get_host_ref, mock_gethostbyname, mock_version, create_session, mock_timeout, mock_platform_version, mock_verify_plugin_version): concurrent = 2 originator = 'os-xenapi-nova' sess = mock.Mock() create_session.return_value = sess mock_version.return_value = ('version', 'brand') mock_platform_version.return_value = (2, 1, 0) session.XenAPISession('http://someserver', 'username', 'password', originator=originator, concurrent=concurrent) self.assertEqual(concurrent, sess.login_with_password.call_count) self.assertEqual(concurrent, mock_timeout.call_count) @mock.patch.object(session.XenAPISession, '_verify_plugin_version') @mock.patch.object(session.XenAPISession, 'call_plugin') @mock.patch.object(session.XenAPISession, '_get_software_version') @mock.patch.object(session.XenAPISession, '_create_session') @mock.patch.object(socket, 'gethostbyname') @mock.patch.object(session.XenAPISession, '_get_host_ref') def test_relax_xsm_sr_check_true(self, mock_get_host_ref, mock_gethostbyname, mock_create_session, mock_get_software_version, mock_call_plugin, mock_verify_plugin_version): sess = mock.Mock() mock_create_session.return_value = sess mock_get_software_version.return_value = {'product_version': '6.5.0', 'product_brand': 'XenServer', 'platform_version': '1.9.0'} # mark relax-xsm-sr-check=True in /etc/xapi.conf mock_call_plugin.return_value = "True" xenapi_sess = session.XenAPISession( 'http://someserver', 'username', 'password') self.assertTrue(xenapi_sess.is_xsm_sr_check_relaxed()) @mock.patch.object(session.XenAPISession, '_verify_plugin_version') @mock.patch.object(session.XenAPISession, 'call_plugin') @mock.patch.object(session.XenAPISession, '_get_software_version') 
@mock.patch.object(session.XenAPISession, '_create_session') @mock.patch.object(socket, 'gethostbyname') @mock.patch.object(session.XenAPISession, '_get_host_ref') def test_relax_xsm_sr_check_XS65_missing(self, mock_get_host_ref, mock_gethostbyname, mock_create_session, mock_get_software_version, mock_call_plugin, mock_verify_plugin_version): sess = mock.Mock() mock_create_session.return_value = sess mock_get_software_version.return_value = {'product_version': '6.5.0', 'product_brand': 'XenServer', 'platform_version': '1.9.0'} # mark no relax-xsm-sr-check setting in /etc/xapi.conf mock_call_plugin.return_value = "" xenapi_sess = session.XenAPISession( 'http://someserver', 'username', 'password') self.assertFalse(xenapi_sess.is_xsm_sr_check_relaxed()) @mock.patch.object(session.XenAPISession, '_verify_plugin_version') @mock.patch.object(session.XenAPISession, 'call_plugin') @mock.patch.object(session.XenAPISession, '_get_software_version') @mock.patch.object(session.XenAPISession, '_create_session') @mock.patch.object(socket, 'gethostbyname') @mock.patch.object(session.XenAPISession, '_get_host_ref') def test_relax_xsm_sr_check_XS7_missing(self, mock_get_host_ref, mock_gethostbyname, mock_create_session, mock_get_software_version, mock_call_plugin, mock_verify_plugin_version): sess = mock.Mock() mock_create_session.return_value = sess mock_get_software_version.return_value = {'product_version': '7.0.0', 'product_brand': 'XenServer', 'platform_version': '2.1.0'} # mark no relax-xsm-sr-check in /etc/xapi.conf mock_call_plugin.return_value = "" xenapi_sess = session.XenAPISession( 'http://someserver', 'username', 'password') self.assertTrue(xenapi_sess.is_xsm_sr_check_relaxed()) class ApplySessionHelpersTestCase(base.TestCase): def setUp(self): super(ApplySessionHelpersTestCase, self).setUp() self.session = mock.Mock() session.apply_session_helpers(self.session) def test_apply_session_helpers_add_VM(self): self.session.VM.get_X("ref") 
        self.session.call_xenapi.assert_called_once_with("VM.get_X", "ref")

    def test_apply_session_helpers_add_SR(self):
        self.session.SR.get_X("ref")
        self.session.call_xenapi.assert_called_once_with("SR.get_X", "ref")

    def test_apply_session_helpers_add_VDI(self):
        self.session.VDI.get_X("ref")
        self.session.call_xenapi.assert_called_once_with("VDI.get_X", "ref")

    def test_apply_session_helpers_add_VIF(self):
        self.session.VIF.get_X("ref")
        self.session.call_xenapi.assert_called_once_with("VIF.get_X", "ref")

    def test_apply_session_helpers_add_VBD(self):
        self.session.VBD.get_X("ref")
        self.session.call_xenapi.assert_called_once_with("VBD.get_X", "ref")

    def test_apply_session_helpers_add_PBD(self):
        self.session.PBD.get_X("ref")
        self.session.call_xenapi.assert_called_once_with("PBD.get_X", "ref")

    def test_apply_session_helpers_add_PIF(self):
        self.session.PIF.get_X("ref")
        self.session.call_xenapi.assert_called_once_with("PIF.get_X", "ref")

    def test_apply_session_helpers_add_VLAN(self):
        self.session.VLAN.get_X("ref")
        self.session.call_xenapi.assert_called_once_with("VLAN.get_X", "ref")

    def test_apply_session_helpers_add_host(self):
        self.session.host.get_X("ref")
        self.session.call_xenapi.assert_called_once_with("host.get_X", "ref")

    def test_apply_session_helpers_add_network(self):
        self.session.network.get_X("ref")
        self.session.call_xenapi.assert_called_once_with("network.get_X",
                                                         "ref")


class CallPluginTestCase(base.TestCase):
    def _get_fake_xapisession(self):
        class FakeXapiSession(session.XenAPISession):
            def __init__(self, **kwargs):
                "Skip the superclass's dirty init"
                self.XenAPI = mock.MagicMock()

        return FakeXapiSession()

    def setUp(self):
        super(CallPluginTestCase, self).setUp()
        self.session = self._get_fake_xapisession()

    def test_serialized_with_retry_socket_error_conn_reset(self):
        exc = socket.error()
        exc.errno = errno.ECONNRESET
        plugin = 'glance'
        fn = 'download_vhd'
        num_retries = 1
        callback = None
        retry_cb = mock.Mock()
        with mock.patch.object(self.session, 'call_plugin_serialized',
                               spec=True) as call_plugin_serialized:
            call_plugin_serialized.side_effect = exc
            self.assertRaises(
                exception.PluginRetriesExceeded,
                self.session.call_plugin_serialized_with_retry, plugin, fn,
                num_retries, callback, retry_cb)
        call_plugin_serialized.assert_called_with(plugin, fn)
        self.assertEqual(2, call_plugin_serialized.call_count)
        self.assertEqual(2, retry_cb.call_count)

    def test_serialized_with_retry_socket_error_reraised(self):
        exc = socket.error()
        exc.errno = errno.ECONNREFUSED
        plugin = 'glance'
        fn = 'download_vhd'
        num_retries = 1
        callback = None
        retry_cb = mock.Mock()
        with mock.patch.object(
                self.session, 'call_plugin_serialized',
                spec=True) as call_plugin_serialized:
            call_plugin_serialized.side_effect = exc
            self.assertRaises(
                socket.error, self.session.call_plugin_serialized_with_retry,
                plugin, fn, num_retries, callback, retry_cb)
        call_plugin_serialized.assert_called_once_with(plugin, fn)
        self.assertEqual(0, retry_cb.call_count)

    def test_serialized_with_retry_socket_reset_reraised(self):
        exc = socket.error()
        exc.errno = errno.ECONNRESET
        plugin = 'glance'
        fn = 'download_vhd'
        num_retries = 1
        callback = None
        retry_cb = mock.Mock()
        with mock.patch.object(self.session, 'call_plugin_serialized',
                               spec=True) as call_plugin_serialized:
            call_plugin_serialized.side_effect = exc
            self.assertRaises(
                exception.PluginRetriesExceeded,
                self.session.call_plugin_serialized_with_retry, plugin, fn,
                num_retries, callback, retry_cb)
        call_plugin_serialized.assert_called_with(plugin, fn)
        self.assertEqual(2, call_plugin_serialized.call_count)


class XenAPISessionTestCase(base.TestCase):
    def _get_mock_xapisession(self, software_version):
        class MockXapiSession(session.XenAPISession):
            def __init__(_ignore):
                pass

            def _get_software_version(_ignore):
                return software_version

        return MockXapiSession()

    @mock.patch.object(XenAPI, 'xapi_local')
    def test_local_session(self, mock_xapi_local):
        session = self._get_mock_xapisession({})
        session.is_local_connection = True
        mock_xapi_local.return_value = "local_connection"
        self.assertEqual("local_connection",
                         session._create_session("unix://local"))

    @mock.patch.object(XenAPI, 'Session')
    def test_remote_session(self, mock_session):
        session = self._get_mock_xapisession({})
        session.is_local_connection = False
        mock_session.return_value = "remote_connection"
        self.assertEqual("remote_connection", session._create_session("url"))

    def test_get_product_version_product_brand_does_not_fail(self):
        session = self._get_mock_xapisession(
            {'build_number': '0',
             'date': '2012-08-03',
             'hostname': 'komainu',
             'linux': '3.2.0-27-generic',
             'network_backend': 'bridge',
             'platform_name': 'XCP_Kronos',
             'platform_version': '1.6.0',
             'xapi': '1.3',
             'xen': '4.1.2',
             'xencenter_max': '1.10',
             'xencenter_min': '1.10'})

        self.assertEqual(
            ((1, 6, 0), None),
            session._get_product_version_and_brand()
        )

    def test_get_product_version_product_brand_xs_6(self):
        session = self._get_mock_xapisession(
            {'product_brand': 'XenServer',
             'product_version': '6.0.50',
             'platform_version': '0.0.1'})

        self.assertEqual(
            ((6, 0, 50), 'XenServer'),
            session._get_product_version_and_brand()
        )

    def test_verify_plugin_version_same(self):
        session = self._get_mock_xapisession({})
        session.PLUGIN_REQUIRED_VERSION = '2.4'
        with mock.patch.object(session, 'call_plugin_serialized',
                               spec=True) as call_plugin_serialized:
            call_plugin_serialized.return_value = "2.4"
            session._verify_plugin_version()

    def test_verify_plugin_version_compatible(self):
        session = self._get_mock_xapisession({})
        session.PLUGIN_REQUIRED_VERSION = '2.4'
        with mock.patch.object(session, 'call_plugin_serialized',
                               spec=True) as call_plugin_serialized:
            call_plugin_serialized.return_value = "2.5"
            session._verify_plugin_version()

    def test_verify_plugin_version_bad_maj(self):
        session = self._get_mock_xapisession({})
        session.PLUGIN_REQUIRED_VERSION = '2.4'
        with mock.patch.object(session, 'call_plugin_serialized',
                               spec=True) as call_plugin_serialized:
            call_plugin_serialized.return_value = "3.0"
            self.assertRaises(XenAPI.Failure, session._verify_plugin_version)

    def test_verify_plugin_version_bad_min(self):
        session = self._get_mock_xapisession({})
        session.PLUGIN_REQUIRED_VERSION = '2.4'
        with mock.patch.object(session, 'call_plugin_serialized',
                               spec=True) as call_plugin_serialized:
            call_plugin_serialized.return_value = "2.3"
            self.assertRaises(XenAPI.Failure, session._verify_plugin_version)

    def test_verify_current_version_matches(self):
        session = self._get_mock_xapisession({})

        # Import the plugin to extract its version
        path = os.path.dirname(__file__)
        rel_path_elem = "../../dom0/etc/xapi.d/plugins/dom0_plugin_version.py"
        for elem in rel_path_elem.split('/'):
            path = os.path.join(path, elem)
        path = os.path.realpath(path)

        plugin_version = None
        with open(path) as plugin_file:
            for line in plugin_file:
                if "PLUGIN_VERSION = " in line:
                    plugin_version = line.strip()[17:].strip('"')

        self.assertEqual(session.PLUGIN_REQUIRED_VERSION,
                         plugin_version)


# ==== os_xenapi/tests/client/test_objects.py ====
# -*- coding: utf-8 -*-

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
test_os_xenapi
----------------------------------

Tests for `os_xenapi objects` module.
""" import mock from os_xenapi.client import objects from os_xenapi.tests import base class XenAPISessionObjectTestCase(base.TestCase): def setUp(self): super(XenAPISessionObjectTestCase, self).setUp() self.session = mock.Mock() self.obj = objects.XenAPISessionObject(self.session, "FAKE") def test_call_method_via_attr(self): self.session.call_xenapi.return_value = "asdf" result = self.obj.get_X("ref") self.assertEqual(result, "asdf") self.session.call_xenapi.assert_called_once_with("FAKE.get_X", "ref") class ObjectsTestCase(base.TestCase): def setUp(self): super(ObjectsTestCase, self).setUp() self.session = mock.Mock() def test_VM(self): vm = objects.VM(self.session) vm.get_X("ref") self.session.call_xenapi.assert_called_once_with("VM.get_X", "ref") def test_SR(self): sr = objects.SR(self.session) sr.get_X("ref") self.session.call_xenapi.assert_called_once_with("SR.get_X", "ref") def test_VDI(self): vdi = objects.VDI(self.session) vdi.get_X("ref") self.session.call_xenapi.assert_called_once_with("VDI.get_X", "ref") def test_VIF(self): vdi = objects.VIF(self.session) vdi.get_X("ref") self.session.call_xenapi.assert_called_once_with("VIF.get_X", "ref") def test_VBD(self): vbd = objects.VBD(self.session) vbd.get_X("ref") self.session.call_xenapi.assert_called_once_with("VBD.get_X", "ref") def test_PBD(self): pbd = objects.PBD(self.session) pbd.get_X("ref") self.session.call_xenapi.assert_called_once_with("PBD.get_X", "ref") def test_PIF(self): pif = objects.PIF(self.session) pif.get_X("ref") self.session.call_xenapi.assert_called_once_with("PIF.get_X", "ref") def test_VLAN(self): vlan = objects.VLAN(self.session) vlan.get_X("ref") self.session.call_xenapi.assert_called_once_with("VLAN.get_X", "ref") def test_host(self): host = objects.Host(self.session) host.get_X("ref") self.session.call_xenapi.assert_called_once_with("host.get_X", "ref") def test_network(self): network = objects.Network(self.session) network.get_X("ref") 
        self.session.call_xenapi.assert_called_once_with("network.get_X",
                                                         "ref")

    def test_pool(self):
        pool = objects.Pool(self.session)
        pool.get_X("ref")
        self.session.call_xenapi.assert_called_once_with("pool.get_X", "ref")


class VBDTestCase(base.TestCase):
    def setUp(self):
        super(VBDTestCase, self).setUp()
        self.session = mock.Mock()
        self.session.VBD = objects.VBD(self.session)
        self.utils = mock.Mock()

    def test_plug(self):
        self.session.VBD.plug("vbd_ref", "vm_ref")
        self.session.call_xenapi.assert_called_once_with("VBD.plug", "vbd_ref")

    def test_unplug(self):
        self.session.VBD.unplug("vbd_ref", "vm_ref")
        self.session.call_xenapi.assert_called_once_with("VBD.unplug",
                                                         "vbd_ref")

    @mock.patch.object(objects, 'synchronized')
    def test_vbd_plug_check_synchronized(self, mock_synchronized):
        self.session.VBD.plug("vbd_ref", "vm_ref")
        mock_synchronized.assert_called_once_with("vbd-vm_ref")

    @mock.patch.object(objects, 'synchronized')
    def test_vbd_unplug_check_synchronized(self, mock_synchronized):
        self.session.VBD.unplug("vbd_ref", "vm_ref")
        mock_synchronized.assert_called_once_with("vbd-vm_ref")


# ==== os_xenapi/tests/client/test_host_glance.py ====
# Copyright 2017 Citrix Systems
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock

from os_xenapi.client import exception
from os_xenapi.client import host_glance
from os_xenapi.client import XenAPI
from os_xenapi.tests import base


class HostGlanceTestCase(base.TestCase):
    def test_upload_vhd(self):
        session = mock.Mock()
        num_retries = 'fake_num_retries'
        callback = 'fake_callback'
        retry_cb = 'fake_retry_cb'
        image_id = 'fake_image_id'
        sr_path = 'fake_sr_path'
        extra_headers = 'fake_extra_headers'
        vdi_uuids = 'fake_vdi_uuids'
        properties = {}
        args = {'image_id': image_id, 'sr_path': sr_path,
                'extra_headers': extra_headers, 'vdi_uuids': vdi_uuids,
                'properties': properties}

        host_glance.upload_vhd(session, num_retries, callback, retry_cb,
                               image_id, sr_path, extra_headers, vdi_uuids,
                               properties)

        session.call_plugin_serialized_with_retry.assert_called_with(
            'glance.py', 'upload_vhd2', num_retries, callback, retry_cb,
            **args
        )

    def test_upload_vhd_xenapi_failure_image_not_found(self):
        session = mock.Mock()
        num_retries = 'fake_num_retries'
        callback = 'fake_callback'
        retry_cb = 'fake_retry_cb'
        image_id = 'fake_image_id'
        sr_path = 'fake_sr_path'
        extra_headers = 'fake_extra_headers'
        vdi_uuids = 'fake_vdi_uuids'
        properties = {}
        args = {'image_id': image_id, 'sr_path': sr_path,
                'extra_headers': extra_headers, 'vdi_uuids': vdi_uuids,
                'properties': properties}
        session.call_plugin_serialized_with_retry.side_effect = XenAPI.Failure(
            ('XENAPI_PLUGIN_FAILURE', 'upload_vhd2', 'PluginError',
             'ImageNotFound')
        )

        self.assertRaises(exception.PluginImageNotFound,
                          host_glance.upload_vhd, session, num_retries,
                          callback, retry_cb, image_id, sr_path,
                          extra_headers, vdi_uuids, properties)
        session.call_plugin_serialized_with_retry.assert_called_with(
            'glance.py', 'upload_vhd2', num_retries, callback, retry_cb,
            **args
        )

    def test_upload_vhd_xenapi_failure_reraise(self):
        session = mock.Mock()
        num_retries = 'fake_num_retries'
        callback = 'fake_callback'
        retry_cb = 'fake_retry_cb'
        image_id = 'fake_image_id'
        sr_path = 'fake_sr_path'
        extra_headers = 'fake_extra_headers'
        vdi_uuids = 'fake_vdi_uuids'
        properties = {}
        args = {'image_id': image_id, 'sr_path': sr_path,
                'extra_headers': extra_headers, 'vdi_uuids': vdi_uuids,
                'properties': properties}
        session.call_plugin_serialized_with_retry.side_effect = XenAPI.Failure(
            ('untouch')
        )

        self.assertRaises(XenAPI.Failure, host_glance.upload_vhd, session,
                          num_retries, callback, retry_cb, image_id, sr_path,
                          extra_headers, vdi_uuids, properties)
        session.call_plugin_serialized_with_retry.assert_called_with(
            'glance.py', 'upload_vhd2', num_retries, callback, retry_cb,
            **args
        )


# ==== os_xenapi/tests/client/image/__init__.py (empty) ====

# ==== os_xenapi/tests/client/image/test_vhd_utils.py ====
# Copyright 2017 Citrix Systems
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""This file defines the tests used to cover unit tests for VHD utils.

To keep the tests close to the real VHD file parser, it is strongly
suggested to use data from a real VHD file for the fake bytes fed to the
unit tests.

Initially the fake data for the tests is from the VHD file exported from
the VM which booted from the default devstack image:
cirros-0.3.5-x86_64-disk.
""" import mock import struct from os_xenapi.client import exception as xenapi_except from os_xenapi.client.image import vhd_utils from os_xenapi.tests import base class VhdUtilsTestCase(base.TestCase): def test_VHDFooter(self): ONE_GB = 1 * 1024 * 1024 * 1024 TYPE_DYNAMIC = 3 footer_data = b'\x00' * 48 + struct.pack('!Q', ONE_GB) + \ b'\x00' * 4 + \ b'\x00\x00\x00\x03' vhd_footer = vhd_utils.VHDFooter(footer_data) self.assertEqual(vhd_footer.raw_data, footer_data) self.assertEqual(vhd_footer.current_size, ONE_GB) self.assertEqual(vhd_footer.disk_type, TYPE_DYNAMIC) def test_VHDDynDiskHdr(self): BAT_OFFSET = 2048 MAX_BAT_ENTRIES = 512 SIZE_OF_DATA_BLOCK = 2 * 1024 * 1024 # Construct the DDH(Dynamical Disk Header) fields. DDH_BAT_OFFSET = struct.pack('!Q', BAT_OFFSET) DDH_MAX_BAT_ENTRIES = struct.pack('!I', MAX_BAT_ENTRIES) DDH_BLOCK_SIZE = struct.pack('!I', SIZE_OF_DATA_BLOCK) ddh_data = b'\x00' * 16 + DDH_BAT_OFFSET + \ b'\x00' * 4 + DDH_MAX_BAT_ENTRIES + \ DDH_BLOCK_SIZE vhd_dynDiskHdr = vhd_utils.VHDDynDiskHdr(ddh_data) self.assertEqual(vhd_dynDiskHdr.raw_data, ddh_data) self.assertEqual(vhd_dynDiskHdr.bat_offset, BAT_OFFSET) self.assertEqual(vhd_dynDiskHdr.bat_max_entries, MAX_BAT_ENTRIES) self.assertEqual(vhd_dynDiskHdr.block_size, SIZE_OF_DATA_BLOCK) def test_VHDBlockAllocTable(self): MAX_BAT_ENTRIES = 512 # Construct BAT(Block Allocation Table) # The non 0xffffffff means a valid BAT entry. Let's give some holes. # At here the DATA_BAT contains 14 valid entries in the first 16 # 4-bytes units; there are 2 holes - 0xffffffff which should be # ignored. 
        DATA_BAT = b'\x00\x00\x00\x08\x00\x00\x50\x0d\xff\xff\xff\xff' + \
                   b'\x00\x00\x10\x09\x00\x00\x20\x0a\x00\x00\x30\x0b' + \
                   b'\x00\x00\x40\x0c\xff\xff\xff\xff\x00\x00\x60\x0e' + \
                   b'\x00\x00\x70\x0f\x00\x00\x80\x10\x00\x00\x90\x11' + \
                   b'\x00\x00\xa0\x12\x00\x00\xb0\x13\x00\x00\xc0\x14' + \
                   b'\x00\x00\xd0\x15' + \
                   b'\xff\xff\xff\xff' * (MAX_BAT_ENTRIES - 16)

        vhd_blockAllocTable = vhd_utils.VHDBlockAllocTable(DATA_BAT)

        self.assertEqual(vhd_blockAllocTable.raw_data, DATA_BAT)
        self.assertEqual(vhd_blockAllocTable.num_valid_bat_entries, 14)


class VhdFileParserTestCase(base.TestCase):

    def test_get_disk_type_name(self):
        disk_type_val = 3
        expect_disk_type_name = 'Dynamic hard disk'
        fake_file = 'fake_file'
        vhdParser = vhd_utils.VHDFileParser(fake_file)

        disk_type_name = vhdParser.get_disk_type_name(disk_type_val)

        self.assertEqual(disk_type_name, expect_disk_type_name)

    def test_get_vhd_file_size(self):
        vhd_file = mock.Mock()
        SIZE_OF_FOOTER = 512
        SIZE_OF_DDH = 1024
        SIZE_PADDING = 512
        MAX_BAT_ENTRIES = 512
        SIZE_OF_BAT_ENTRY = 4
        SIZE_OF_BITMAP = 512
        SIZE_OF_DATA_BLOCK = 2 * 1024 * 1024
        VIRTUAL_SIZE = 40 * 1024 * 1024 * 1024
        # Make fake data for the VHD footer.
        DATA_FOOTER = b'\x00' * 48 + struct.pack('!Q', VIRTUAL_SIZE)
        # disk type is 3: dynamic disk.
        DATA_FOOTER += b'\x00' * 4 + b'\x00\x00\x00\x03'
        # padding bytes
        padding_len = SIZE_OF_FOOTER - len(DATA_FOOTER)
        DATA_FOOTER += b'\x00' * padding_len

        # Construct the DDH (Dynamic Disk Header) fields.
        DDH_BAT_OFFSET = struct.pack('!Q', 2048)
        DDH_MAX_BAT_ENTRIES = struct.pack('!I', MAX_BAT_ENTRIES)
        DDH_BLOCK_SIZE = struct.pack('!I', SIZE_OF_DATA_BLOCK)
        DATA_DDH = b'\x00' * 16 + DDH_BAT_OFFSET
        DATA_DDH += b'\x00' * 4 + DDH_MAX_BAT_ENTRIES
        DATA_DDH += DDH_BLOCK_SIZE
        # padding bytes for DDH
        padding_len = SIZE_OF_DDH - len(DATA_DDH)
        DATA_DDH += b'\x00' * padding_len

        # Construct the padding bytes before the Block Allocation Table.
        DATA_PADDING = b'\x00' * SIZE_PADDING
        # Construct the BAT (Block Allocation Table).
        # Any value other than 0xffffffff marks a valid BAT entry, so leave
        # some holes. Here DATA_BAT contains 14 valid entries in the first 16
        # 4-byte units; the 2 holes - 0xffffffff - should be ignored.
        DATA_BAT = b'\x00\x00\x00\x08\x00\x00\x50\x0d\xff\xff\xff\xff' + \
                   b'\x00\x00\x10\x09\x00\x00\x20\x0a\x00\x00\x30\x0b' + \
                   b'\x00\x00\x40\x0c\xff\xff\xff\xff\x00\x00\x60\x0e' + \
                   b'\x00\x00\x70\x0f\x00\x00\x80\x10\x00\x00\x90\x11' + \
                   b'\x00\x00\xa0\x12\x00\x00\xb0\x13\x00\x00\xc0\x14' + \
                   b'\x00\x00\xd0\x15' + \
                   b'\xff\xff\xff\xff' * (MAX_BAT_ENTRIES - 16)

        expected_size = SIZE_OF_FOOTER * 2 + SIZE_OF_DDH
        expected_size += SIZE_PADDING + SIZE_OF_BAT_ENTRY * MAX_BAT_ENTRIES
        expected_size += (SIZE_OF_BITMAP + SIZE_OF_DATA_BLOCK) * 14
        vhd_file.read.side_effect = [DATA_FOOTER, DATA_DDH, DATA_PADDING,
                                     DATA_BAT]

        vhd_parser = vhd_utils.VHDDynDiskParser(vhd_file)
        vhd_size = vhd_parser.get_vhd_file_size()

        read_call_list = vhd_file.read.call_args_list
        expected = [mock.call(SIZE_OF_FOOTER),
                    mock.call(SIZE_OF_DDH),
                    mock.call(SIZE_PADDING),
                    mock.call(SIZE_OF_BAT_ENTRY * MAX_BAT_ENTRIES),
                    ]
        self.assertEqual(expected, read_call_list)
        self.assertEqual(expected_size, vhd_size)

    def test_not_dyn_disk_exception(self):
        # If the VHD's disk type is not a dynamic disk, it should raise an
        # exception.
        SIZE_OF_FOOTER = 512
        vhd_file = mock.Mock()
        # disk type is 2: fixed disk.
        DATA_FOOTER = b'\x00' * 60 + b'\x00\x00\x00\x02'
        # padding bytes
        padding_len = SIZE_OF_FOOTER - len(DATA_FOOTER)
        DATA_FOOTER += b'\x00' * padding_len
        vhd_file.read.return_value = DATA_FOOTER

        self.assertRaises(xenapi_except.VhdDiskTypeNotSupported,
                          vhd_utils.VHDDynDiskParser, vhd_file)


# ==== os_xenapi/tests/client/image/test_init.py ====
# Copyright 2017 Citrix Systems
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock

from os_xenapi.client import image
from os_xenapi.client.image import vdi_handler
from os_xenapi.tests import base


class ImageTestCase(base.TestCase):
    def setUp(self):
        super(ImageTestCase, self).setUp()
        self.context = mock.Mock()
        self.session = mock.Mock()
        self.instance = {'name': 'instance-001'}
        self.host_url = "http://fake-host.com"
        self.stream = mock.Mock()

    @mock.patch.object(vdi_handler.ImageStreamToVDIs, 'start')
    def test_stream_to_vdis(self, mock_start):
        image.stream_to_vdis(self.context, self.session, self.instance,
                             self.host_url, self.stream)

        mock_start.assert_called_once_with()

    @mock.patch.object(vdi_handler.GenerateImageStream, 'get_image_data')
    def test_vdis_to_stream(self, mock_get):
        image.stream_from_vdis(self.context, self.session, self.instance,
                               self.host_url, ['fake-uuid'])

        mock_get.assert_called_once_with()


# ==== os_xenapi/tests/client/image/test_vdi_handler.py ====
# Copyright 2017 Citrix Systems
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock

import eventlet
from six.moves import http_client as httplib
import tarfile

from os_xenapi.client import exception
from os_xenapi.client.image import vdi_handler
from os_xenapi.client.image import vhd_utils
from os_xenapi.client import utils
from os_xenapi.tests import base


class ImageStreamToVDIsTestCase(base.TestCase):
    def setUp(self):
        super(ImageStreamToVDIsTestCase, self).setUp()
        self.context = mock.Mock()
        self.session = mock.Mock()
        self.instance = {'name': 'instance-001'}
        self.host_url = "http://fake-host.com"
        self.stream = mock.Mock()

    @mock.patch.object(tarfile, 'open')
    @mock.patch.object(vhd_utils, 'VHDFileParser')
    @mock.patch.object(vdi_handler.ImageStreamToVDIs, '_createVDI',
                       return_value=('fake_sr_ref', 'fake_vdi_ref'))
    @mock.patch.object(vdi_handler.ImageStreamToVDIs, '_vhd_stream_to_vdi')
    def test_start(self, mock_to_vdi, mock_createVDI, mock_get_parser,
                   mock_open):
        self.session.task.create.return_value = 'fake-task-ref'
        mock_footer = mock.Mock(current_size=1073741824)
        mock_parser = mock.Mock()
        mock_get_parser.return_value = mock_parser
        mock_parser.parse_vhd_footer.return_value = mock_footer
        fake_vhd_info = mock.Mock()
        fake_vhd_info.size = 29371904
        fake_vhd_info.name = '0.vhd'
        mock_tarfile = mock.MagicMock()
        mock_tarfile.__enter__.return_value = mock_tarfile
        mock_tarfile.__iter__.return_value = [fake_vhd_info]
        mock_open.return_value = mock_tarfile
        mock_tarfile.extractfile.return_value = 'fake-file-obj'

        image_cmd = vdi_handler.ImageStreamToVDIs(self.context, self.session,
                                                  self.instance,
                                                  self.host_url, self.stream)
        image_cmd.start()

        self.session.task.create.assert_called_once_with(
            'VDI_IMPORT_for_instance-001',
            'Importing VDI for instance: instance-001')
        mock_open.assert_called_once_with(mode="r|gz", fileobj=self.stream)
        mock_tarfile.extractfile.assert_called_once_with(fake_vhd_info)
        mock_createVDI.assert_called_once_with(self.session, self.instance,
                                               1073741824)
        mock_to_vdi.assert_called_once_with(mock_parser, 'fake_vdi_ref',
                                            29371904)
        self.session.VDI.get_uuid.assert_called_once_with('fake_vdi_ref')

    @mock.patch.object(utils, 'get_default_sr', return_value='fake-sr-ref')
    @mock.patch.object(utils, 'create_vdi', return_value='fake-vdi-ref')
    def test_createVDI(self, mock_create_vdi, mock_get_sr):
        virtual_size = 1073741824
        image_cmd = vdi_handler.ImageStreamToVDIs(self.context, self.session,
                                                  self.instance,
                                                  self.host_url, self.stream)
        expect_result = ('fake-sr-ref', 'fake-vdi-ref')

        result = image_cmd._createVDI(self.session, self.instance,
                                      virtual_size)

        mock_get_sr.assert_called_once_with(self.session)
        mock_create_vdi.assert_called_once_with(self.session, 'fake-sr-ref',
                                                self.instance, 'instance-001',
                                                'root', virtual_size)
        self.session.VDI.get_uuid.assert_called_once_with('fake-vdi-ref')
        self.assertEqual(expect_result, result)

    @mock.patch.object(utils, 'get_vdi_import_path',
                       return_value='fake-path')
    @mock.patch.object(httplib.HTTPConnection, 'connect')
    @mock.patch.object(httplib.HTTPConnection, 'request')
    @mock.patch.object(httplib.HTTPConnection, 'send')
    @mock.patch.object(httplib.HTTPConnection, 'getresponse')
    @mock.patch.object(httplib.HTTPConnection, 'close')
    def test_vhd_stream_to_vdi(self, conn_close, conn_getRes, conn_send,
                               conn_req, conn_connect, get_path):
        vdh_stream = mock.Mock()
        cache_size = 4 * 1024
        remain_size = vdi_handler.CHUNK_SIZE / 2
        file_size = cache_size + vdi_handler.CHUNK_SIZE * 2 + remain_size
        headers = {'Content-Type': 'application/octet-stream',
                   'Content-Length': '%s' % file_size}
        image_cmd = vdi_handler.ImageStreamToVDIs(self.context, self.session,
                                                  self.instance,
                                                  self.host_url, self.stream)
        mock_parser = mock.Mock()
        mock_parser.cached_buff = b'\x00' * cache_size
        mock_parser.src_file = vdh_stream
        image_cmd.task_ref = 'fake-task-ref'
        vdh_stream.read.side_effect = ['chunk1', 'chunk2', 'chunk3']

        image_cmd._vhd_stream_to_vdi(mock_parser, 'fake_vdi_ref', file_size)

        conn_connect.assert_called_once_with()
        get_path.assert_called_once_with(self.session, 'fake-task-ref',
                                         'fake_vdi_ref')
        conn_req.assert_called_once_with('PUT', 'fake-path', headers=headers)
        expect_send_calls = [mock.call(mock_parser.cached_buff),
                             mock.call('chunk1'),
                             mock.call('chunk2'),
                             mock.call('chunk3'),
                             ]
        conn_send.assert_has_calls(expect_send_calls)
        conn_getRes.assert_called_once_with()
        conn_close.assert_called_once_with()

    @mock.patch.object(utils, 'get_vdi_import_path',
                       return_value='fake-path')
    @mock.patch.object(httplib.HTTPConnection, 'connect')
    @mock.patch.object(httplib.HTTPConnection, 'request',
                       side_effect=Exception)
    @mock.patch.object(httplib.HTTPConnection, 'send')
    @mock.patch.object(httplib.HTTPConnection, 'getresponse')
    @mock.patch.object(httplib.HTTPConnection, 'close')
    def test_vhd_stream_to_vdi_put_except(self, conn_close, conn_getRes,
                                          conn_send, conn_req, conn_connect,
                                          get_path):
        vdh_stream = mock.Mock()
        cache_size = 4 * 1024
        remain_size = vdi_handler.CHUNK_SIZE / 2
        file_size = cache_size + vdi_handler.CHUNK_SIZE * 2 + remain_size
        image_cmd = vdi_handler.ImageStreamToVDIs(self.context, self.session,
                                                  self.instance,
                                                  self.host_url, self.stream)
        mock_parser = mock.Mock()
        mock_parser.cached_buff = b'\x00' * cache_size
        mock_parser.src_file = vdh_stream
        image_cmd.task_ref = 'fake-task-ref'
        vdh_stream.read.return_value = ['chunk1', 'chunk2', 'chunk3']

        self.assertRaises(exception.VdiImportFailure,
                          image_cmd._vhd_stream_to_vdi, mock_parser,
                          'fake_vdi_ref', file_size)

    @mock.patch.object(utils, 'get_vdi_import_path',
                       return_value='fake-path')
    @mock.patch.object(httplib.HTTPConnection, 'connect',
                       side_effect=Exception)
    @mock.patch.object(httplib.HTTPConnection, 'request')
    @mock.patch.object(httplib.HTTPConnection, 'send')
    @mock.patch.object(httplib.HTTPConnection, 'getresponse')
    @mock.patch.object(httplib.HTTPConnection, 'close')
    def test_vhd_stream_to_vdi_conn_except(self, conn_close, conn_getRes,
                                           conn_send, conn_req, conn_connect,
                                           get_path):
        vdh_stream = mock.Mock()
        cache_size = 4 * 1024
        remain_size = vdi_handler.CHUNK_SIZE / 2
        file_size = cache_size + vdi_handler.CHUNK_SIZE * 2 + remain_size
        image_cmd = vdi_handler.ImageStreamToVDIs(self.context, self.session,
                                                  self.instance,
                                                  self.host_url, self.stream)
        mock_parser = mock.Mock()
        mock_parser.cached_buff = b'\x00' * cache_size
        mock_parser.src_file = vdh_stream
        image_cmd.task_ref = 'fake-task-ref'
        vdh_stream.read.return_value = ['chunk1', 'chunk2', 'chunk3']

        self.assertRaises(exception.HostConnectionFailure,
                          image_cmd._vhd_stream_to_vdi, mock_parser,
                          'fake_vdi_ref', file_size)


class GenerateImageStreamTestCase(base.TestCase):
    def setUp(self):
        super(GenerateImageStreamTestCase, self).setUp()
        self.context = mock.Mock()
        self.session = mock.Mock()
        self.instance = {'name': 'instance-001'}
        self.host_url = "http://fake-host.com"
        self.stream = mock.Mock()

    @mock.patch.object(utils, 'create_pipe')
    @mock.patch.object(eventlet.GreenPool, 'spawn')
    @mock.patch.object(vdi_handler.GenerateImageStream,
                       'start_image_stream_generator')
    @mock.patch.object(eventlet.GreenPool, 'waitall')
    def test_get_image_data(self, mock_waitall, mock_start, mock_spawn,
                            create_pipe):
        mock_tarpipe_out = mock.Mock()
        mock_tarpipe_in = mock.Mock()
        create_pipe.return_value = (mock_tarpipe_out, mock_tarpipe_in)
        image_cmd = vdi_handler.GenerateImageStream(
            self.context, self.session, self.instance, self.host_url,
            ['vdi_uuid'])
        mock_tarpipe_out.read.side_effect = ['chunk1', 'chunk2', '']

        image_chunks = []
        for chunk in image_cmd.get_image_data():
            image_chunks.append(chunk)

        create_pipe.assert_called_once_with()
        mock_spawn.assert_called_once_with(mock_start, mock_tarpipe_in)
        self.assertEqual(image_chunks, ['chunk1', 'chunk2'])


class VdisToTarStreamTestCase(base.TestCase):
    def setUp(self):
        super(VdisToTarStreamTestCase, self).setUp()
        self.context = mock.Mock()
        self.session = mock.Mock()
        self.instance = {'name': 'instance-001'}
        self.host_url = "http://fake-host.com"
        self.stream = mock.Mock()

    @mock.patch.object(tarfile, 'open')
    @mock.patch.object(tarfile, 'TarInfo')
    @mock.patch.object(vdi_handler.VdisToTarStream, '_connect_request',
                       return_value='fake-conn-resp')
    @mock.patch.object(vhd_utils, 'VHDDynDiskParser')
    @mock.patch.object(utils, 'create_pipe')
    @mock.patch.object(vdi_handler.VdisToTarStream, 'convert_vhd_to_tar')
    @mock.patch.object(eventlet.GreenPool, 'spawn')
    @mock.patch.object(vdi_handler.VdisToTarStream, '_vhd_to_pipe')
    @mock.patch.object(eventlet.GreenPool, 'waitall')
    def test_start(self, mock_waitall, mock_to_pipe, mock_spawn,
                   mock_convert, mock_pipe, mock_parser, mock_conn_req,
                   mock_tarinfo, mock_open):
        mock_tarfile = mock.MagicMock()
        mock_tarfile.__enter__.return_value = mock_tarfile
        mock_open.return_value = mock_tarfile
        mock_tarinfo.return_value = mock.sentinel.tar_info
        self.session.VDI.get_by_uuid.return_value = 'fake-vdi-ref'
        mock_dynDisk = mock.Mock()
        mock_parser.return_value = mock_dynDisk
        mock_dynDisk.get_vhd_file_size.return_value = 29371904
        vdi_uuids = ['vdi-uuid']
        vhdpipe_in = mock.Mock()
        mock_pipe.return_value = ('vhdpipe_out', vhdpipe_in)
        image_cmd = vdi_handler.VdisToTarStream(
            self.context, self.session, self.instance, self.host_url,
            vdi_uuids, self.stream)

        image_cmd.start()

        mock_open.assert_called_once_with(fileobj=self.stream, mode='w|gz')
        self.session.VDI.get_by_uuid.assert_called_once_with('vdi-uuid')
        mock_conn_req.assert_called_once_with('fake-vdi-ref')
        mock_dynDisk.get_vhd_file_size.assert_called_once_with()
        mock_pipe.assert_called_once_with()
        mock_spawn.assert_called_once_with(mock_convert, 'vhdpipe_out',
                                           mock_tarfile,
                                           mock.sentinel.tar_info)
        mock_to_pipe.assert_called_once_with(mock_dynDisk, vhdpipe_in)
        # Note: the original read "asset_called_once_with", a misspelling
        # which mock silently accepts as a no-op attribute access.
        vhdpipe_in.close.assert_called_once_with()
        mock_waitall.assert_called_once_with()


class AddVhdToTarTestCase(base.TestCase):
    def setUp(self):
        super(AddVhdToTarTestCase, self).setUp()
        self.context = mock.Mock()
        self.session = mock.Mock()
        self.instance = {'name': 'instance-001'}
        self.host_url = "http://fake-host.com"
        self.stream = mock.Mock()

    def test_add_stream_to_tar(self):
        mock_tar_file = mock.Mock()
        mock_tar_info = mock.Mock()
        mock_tar_info.size = 8196
        mock_tar_info.name = '0.vhd'
        image_cmd = vdi_handler.AddVhdToTar(mock_tar_file, mock_tar_info,
                                            'fake-vhdpipe-out')

        image_cmd.start()

        mock_tar_file.addfile.assert_called_once_with(
            mock_tar_info, fileobj='fake-vhdpipe-out')

    def test_add_stream_to_tar_IOError(self):
        mock_tar_file = mock.Mock()
        mock_tar_info = mock.Mock()
        mock_tar_info.size = 1024
        mock_tar_info.name = '0.vhd'
        image_cmd = vdi_handler.AddVhdToTar(mock_tar_file, mock_tar_info,
                                            'fake-vhdpipe-out')
        mock_tar_file.addfile.side_effect = IOError

        self.assertRaises(IOError, image_cmd.start)


# ==== os_xenapi/tests/plugins/__init__.py (empty) ====

# ==== os_xenapi/tests/plugins/test_xenhost.py ====
# Copyright (c) 2017 Citrix Systems, Inc
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
try: import json except ImportError: import simplejson as json import mock from mock import call from os_xenapi.tests.plugins import plugin_test import time class FakeXenAPIException(Exception): pass class XenHostRunCmdTestCase(plugin_test.PluginTestBase): def setUp(self): super(XenHostRunCmdTestCase, self).setUp() self.host = self.load_plugin("xenhost.py") self.pluginlib = self.load_plugin("dom0_pluginlib.py") def test_run_command(self): self.mock_patch_object(self.host.utils, 'run_command', 'fake_run_cmd_return') cmd_result = self.host._run_command('fake_command') self.assertEqual(cmd_result, 'fake_run_cmd_return') self.host.utils.run_command.assert_called_with( 'fake_command', cmd_input=None) def test_run_command_exception(self): side_effect = [self.host.utils.SubprocessException( 'fake_cmdline', 0, 'fake_out', 'fake_err')] self.mock_patch_object(self.host.utils, 'run_command', 'fake_run_cmd_return') self.host.utils.run_command.side_effect = side_effect self.assertRaises(self.pluginlib.PluginError, self.host._run_command, 'fake_command') self.host.utils.run_command.assert_called_with( 'fake_command', cmd_input=None) class VMOperationTestCase(plugin_test.PluginTestBase): def setUp(self): super(VMOperationTestCase, self).setUp() self.host = self.load_plugin("xenhost.py") self.pluginlib = self.load_plugin("dom0_pluginlib.py") self.mock_patch_object(self.host, '_run_command', 'fake_run_cmd_return') @mock.patch.object(time, 'sleep') def test_resume_compute(self, mock_sleep): self.mock_patch_object(self.session.xenapi.VM, 'start') self.host._resume_compute(self.session, 'fake_compute_ref', 'fake_compute_uuid') self.session.xenapi.VM.start.assert_called_with( 'fake_compute_ref', False, True) mock_sleep.assert_not_called() @mock.patch.object(time, 'sleep') def test_resume_compute_exception_compute_VM_restart(self, mock_sleep): side_effect_xenapi_failure = FakeXenAPIException self.mock_patch_object(self.session.xenapi.VM, 'start') self.host.XenAPI.Failure = 
FakeXenAPIException self.session.xenapi.VM.start.side_effect = \ side_effect_xenapi_failure self.host._resume_compute(self.session, 'fake_compute_ref', 'fake_compute_uuid') self.session.xenapi.VM.start.assert_called_with( 'fake_compute_ref', False, True) self.host._run_command.assert_called_with( ["xe", "vm-start", "uuid=%s" % 'fake_compute_uuid'] ) mock_sleep.assert_not_called() @mock.patch.object(time, 'sleep') def test_resume_compute_exception_wait_slave_available(self, mock_sleep): side_effect_xenapi_failure = FakeXenAPIException side_effect_plugin_error = [self.pluginlib.PluginError( "Wait for the slave to become available"), None] self.mock_patch_object(self.session.xenapi.VM, 'start') self.session.xenapi.VM.start.side_effect = \ side_effect_xenapi_failure self.host._run_command.side_effect = side_effect_plugin_error self.host.XenAPI.Failure = FakeXenAPIException expected = [call(["xe", "vm-start", "uuid=%s" % 'fake_compute_uuid']), call(["xe", "vm-start", "uuid=%s" % 'fake_compute_uuid'])] self.host._resume_compute(self.session, 'fake_compute_ref', 'fake_compute_uuid') self.session.xenapi.VM.start.assert_called_with( 'fake_compute_ref', False, True) self.assertEqual(expected, self.host._run_command.call_args_list) mock_sleep.assert_called_once() @mock.patch.object(time, 'sleep') def test_resume_compute_exception_unrecoverable(self, mock_sleep): fake_compute_ref = -1 side_effect_xenapi_failure = FakeXenAPIException side_effect_plugin_error = ( [self.pluginlib.PluginError] * self.host.DEFAULT_TRIES) self.mock_patch_object(self.session.xenapi.VM, 'start') self.session.xenapi.VM.start.side_effect = \ side_effect_xenapi_failure self.host.XenAPI.Failure = FakeXenAPIException self.host._run_command.side_effect = side_effect_plugin_error self.assertRaises(self.pluginlib.PluginError, self.host._resume_compute, self.session, fake_compute_ref, 'fake_compute_uuid') self.session.xenapi.VM.start.assert_called_with( -1, False, True) 
self.host._run_command.assert_called_with( ["xe", "vm-start", "uuid=%s" % 'fake_compute_uuid'] ) mock_sleep.assert_called() def test_set_host_enabled_no_enabled_key_in_arg_dict(self): temp_dict = {} self.assertRaises(self.pluginlib.PluginError, self.host.set_host_enabled, self.host, temp_dict) def test_set_host_enabled_unexpected_enabled_key(self): temp_dict = {} temp_dict.update({'enabled': 'unexpected_status'}) temp_dict.update({'host_uuid': 'fake_host_uuid'}) self.assertRaises(self.pluginlib.PluginError, self.host.set_host_enabled, self.host, temp_dict) def test_set_host_enabled_host_enable_disable_cmd_return_not_empty(self): temp_dict = {} temp_dict.update({'enabled': 'true'}) temp_dict.update({'host_uuid': 'fake_host_uuid'}) fake_run_command_return = 'not empty' self.host._run_command.return_value = fake_run_command_return self.assertRaises(self.pluginlib.PluginError, self.host.set_host_enabled, self.host, temp_dict) self.host._run_command.assert_called_once def test_set_host_enabled_request_host_enabled(self): temp_dict = {} side_effects = ['', 'any_value'] temp_dict.update({'enabled': 'true'}) temp_dict.update({'host_uuid': 'fake_host_uuid'}) expected = [call(['xe', 'host-enable', 'uuid=fake_host_uuid']), call(['xe', 'host-param-get', 'uuid=fake_host_uuid', 'param-name=enabled'])] self.host._run_command.side_effect = side_effects self.host.set_host_enabled(self.host, temp_dict) self.assertEqual(self.host._run_command.call_args_list, expected) def test_set_host_enabled_request_cmd_host_disable(self): temp_dict = {} side_effects = ['', 'any_value'] temp_dict.update({'enabled': 'false'}) temp_dict.update({'host_uuid': 'fake_host_uuid'}) expected = [call(["xe", "host-disable", "uuid=%s" % temp_dict['host_uuid']],), call(["xe", "host-param-get", "uuid=%s" % temp_dict['host_uuid'], "param-name=enabled"],)] self.host._run_command.side_effect = side_effects self.host.set_host_enabled(self.host, temp_dict) self.assertEqual(self.host._run_command.call_args_list, 
expected) def test_set_host_enabled_confirm_host_enabled(self): temp_dict = {} side_effects = ['', 'true'] temp_dict.update({'enabled': 'true'}) temp_dict.update({'host_uuid': 'fake_host_uuid'}) self.host._run_command.side_effect = side_effects result_status = self.host.set_host_enabled(self.host, temp_dict) self.assertEqual(result_status, '{"status": "enabled"}') def test_set_host_enabled_confirm_host_disabled(self): temp_dict = {} side_effects = ['', 'any_value'] temp_dict.update({'enabled': 'false'}) temp_dict.update({'host_uuid': 'fake_host_uuid'}) self.host._run_command.side_effect = side_effects result_status = self.host.set_host_enabled(self.host, temp_dict) self.assertEqual(result_status, '{"status": "disabled"}') class HostOptTestCase(plugin_test.PluginTestBase): def setUp(self): super(HostOptTestCase, self).setUp() self.host = self.load_plugin("xenhost.py") self.pluginlib = self.load_plugin("dom0_pluginlib.py") self.mock_patch_object(self.host, '_run_command', 'fake_run_cmd_return') # may be removed because the operation will be deprecated def test_power_action_disable_cmd_result_not_empty(self): temp_arg_dict = {'host_uuid': 'fake_host_uuid'} self.host._run_command.return_value = 'not_empty' expected_cmd_arg = ["xe", "host-disable", "uuid=%s" % 'fake_host_uuid'] self.assertRaises(self.pluginlib.PluginError, self.host._power_action, 'fake_action', temp_arg_dict) self.host._run_command.assert_called_with(expected_cmd_arg) # may be removed because the operation will be deprecated def test_power_action_shutdown_cmd_result_not_empty(self): side_effects = [None, 'not_empty'] temp_arg_dict = {'host_uuid': 'fake_host_uuid'} self.host._run_command.side_effect = side_effects expected_cmd_arg_list = [call(["xe", "host-disable", "uuid=%s" % 'fake_host_uuid']), call(["xe", "vm-shutdown", "--multiple", "resident-on=%s" % 'fake_host_uuid'])] self.assertRaises(self.pluginlib.PluginError, self.host._power_action, 'fake_action', temp_arg_dict) 
self.assertEqual(self.host._run_command.call_args_list, expected_cmd_arg_list) # may be removed because the operation will be deprecated def test_power_action_input_cmd_result_not_empty(self): side_effects = [None, None, 'not_empty'] temp_arg_dict = {'host_uuid': 'fake_host_uuid'} self.host._run_command.side_effect = side_effects cmds = {"reboot": "host-reboot", "startup": "host-power-on", "shutdown": "host-shutdown"} fake_action = 'reboot' # 'startup' and 'shutdown' should be the same expected_cmd_arg_list = [call(["xe", "host-disable", "uuid=%s" % 'fake_host_uuid']), call(["xe", "vm-shutdown", "--multiple", "resident-on=%s" % 'fake_host_uuid']), call(["xe", cmds[fake_action], "uuid=%s" % 'fake_host_uuid'])] self.assertRaises(self.pluginlib.PluginError, self.host._power_action, fake_action, temp_arg_dict) self.assertEqual(self.host._run_command.call_args_list, expected_cmd_arg_list) def test_power_action(self): temp_arg_dict = {'host_uuid': 'fake_host_uuid'} self.host._run_command.return_value = None cmds = {"reboot": "host-reboot", "startup": "host-power-on", "shutdown": "host-shutdown"} fake_action = 'reboot' # 'startup' and 'shutdown' should be the same expected_cmd_arg_list = [call(["xe", "host-disable", "uuid=%s" % 'fake_host_uuid']), call(["xe", "vm-shutdown", "--multiple", "resident-on=%s" % 'fake_host_uuid']), call(["xe", cmds[fake_action], "uuid=%s" % 'fake_host_uuid'])] expected_result = {"power_action": fake_action} action_result = self.host._power_action(fake_action, temp_arg_dict) self.assertEqual(self.host._run_command.call_args_list, expected_cmd_arg_list) self.assertEqual(action_result, expected_result) def test_host_reboot(self): fake_action = 'reboot' self.mock_patch_object(self.host, '_power_action', 'fake_action_result') self.host.host_reboot(self.host, 'fake_arg_dict') self.host._power_action.assert_called_with(fake_action, 'fake_arg_dict') def test_host_shutdown(self): fake_action = 'shutdown' self.mock_patch_object(self.host, '_power_action', 
'fake_action_result') self.host.host_shutdown(self.host, 'fake_arg_dict') self.host._power_action.assert_called_with(fake_action, 'fake_arg_dict') def test_host_start(self): fake_action = 'startup' self.mock_patch_object(self.host, '_power_action', 'fake_action_result') self.host.host_start(self.host, 'fake_arg_dict') self.host._power_action.assert_called_with(fake_action, 'fake_arg_dict') def test_host_join(self): temp_arg_dict = {'url': 'fake_url', 'user': 'fake_user', 'password': 'fake_password', 'master_addr': 'fake_master_addr', 'master_user': 'fake_master_user', 'master_pass': 'fake_master_pass', 'compute_uuid': 'fake_compute_uuid'} self.mock_patch_object(self.host, '_resume_compute') self.host.XenAPI = mock.Mock() self.host.XenAPI.Session = mock.Mock() self.host.host_join(self.host, temp_arg_dict) self.host.XenAPI.Session().login_with_password.assert_called_once() self.host.XenAPI.Session().xenapi.pool.join.assert_called_with( 'fake_master_addr', 'fake_master_user', 'fake_master_pass') self.host._resume_compute.assert_called_with( self.host.XenAPI.Session(), self.host.XenAPI.Session().xenapi.VM.get_by_uuid( 'fake_compute_uuid'), 'fake_compute_uuid') def test_host_join_force_join(self): temp_arg_dict = {'force': 'true', 'master_addr': 'fake_master_addr', 'master_user': 'fake_master_user', 'master_pass': 'fake_master_pass', 'compute_uuid': 'fake_compute_uuid'} self.mock_patch_object(self.host, '_resume_compute') self.host.XenAPI = mock.Mock() self.host.XenAPI.Session = mock.Mock() self.host.host_join(self.host, temp_arg_dict) self.host.XenAPI.Session().login_with_password.assert_called_once() self.host.XenAPI.Session().xenapi.pool.join_force.assert_called_with( 'fake_master_addr', 'fake_master_user', 'fake_master_pass') self.host._resume_compute.assert_called_with( self.host.XenAPI.Session(), self.host.XenAPI.Session().xenapi.VM.get_by_uuid( 'fake_compute_uuid'), 'fake_compute_uuid') def test_host_data(self): temp_arg_dict = {'host_uuid': 'fake_host_uuid'} 
fake_dict_after_cleanup = {'new_key': 'new_value'} fake_config_setting = {'config': 'fake_config_setting'} self.host._run_command.return_value = 'fake_resp' self.mock_patch_object(self.host, 'parse_response', 'fake_parsed_data') self.mock_patch_object(self.host, 'cleanup', fake_dict_after_cleanup) self.mock_patch_object(self.host, '_get_config_dict', fake_config_setting) expected_ret_dict = fake_dict_after_cleanup expected_ret_dict.update(fake_config_setting) return_host_data = self.host.host_data(self.host, temp_arg_dict) self.host._run_command.assert_called_with( ["xe", "host-param-list", "uuid=%s" % temp_arg_dict['host_uuid']] ) self.host.parse_response.assert_called_with('fake_resp') self.host.cleanup('fake_parsed_data') self.host._get_config_dict.assert_called_once() self.assertEqual(expected_ret_dict, json.loads(return_host_data)) def test_parse_response(self): fake_resp = 'fake_name ( fake_flag): fake_value' expected_parsed_resp = {'fake_name': 'fake_value'} result_data = self.host.parse_response(fake_resp) self.assertEqual(result_data, expected_parsed_resp) def test_parse_response_one_invalid_line(self): fake_resp = "(exeception line)\n \ fake_name ( fake_flag): fake_value" expected_parsed_resp = {'fake_name': 'fake_value'} result_data = self.host.parse_response(fake_resp) self.assertEqual(result_data, expected_parsed_resp) def test_host_uptime(self): self.host._run_command.return_value = 'fake_uptime' uptime_return = self.host.host_uptime(self.host, 'fake_arg_dict') self.assertEqual(uptime_return, '{"uptime": "fake_uptime"}') class ConfigOptTestCase(plugin_test.PluginTestBase): def setUp(self): super(ConfigOptTestCase, self).setUp() self.host = self.load_plugin("xenhost.py") self.pluginlib = self.load_plugin("dom0_pluginlib.py") self.mock_patch_object(self.host, '_run_command', 'fake_run_cmd_return') def test_get_config_no_config_key(self): temp_dict = {'params': '{"key": "fake_key"}'} fake_conf_dict = {} self.mock_patch_object(self.host, 
'_get_config_dict', fake_conf_dict) config_return = self.host.get_config(self.host, temp_dict) self.assertEqual(json.loads(config_return), "None") self.host._get_config_dict.assert_called_once() def test_get_config_json(self): temp_dict = {'params': '{"key": "fake_key"}'} fake_conf_dict = {'fake_key': 'fake_conf_key'} self.mock_patch_object(self.host, '_get_config_dict', fake_conf_dict) config_return = self.host.get_config(self.host, temp_dict) self.assertEqual(json.loads(config_return), 'fake_conf_key') self.host._get_config_dict.assert_called_once() def test_get_config_dict(self): temp_dict = {'params': {"key": "fake_key"}} fake_conf_dict = {'fake_key': 'fake_conf_key'} self.mock_patch_object(self.host, '_get_config_dict', fake_conf_dict) config_return = self.host.get_config(self.host, temp_dict) self.assertEqual(json.loads(config_return), 'fake_conf_key') self.host._get_config_dict.assert_called_once() def test_set_config_remove_none_key(self): temp_arg_dict = {'params': {"key": "fake_key", "value": None}} temp_conf = {'fake_key': 'fake_value'} self.mock_patch_object(self.host, '_get_config_dict', temp_conf) self.mock_patch_object(self.host, '_write_config_dict') self.host.set_config(self.host, temp_arg_dict) self.assertTrue("fake_key" not in temp_conf) self.host._get_config_dict.assert_called_once() self.host._write_config_dict.assert_called_with(temp_conf) def test_set_config_overwrite_key_value(self): temp_arg_dict = {'params': {"key": "fake_key", "value": "new_value"}} temp_conf = {'fake_key': 'fake_value'} self.mock_patch_object(self.host, '_get_config_dict', temp_conf) self.mock_patch_object(self.host, '_write_config_dict') self.host.set_config(self.host, temp_arg_dict) self.assertTrue('fake_key' in temp_conf) self.host._get_config_dict.assert_called_once() temp_conf.update({'fake_key': 'new_value'}) self.host._write_config_dict.assert_called_with(temp_conf) class NetworkTestCase(plugin_test.PluginTestBase): def setUp(self): super(NetworkTestCase, 
self).setUp() self.host = self.load_plugin("xenhost.py") self.pluginlib = self.load_plugin("dom0_pluginlib.py") self.mock_patch_object(self.host, '_run_command', 'fake_run_cmd_return') def test_ovs_add_patch_port(self): brige_name = 'fake_brige_name' port_name = 'fake_port_name' peer_port_name = 'fake_peer_port_name' side_effects = [brige_name, port_name, peer_port_name] self.mock_patch_object(self.pluginlib, 'exists') self.pluginlib.exists.side_effect = side_effects expected_cmd_args = ['ovs-vsctl', '--', '--if-exists', 'del-port', port_name, '--', 'add-port', brige_name, 'fake_port_name', '--', 'set', 'interface', 'fake_port_name', 'type=patch', 'options:peer=%s' % peer_port_name] expected_pluginlib_arg_list = [call('fake_args', 'bridge_name'), call('fake_args', 'port_name'), call('fake_args', 'peer_port_name')] self.host._ovs_add_patch_port('fake_args') self.host._run_command.assert_called_with(expected_cmd_args) self.assertEqual(self.pluginlib.exists.call_args_list, expected_pluginlib_arg_list) def test_ovs_del_port(self): bridge_name = 'fake_brige_name' port_name = 'fake_port_name' side_effects = [bridge_name, port_name] self.mock_patch_object(self.pluginlib, 'exists') self.pluginlib.exists.side_effect = side_effects expected_cmd_args = ['ovs-vsctl', '--', '--if-exists', 'del-port', bridge_name, port_name] expected_pluginlib_arg_list = [call('fake_args', 'bridge_name'), call('fake_args', 'port_name')] self.host._ovs_del_port('fake_args') self.host._run_command.assert_called_with(expected_cmd_args) self.assertEqual(self.pluginlib.exists.call_args_list, expected_pluginlib_arg_list) def test_ovs_del_br(self): bridge_name = 'fake_brige_name' self.mock_patch_object(self.pluginlib, 'exists', bridge_name) expected_cmd_args = ['ovs-vsctl', '--', '--if-exists', 'del-br', bridge_name] self.host._ovs_del_br('fake_args') self.pluginlib.exists.assert_called_with('fake_args', 'bridge_name') self.host._run_command.assert_called_with(expected_cmd_args) def 
test_ovs_set_if_external_id(self): interface = 'fake_interface' extneral_id = 'fake_extneral_id' value = 'fake_value' side_effects = [interface, extneral_id, value] self.mock_patch_object(self.pluginlib, 'exists') self.pluginlib.exists.side_effect = side_effects expected_cmd_args = ['ovs-vsctl', 'set', 'Interface', interface, 'external-ids:%s=%s' % (extneral_id, value)] expected_pluginlib_arg_list = [call('fake_args', 'interface'), call('fake_args', 'extneral_id'), call('fake_args', 'value')] self.host._ovs_set_if_external_id('fake_args') self.host._run_command.assert_called_with(expected_cmd_args) self.assertEqual(self.pluginlib.exists.call_args_list, expected_pluginlib_arg_list) def test_ovs_add_port(self): bridge_name = 'fake_brige_name' port_name = 'fake_port_name' side_effects = [bridge_name, port_name] self.mock_patch_object(self.pluginlib, 'exists') self.pluginlib.exists.side_effect = side_effects expected_cmd_args = ['ovs-vsctl', '--', '--if-exists', 'del-port', port_name, '--', 'add-port', bridge_name, port_name] expected_pluginlib_arg_list = [call('fake_args', 'bridge_name'), call('fake_args', 'port_name')] self.host._ovs_add_port('fake_args') self.host._run_command.assert_called_with(expected_cmd_args) self.assertEqual(self.pluginlib.exists.call_args_list, expected_pluginlib_arg_list) def test_ovs_create_port(self): bridge_name = 'fake_brige_name' port_name = 'fake_port_name' iface_id = 'fake_iface_id' mac = 'fake_mac' status = 'fake_status' side_effects = [bridge_name, port_name, iface_id, mac, status] self.mock_patch_object(self.pluginlib, 'exists') self.pluginlib.exists.side_effect = side_effects expected_cmd_args = ['ovs-vsctl', '--', '--if-exists', 'del-port', port_name, '--', 'add-port', bridge_name, port_name, '--', 'set', 'Interface', port_name, 'external_ids:iface-id=%s' % iface_id, 'external_ids:iface-status=%s' % status, 'external_ids:attached-mac=%s' % mac, 'external_ids:xs-vif-uuid=%s' % iface_id] expected_pluginlib_arg_list = 
[call('fake_args', 'bridge'), call('fake_args', 'port'), call('fake_args', 'iface-id'), call('fake_args', 'mac'), call('fake_args', 'status')] self.host._ovs_create_port('fake_args') self.pluginlib.exists.assert_called() self.host._run_command.assert_called_with(expected_cmd_args) self.assertEqual(self.pluginlib.exists.call_args_list, expected_pluginlib_arg_list) def test_ip_link_get_dev(self): device_name = 'fake_device_name' expected_cmd_args = ['ip', 'link', 'show', device_name] self.mock_patch_object(self.pluginlib, 'exists', device_name) self.host._ip_link_get_dev('fake_args') self.pluginlib.exists.assert_called_with('fake_args', 'device_name') self.host._run_command.assert_called_with(expected_cmd_args) def test_ip_link_del_dev(self): device_name = 'fake_device_name' expected_cmd_args = ['ip', 'link', 'delete', device_name] self.mock_patch_object(self.pluginlib, 'exists', 'fake_device_name') self.host._ip_link_del_dev('fake_args') self.pluginlib.exists.assert_called_with('fake_args', 'device_name') self.host._run_command.assert_called_with(expected_cmd_args) def test_ip_link_add_veth_pair(self): dev1_name = 'fake_brige_name' dev2_name = 'fake_port_name' side_effects = [dev1_name, dev2_name] self.mock_patch_object(self.pluginlib, 'exists') self.pluginlib.exists.side_effect = side_effects expected_cmd_args = ['ip', 'link', 'add', dev1_name, 'type', 'veth', 'peer', 'name', dev2_name] expected_pluginlib_arg_list = [call('fake_args', 'dev1_name'), call('fake_args', 'dev2_name')] self.host._ip_link_add_veth_pair('fake_args') self.host._run_command.assert_called_with(expected_cmd_args) self.assertEqual(self.pluginlib.exists.call_args_list, expected_pluginlib_arg_list) def test_ip_link_set_dev(self): device_name = 'fake_device_name' option = 'fake_option' side_effects = [device_name, option] self.mock_patch_object(self.pluginlib, 'exists') self.pluginlib.exists.side_effect = side_effects expected_cmd_args = ['ip', 'link', 'set', device_name, option] 
expected_pluginlib_arg_list = [call('fake_args', 'device_name'), call('fake_args', 'option')] self.host._ip_link_set_dev('fake_args') self.host._run_command.assert_called_with(expected_cmd_args) self.assertEqual(self.pluginlib.exists.call_args_list, expected_pluginlib_arg_list) def test_ip_link_set_promisc(self): device_name = 'fake_device_name' option = 'fake_option' side_effects = [device_name, option] self.mock_patch_object(self.pluginlib, 'exists') self.pluginlib.exists.side_effect = side_effects expected_cmd_args = ['ip', 'link', 'set', device_name, 'promisc', option] expected_pluginlib_arg_list = [call('fake_args', 'device_name'), call('fake_args', 'option')] self.host._ip_link_set_promisc('fake_args') self.host._run_command.assert_called_with(expected_cmd_args) self.assertEqual(self.pluginlib.exists.call_args_list, expected_pluginlib_arg_list) def test_brctl_add_br(self): bridge_name = 'fake_bridge_name' cmd_args = 'fake_option' side_effects = [bridge_name, cmd_args] self.pluginlib.exists.side_effect = side_effects self.mock_patch_object(self.pluginlib, 'exists', bridge_name) expected_cmd_args = ['brctl', 'addbr', bridge_name] self.host._brctl_add_br('fake_args') self.pluginlib.exists.assert_called_with('fake_args', 'bridge_name') self.host._run_command.assert_called_with(expected_cmd_args) def test_brctl_del_br(self): bridge_name = 'fake_bridge_name' self.mock_patch_object(self.pluginlib, 'exists', bridge_name) expected_cmd_args = ['brctl', 'delbr', bridge_name] self.host._brctl_del_br('fake_args') self.pluginlib.exists.assert_called_with('fake_args', 'bridge_name') self.host._run_command.assert_called_with(expected_cmd_args) def test_brctl_set_fd(self): bridge_name = 'fake_device_name' fd = 'fake_fd' side_effects = [bridge_name, fd] self.mock_patch_object(self.pluginlib, 'exists') self.pluginlib.exists.side_effect = side_effects expected_cmd_args = ['brctl', 'setfd', bridge_name, fd] expected_pluginlib_arg_list = [call('fake_args', 'bridge_name'), 
call('fake_args', 'fd')] self.host._brctl_set_fd('fake_args') self.host._run_command.assert_called_with(expected_cmd_args) self.assertEqual(self.pluginlib.exists.call_args_list, expected_pluginlib_arg_list) def test_brctl_set_stp(self): bridge_name = 'fake_device_name' option = 'fake_option' side_effects = [bridge_name, option] self.mock_patch_object(self.pluginlib, 'exists') self.pluginlib.exists.side_effect = side_effects expected_cmd_args = ['brctl', 'stp', bridge_name, option] expected_pluginlib_arg_list = [call('fake_args', 'bridge_name'), call('fake_args', 'option')] self.host._brctl_set_stp('fake_args') self.host._run_command.assert_called_with(expected_cmd_args) self.assertEqual(self.pluginlib.exists.call_args_list, expected_pluginlib_arg_list) def test_brctl_add_if(self): bridge_name = 'fake_device_name' if_name = 'fake_if_name' side_effects = [bridge_name, if_name] self.mock_patch_object(self.pluginlib, 'exists') self.pluginlib.exists.side_effect = side_effects expected_cmd_args = ['brctl', 'addif', bridge_name, if_name] expected_pluginlib_arg_list = [call('fake_args', 'bridge_name'), call('fake_args', 'interface_name')] self.host._brctl_add_if('fake_args') self.host._run_command.assert_called_with(expected_cmd_args) self.assertEqual(self.pluginlib.exists.call_args_list, expected_pluginlib_arg_list) def test_brctl_del_if(self): bridge_name = 'fake_device_name' if_name = 'fake_if_name' side_effects = [bridge_name, if_name] self.mock_patch_object(self.pluginlib, 'exists') self.pluginlib.exists.side_effect = side_effects expected_cmd_args = ['brctl', 'delif', bridge_name, if_name] expected_pluginlib_arg_list = [call('fake_args', 'bridge_name'), call('fake_args', 'interface_name')] self.host._brctl_del_if('fake_args') self.host._run_command.assert_called_with(expected_cmd_args) self.assertEqual(self.pluginlib.exists.call_args_list, expected_pluginlib_arg_list) @mock.patch.object(json, 'loads') def test_iptables_config(self, mock_json_loads): 
self.mock_patch_object(self.pluginlib, 'exists', 'fake_cmd_args') self.mock_patch_object(self.pluginlib, 'optional', 'fake_cmd_pro_input') self.host._run_command.return_value = 'fake_run_cmd_resule' mock_json_loads.return_value = ['iptables-save'] expected = json.dumps(dict(out='fake_run_cmd_resule', err='')) ret_str = self.host.iptables_config('fake_session', 'fake_args') self.pluginlib.exists.assert_called_once() self.pluginlib.optional.assert_called_once() mock_json_loads.assert_called_with('fake_cmd_args') self.assertEqual(ret_str, expected) self.host._run_command.assert_called_with( map(str, ['iptables-save']), 'fake_cmd_pro_input') @mock.patch.object(json, 'loads') def test_iptables_config_plugin_error(self, mock_json_loads): self.mock_patch_object(self.pluginlib, 'exists') self.mock_patch_object(self.pluginlib, 'optional') self.assertRaises(self.pluginlib.PluginError, self.host.iptables_config, 'fake_session', 'fake_args') self.pluginlib.exists.assert_called_once() self.pluginlib.optional.assert_called_once() mock_json_loads.assert_called_with(None) def test_network_config_invalid_cmd(self): fake_invalid_cmd = 0 self.mock_patch_object(self.pluginlib, 'exists', fake_invalid_cmd) self.assertRaises(self.pluginlib.PluginError, self.host.network_config, 'fake_session', 'fake_args') self.pluginlib.exists.assert_called_with('fake_args', 'cmd') def test_network_config_unexpected_cmd(self): fake_unexpected_cmd = 'fake_unknow_cmd' self.mock_patch_object(self.pluginlib, 'exists', fake_unexpected_cmd) self.assertRaises(self.pluginlib.PluginError, self.host.network_config, 'fake_session', 'fake_args') self.pluginlib.exists.assert_called_with('fake_args', 'cmd') def test_network_config(self): fake_valid_cmd = 'ovs_add_patch_port' side_effects = [fake_valid_cmd, 'fake_cmd_args'] self.mock_patch_object(self.pluginlib, 'exists') self.pluginlib.exists.side_effect = side_effects mock_func = self.mock_patch_object(self.host, '_ovs_add_patch_port') 
self.host.ALLOWED_NETWORK_CMDS['ovs_add_patch_port'] = mock_func self.host.network_config('fake_session', 'fake_args') self.pluginlib.exists.assert_called_with('fake_args', 'args') self.host._ovs_add_patch_port.assert_called_with('fake_cmd_args') class XenHostTestCase(plugin_test.PluginTestBase): def setUp(self): super(XenHostTestCase, self).setUp() self.host = self.load_plugin("xenhost.py") self.pluginlib = self.load_plugin("dom0_pluginlib.py") self.mock_patch_object(self.host, '_run_command', 'fake_run_cmd_return') def test_clean_up(self): fake_arg_dict = { 'enabled': 'enabled', 'memory-total': '0', 'memory-overhead': '1', 'memory-free': '2', 'memory-free-computed': '3', 'uuid': 'fake_uuid', 'name-label': 'fake_name-label', 'name-description': 'fake_name-description', 'hostname': 'fake_hostname', 'address': 'fake_address', 'other-config': 'config:fake_other-config_1; \ config:fake_other-config_2', 'capabilities': 'fake_cap_1; fake_cap_2', 'cpu_info': 'cpu_count:1; family:101; unknow:1' } expected_out = { 'enabled': 'enabled', 'host_memory': {'total': 0, 'overhead': 1, 'free': 2, 'free-computed': 3}, 'host_uuid': 'fake_uuid', 'host_name-label': 'fake_name-label', 'host_name-description': 'fake_name-description', 'host_hostname': 'fake_hostname', 'host_ip_address': 'fake_address', 'host_other-config': {'config': 'fake_other-config_1', 'config': 'fake_other-config_2'}, 'host_capabilities': ['fake_cap_1', 'fake_cap_2'], 'host_cpu_info': {'cpu_count': 1, 'family': 101, 'unknow': '1'} } out = self.host.cleanup(fake_arg_dict) self.assertEqual(out, expected_out) def test_clean_up_exception_invalid_memory_value(self): fake_arg_dict = { 'enabled': 'enabled', 'memory-total': 'invalid', 'memory-overhead': 'invalid', 'memory-free': 'invalid', 'memory-free-computed': 'invalid', 'uuid': 'fake_uuid', 'name-label': 'fake_name-label', 'name-description': 'fake_name-description', 'hostname': 'fake_hostname', 'address': 'fake_address', 'other-config': 'config:fake_other-config_1; \ 
config:fake_other-config_2', 'capabilities': 'fake_cap_1; fake_cap_2', 'cpu_info': 'cpu_count:1; family:101; unknow:1' } expected_out = { 'enabled': 'enabled', 'host_memory': {'total': None, 'overhead': None, 'free': None, 'free-computed': None}, 'host_uuid': 'fake_uuid', 'host_name-label': 'fake_name-label', 'host_name-description': 'fake_name-description', 'host_hostname': 'fake_hostname', 'host_ip_address': 'fake_address', 'host_other-config': {'config': 'fake_other-config_1', 'config': 'fake_other-config_2'}, 'host_capabilities': ['fake_cap_1', 'fake_cap_2'], 'host_cpu_info': {'cpu_count': 1, 'family': 101, 'unknow': '1'} } out = self.host.cleanup(fake_arg_dict) self.assertEqual(out, expected_out) def test_query_gc_running(self): fake_cmd_result = "Currently running: True" self.host._run_command.return_value = fake_cmd_result query_gc_result = self.host.query_gc('fake_session', 'fake_sr_uuid', 'fake_vdi_uuid') self.assertTrue(query_gc_result) self.host._run_command.assert_called_with( ["/opt/xensource/sm/cleanup.py", "-q", "-u", 'fake_sr_uuid']) def test_query_gc_not_running(self): fake_cmd_result = "Currently running: False" self.host._run_command.return_value = fake_cmd_result query_gc_result = self.host.query_gc('fake_session', 'fake_sr_uuid', 'fake_vdi_uuid') self.assertFalse(query_gc_result) self.host._run_command.assert_called_with( ["/opt/xensource/sm/cleanup.py", "-q", "-u", 'fake_sr_uuid']) def test_get_pci_device_details(self): self.host.get_pci_device_details('fake_session') self.host._run_command.assert_called_with( ["lspci", "-vmmnk"]) def test_get_pci_type_no_domain(self): fake_pci_device = '00:00.0' self.host._run_command.return_value = ['fake_pci_type', ] self.host.get_pci_type('fake_session', fake_pci_device) self.host._run_command.assert_called_with( ["ls", "/sys/bus/pci/devices/" + '0000:' + fake_pci_device + "/"]) def test_get_pci_type_physfn(self): fake_pci_device = '0000:00:00.0' self.host._run_command.return_value = ['physfn', ] output = 
self.host.get_pci_type('fake_session', fake_pci_device) self.host._run_command.assert_called_with( ["ls", "/sys/bus/pci/devices/" + fake_pci_device + "/"]) self.assertEqual(output, 'type-VF') def test_get_pci_type_virtfn(self): fake_pci_device = '0000:00:00.0' self.host._run_command.return_value = ['virtfn', ] output = self.host.get_pci_type('fake_session', fake_pci_device) self.host._run_command.assert_called_with( ["ls", "/sys/bus/pci/devices/" + fake_pci_device + "/"]) self.assertEqual(output, 'type-PF') def test_get_pci_type_PCI(self): fake_pci_device = '0000:00:00.0' self.host._run_command.return_value = ['other', ] output = self.host.get_pci_type('fake_session', fake_pci_device) self.host._run_command.assert_called_with( ["ls", "/sys/bus/pci/devices/" + fake_pci_device + "/"]) self.assertEqual(output, 'type-PCI') os-xenapi-0.3.1/os_xenapi/tests/plugins/test_dom0_plugin_version.py0000664000175000017500000000217213160424533026752 0ustar jenkinsjenkins00000000000000# Copyright (c) 2016 Citrix Systems # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from os_xenapi.tests.plugins import plugin_test


class Dom0PluginVersion(plugin_test.PluginTestBase):
    def setUp(self):
        super(Dom0PluginVersion, self).setUp()
        self.dom0_plugin_version = self.load_plugin('dom0_plugin_version.py')

    def test_dom0_plugin_version(self):
        session = 'fake_session'
        expected_value = self.dom0_plugin_version.PLUGIN_VERSION
        return_value = self.dom0_plugin_version.get_version(session)
        self.assertEqual(expected_value, return_value)

os-xenapi-0.3.1/os_xenapi/tests/plugins/plugin_test.py

# Copyright (c) 2016 Citrix Systems
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import imp
import os
import sys

import mock

from os_xenapi.client import session
from os_xenapi.tests import base

# Both XenAPI and XenAPIPlugin may not exist
# in the unit test environment.
sys.modules['XenAPI'] = mock.Mock()
sys.modules['XenAPIPlugin'] = mock.Mock()


class PluginTestBase(base.TestCase):
    def setUp(self):
        super(PluginTestBase, self).setUp()
        self.session = mock.Mock()
        session.apply_session_helpers(self.session)

    def mock_patch_object(self, target, attribute, return_val=None):
        # Utility function to mock an object's attribute.
        patcher = mock.patch.object(target, attribute,
                                    return_value=return_val)
        mock_one = patcher.start()
        self.addCleanup(patcher.stop)
        return mock_one

    def _get_plugin_path(self):
        current_path = os.path.realpath(__file__)
        rel_path = os.path.join(current_path,
                                "../../../dom0/etc/xapi.d/plugins")
        plugin_path = os.path.abspath(rel_path)
        return plugin_path

    def load_plugin(self, file_name):
        # XAPI plugins run in a py24 environment and may not be compatible
        # with py34 syntax. To prevent the unit test runner from scanning
        # the source files under a py34 environment, the plugins are
        # imported with this function at run time.
        plugin_path = self._get_plugin_path()
        # Add the plugin path to the module search path.
        if plugin_path not in sys.path:
            sys.path.append(plugin_path)
        # Be sure not to create .pyc files next to the plugins.
        sys.dont_write_bytecode = True
        name = file_name.split('.')[0]
        path = os.path.join(plugin_path, file_name)
        return imp.load_source(name, path)

os-xenapi-0.3.1/os_xenapi/tests/plugins/test_agent.py

# Copyright (c) 2017 Citrix Systems, Inc
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the # License for the specific language governing permissions and limitations # under the License. import base64 import mock from os_xenapi.tests.plugins import plugin_test import time try: import json except ImportError: import simplejson as json # global variable definition for fake arg FAKE_ARG_DICT = {'id': 'fake_id', 'pub': 'fake_pub', 'enc_pass': 'fake_enc_pass', 'dom_id': 'fake_dom_id', 'url': 'fake_url', 'b64_path': 'fake_b64_path=', 'b64_contents': 'fake_b64_contents=', 'md5sum': 'fake_md5sum'} class FakeTimeoutException(Exception): def __init__(self, details): self.details = details class AgentTestCase(plugin_test.PluginTestBase): def setUp(self): super(AgentTestCase, self).setUp() self.agent = self.load_plugin("agent.py") self.mock_patch_object(self.agent, '_wait_for_agent', "fake_wait_agent_return") self.mock_patch_object(self.agent.xenstore, 'write_record', 'fake_write_recode_return') def test_version(self): tmp_arg_dict = FAKE_ARG_DICT tmp_arg_dict["value"] = json.dumps({"name": "version", "value": "agent"}) request_id = tmp_arg_dict["id"] tmp_arg_dict["path"] = "data/host/%s" % request_id self.agent.version(self.agent, FAKE_ARG_DICT) self.agent._wait_for_agent.assert_called_once() self.agent.xenstore.write_record.assert_called_with(self.agent, tmp_arg_dict) def test_version_timout_exception(self): tmp_arg_dict = FAKE_ARG_DICT tmp_arg_dict["value"] = json.dumps({"name": "version", "value": "agent"}) request_id = tmp_arg_dict["id"] tmp_arg_dict["path"] = "data/host/%s" % request_id side_effects = [FakeTimeoutException('TIME_OUT')] self.agent.PluginError = FakeTimeoutException self.agent._wait_for_agent.side_effect = side_effects self.assertRaises(self.agent.PluginError, self.agent.version, self.agent, FAKE_ARG_DICT) self.agent._wait_for_agent.assert_called_once() self.agent.xenstore.write_record.assert_called_with(self.agent, tmp_arg_dict) def test_key_init_ok(self): tmp_arg_dict = FAKE_ARG_DICT pub = tmp_arg_dict["pub"] tmp_arg_dict["value"] = 
json.dumps({"name": "keyinit", "value": pub}) request_id = tmp_arg_dict["id"] tmp_arg_dict["path"] = "data/host/%s" % request_id self.agent.key_init(self.agent, FAKE_ARG_DICT) self.agent._wait_for_agent.assert_called_once() self.agent.xenstore.write_record.assert_called_with(self.agent, tmp_arg_dict) def test_key_init_timout_exception(self): tmp_arg_dict = FAKE_ARG_DICT pub = tmp_arg_dict["pub"] tmp_arg_dict["value"] = json.dumps({"name": "keyinit", "value": pub}) request_id = tmp_arg_dict["id"] tmp_arg_dict["path"] = "data/host/%s" % request_id side_effects = [FakeTimeoutException('TIME_OUT')] self.agent.PluginError = FakeTimeoutException self.agent._wait_for_agent.side_effect = side_effects self.assertRaises(self.agent.PluginError, self.agent.key_init, self.agent, FAKE_ARG_DICT) self.agent._wait_for_agent.assert_called_once() self.agent.xenstore.write_record.assert_called_with(self.agent, tmp_arg_dict) def test_password_ok(self): tmp_arg_dict = FAKE_ARG_DICT enc_pass = tmp_arg_dict["enc_pass"] tmp_arg_dict["value"] = json.dumps({"name": "password", "value": enc_pass}) request_id = tmp_arg_dict["id"] tmp_arg_dict["path"] = "data/host/%s" % request_id self.agent.password(self.agent, FAKE_ARG_DICT) self.agent._wait_for_agent.assert_called_once() self.agent.xenstore.write_record.assert_called_with(self.agent, tmp_arg_dict) def test_password_timout_exception(self): tmp_arg_dict = FAKE_ARG_DICT enc_pass = tmp_arg_dict["enc_pass"] tmp_arg_dict["value"] = json.dumps({"name": "password", "value": enc_pass}) request_id = tmp_arg_dict["id"] tmp_arg_dict["path"] = "data/host/%s" % request_id side_effects = [FakeTimeoutException('TIME_OUT')] self.agent.PluginError = FakeTimeoutException self.agent._wait_for_agent.side_effect = side_effects self.assertRaises(self.agent.PluginError, self.agent.password, self.agent, FAKE_ARG_DICT) self.agent._wait_for_agent.assert_called_once() self.agent.xenstore.write_record.assert_called_with(self.agent, tmp_arg_dict) def 
test_reset_network_ok(self): tmp_arg_dict = FAKE_ARG_DICT tmp_arg_dict['value'] = json.dumps({'name': 'resetnetwork', 'value': ''}) request_id = tmp_arg_dict['id'] tmp_arg_dict['path'] = "data/host/%s" % request_id self.agent.resetnetwork(self.agent, FAKE_ARG_DICT) self.agent._wait_for_agent.assert_called_once() self.agent.xenstore.write_record.assert_called_with(self.agent, tmp_arg_dict) def test_reset_network_timout_exception(self): tmp_arg_dict = FAKE_ARG_DICT tmp_arg_dict['value'] = json.dumps({'name': 'resetnetwork', 'value': ''}) request_id = tmp_arg_dict['id'] tmp_arg_dict['path'] = "data/host/%s" % request_id side_effects = [FakeTimeoutException('TIME_OUT')] self.agent.PluginError = FakeTimeoutException self.agent._wait_for_agent.side_effect = side_effects self.assertRaises(self.agent.PluginError, self.agent.resetnetwork, self.agent, FAKE_ARG_DICT) self.agent._wait_for_agent.assert_called_once() self.agent.xenstore.write_record.assert_called_with(self.agent, tmp_arg_dict) def test_inject_file_with_new_agent(self): tmp_arg_dict = FAKE_ARG_DICT request_id = tmp_arg_dict["id"] b64_path = tmp_arg_dict["b64_path"] b64_file = tmp_arg_dict["b64_contents"] self.mock_patch_object(self.agent, '_get_agent_features', 'file_inject') tmp_arg_dict["value"] = json.dumps({"name": "file_inject", "value": {"b64_path": b64_path, "b64_file": b64_file}}) tmp_arg_dict["path"] = "data/host/%s" % request_id self.agent.inject_file(self.agent, FAKE_ARG_DICT) self.agent._wait_for_agent.assert_called_once() self.agent.xenstore.write_record.assert_called_with(self.agent, tmp_arg_dict) self.agent._get_agent_features.assert_called_once() def test_inject_file_with_old_agent(self): tmp_arg_dict = FAKE_ARG_DICT request_id = tmp_arg_dict["id"] b64_path = tmp_arg_dict["b64_path"] b64_file = tmp_arg_dict["b64_contents"] raw_path = base64.b64decode(b64_path) raw_file = base64.b64decode(b64_file) new_b64 = base64.b64encode("%s,%s" % (raw_path, raw_file)) self.mock_patch_object(self.agent, 
'_get_agent_features', 'injectfile') tmp_arg_dict["value"] = json.dumps({"name": "injectfile", "value": new_b64}) tmp_arg_dict["path"] = "data/host/%s" % request_id self.agent.inject_file(self.agent, FAKE_ARG_DICT) self.agent._wait_for_agent.assert_called_once() self.agent.xenstore.write_record.assert_called_with(self.agent, tmp_arg_dict) self.agent._get_agent_features.assert_called_once() def test_inject_file_NotImp_exception(self): self.mock_patch_object(self.agent, '_get_agent_features', 'fake_not_imp_exp') self.assertRaises(NotImplementedError, self.agent.inject_file, self.agent, FAKE_ARG_DICT) self.agent._get_agent_features.assert_called_once() def test_inject_file_Timeout_exception(self): tmp_arg_dict = FAKE_ARG_DICT request_id = tmp_arg_dict["id"] b64_path = tmp_arg_dict["b64_path"] b64_file = tmp_arg_dict["b64_contents"] tmp_arg_dict["value"] = json.dumps({"name": "file_inject", "value": {"b64_path": b64_path, "b64_file": b64_file}}) tmp_arg_dict["path"] = "data/host/%s" % request_id self.mock_patch_object(self.agent, '_get_agent_features', 'file_inject') side_effects = [FakeTimeoutException('TIME_OUT')] self.agent.PluginError = FakeTimeoutException self.agent._wait_for_agent.side_effect = side_effects self.assertRaises(self.agent.PluginError, self.agent.inject_file, self.agent, FAKE_ARG_DICT) self.agent._wait_for_agent.assert_called_once() self.agent.xenstore.write_record.assert_called_with(self.agent, tmp_arg_dict) def test_agent_update_ok(self): tmp_arg_dict = FAKE_ARG_DICT request_id = tmp_arg_dict["id"] url = tmp_arg_dict["url"] md5sum = tmp_arg_dict["md5sum"] tmp_arg_dict["value"] = json.dumps({"name": "agentupdate", "value": "%s,%s" % (url, md5sum)}) tmp_arg_dict["path"] = "data/host/%s" % request_id self.agent.agent_update(self.agent, FAKE_ARG_DICT) self.agent._wait_for_agent.assert_called_once() self.agent.xenstore.write_record.assert_called_with(self.agent, tmp_arg_dict) def test_agent_update_timout_exception(self): tmp_arg_dict = FAKE_ARG_DICT 
request_id = tmp_arg_dict["id"] url = tmp_arg_dict["url"] md5sum = tmp_arg_dict["md5sum"] tmp_arg_dict["value"] = json.dumps({"name": "agentupdate", "value": "%s,%s" % (url, md5sum)}) tmp_arg_dict["path"] = "data/host/%s" % request_id side_effects = [FakeTimeoutException('TIME_OUT')] self.agent.PluginError = FakeTimeoutException self.agent._wait_for_agent.side_effect = side_effects self.assertRaises(self.agent.PluginError, self.agent.agent_update, self.agent, FAKE_ARG_DICT) self.agent._wait_for_agent.assert_called_once() self.agent.xenstore.write_record.assert_called_once() def test_get_agent_features_returncode_0(self): self.mock_patch_object(self.agent.json, 'loads', {'returncode': 0}) features_ret = self.agent._get_agent_features(self.agent, FAKE_ARG_DICT) self.assertFalse(bool(features_ret)) self.agent._wait_for_agent.assert_called_once() self.agent.xenstore.write_record.assert_called_once() def test_get_agent_features_returncode_not_0(self): self.mock_patch_object(self.agent, '_wait_for_agent', 'fake_wait_agent_return') self.mock_patch_object(self.agent.json, 'loads', {'returncode': 'fake_return_code', 'message': 'fake_message'}) features_ret = self.agent._get_agent_features(self.agent, FAKE_ARG_DICT) self.assertTrue(bool(features_ret)) self.agent._wait_for_agent.assert_called_once() self.agent.xenstore.write_record.assert_called_once() def test_get_agent_features_timout_exception(self): side_effects = [FakeTimeoutException('TIME_OUT')] self.agent.PluginError = FakeTimeoutException self.agent._wait_for_agent.side_effect = side_effects self.assertRaises(self.agent.PluginError, self.agent._get_agent_features, self.agent, FAKE_ARG_DICT) self.agent._wait_for_agent.assert_called_once() self.agent.xenstore.write_record.assert_called_once() class WaitForAgentTestCase(plugin_test.PluginTestBase): def setUp(self): super(WaitForAgentTestCase, self).setUp() self.agent = self.load_plugin("agent.py") def test_wait_for_agent_ok(self): tmp_arg_dict = FAKE_ARG_DICT
tmp_arg_dict["path"] = "data/guest/%s" % 'fake_id' tmp_arg_dict["ignore_missing_path"] = True self.mock_patch_object(self.agent.xenstore, 'read_record', 'fake_read_record') ret_str = self.agent._wait_for_agent(self.agent, 'fake_id', FAKE_ARG_DICT, self.agent.DEFAULT_TIMEOUT) self.agent.xenstore.read_record.assert_called_with(self.agent, tmp_arg_dict) self.assertEqual(ret_str, 'fake_read_record') @mock.patch.object(time, 'sleep') def test_wait_for_agent_reboot_detected_exception(self, mock_sleep): tmp_arg_dict = FAKE_ARG_DICT tmp_arg_dict["path"] = "data/guest/%s" % 'fake_id' tmp_arg_dict["ignore_missing_path"] = True self.mock_patch_object(self.agent.xenstore, 'read_record', '"None"') self.mock_patch_object(self.agent.xenstore, 'record_exists', False) self.mock_patch_object(self.agent.xenstore, 'delete_record', 'fake_del_record') self.assertRaises(self.agent.RebootDetectedError, self.agent._wait_for_agent, self.agent, 'fake_id', FAKE_ARG_DICT, self.agent.DEFAULT_TIMEOUT) self.agent.xenstore.read_record.assert_called_with(self.agent, tmp_arg_dict) exists_args = { "dom_id": tmp_arg_dict["dom_id"], "path": "name", } self.agent.xenstore.record_exists.assert_called_with(exists_args) tmp_arg_dict["path"] = "data/host/%s" % 'fake_id' self.agent.xenstore.delete_record.assert_called_with(self.agent, tmp_arg_dict) @mock.patch.object(time, 'sleep') @mock.patch.object(time, 'time') def test_wait_for_agent_timeout_exception(self, mock_time, mock_sleep): tmp_arg_dict = FAKE_ARG_DICT tmp_arg_dict["path"] = "data/guest/%s" % 'fake_id' tmp_arg_dict["ignore_missing_path"] = True self.mock_patch_object(self.agent.xenstore, 'read_record', '"None"') self.mock_patch_object(self.agent.xenstore, 'record_exists', True) self.mock_patch_object(self.agent.xenstore, 'delete_record', 'fake_del_record') mock_time.side_effect = list(range(self.agent.DEFAULT_TIMEOUT + 1)) self.assertRaises(self.agent.TimeoutError, self.agent._wait_for_agent, self.agent, 'fake_id', FAKE_ARG_DICT, 
self.agent.DEFAULT_TIMEOUT) self.agent.xenstore.read_record.assert_called_with(self.agent, tmp_arg_dict) exists_args = { "dom_id": tmp_arg_dict["dom_id"], "path": "name", } self.agent.xenstore.record_exists.assert_called_with(exists_args) tmp_arg_dict["path"] = "data/host/%s" % 'fake_id' self.agent.xenstore.delete_record.assert_called_with(self.agent, tmp_arg_dict) self.assertEqual(self.agent.DEFAULT_TIMEOUT - 1, self.agent.xenstore.read_record.call_count) os-xenapi-0.3.1/os_xenapi/tests/plugins/test_partition_utils.py0000664000175000017500000001532513160424533026225 0ustar jenkinsjenkins00000000000000# Copyright (c) 2016 Citrix Systems # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
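The agent tests above drive `_wait_for_agent` by mocking `time.time` and `time.sleep`: the helper polls xenstore until the guest agent answers or a deadline passes. As a hedged, self-contained sketch of that polling pattern (all names here are stand-ins, not the plugin's actual helpers):

```python
import time


class AgentTimeout(Exception):
    """Stand-in for the plugin's timeout error."""


def wait_for(read_record, timeout=30, interval=0.5,
             clock=time.time, sleep=time.sleep):
    # Poll read_record() until it returns something other than the
    # xenstore placeholder '"None"', or the deadline passes.
    deadline = clock() + timeout
    while clock() < deadline:
        result = read_record()
        if result != '"None"':
            return result
        sleep(interval)
    raise AgentTimeout('agent did not respond within %s seconds' % timeout)


# Drive it with canned responses, the way the tests drive the xenstore mocks:
responses = iter(['"None"', '"None"', '{"returncode": 0}'])
result = wait_for(lambda: next(responses), sleep=lambda _: None)
```

Injecting `clock` and `sleep` as parameters is what makes the deterministic `mock.patch.object(time, ...)`-style testing seen above possible without real delays.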
import mock from os_xenapi.client import exception from os_xenapi.tests.plugins import plugin_test class PartitionUtils(plugin_test.PluginTestBase): def setUp(self): super(PartitionUtils, self).setUp() self.pluginlib = self.load_plugin("dom0_pluginlib.py") # Prevent any logging to syslog self.mock_patch_object(self.pluginlib, 'configure_logging') self.partition_utils = self.load_plugin("partition_utils.py") def test_wait_for_dev_ok(self): mock_sleep = self.mock_patch_object(self.partition_utils.time, 'sleep') mock_exists = self.mock_patch_object(self.partition_utils.os.path, 'exists') mock_exists.side_effect = [False, True] ret = self.partition_utils.wait_for_dev('session', '/fake', 2) self.assertEqual(1, mock_sleep.call_count) self.assertEqual(ret, "/fake") def test_wait_for_dev_timeout(self): mock_sleep = self.mock_patch_object(self.partition_utils.time, 'sleep') mock_exists = self.mock_patch_object(self.partition_utils.os.path, 'exists') mock_exists.side_effect = [False, False, True] ret = self.partition_utils.wait_for_dev('session', '/fake', 2) self.assertEqual(2, mock_sleep.call_count) self.assertEqual(ret, "") def test_mkfs_removes_partitions_ok(self): mock_run = self.mock_patch_object(self.partition_utils.utils, 'run_command') mock__mkfs = self.mock_patch_object(self.partition_utils, '_mkfs') self.partition_utils.mkfs('session', 'fakedev', '1', 'ext3', 'label') mock__mkfs.assert_called_with('ext3', '/dev/mapper/fakedevp1', 'label') expected_calls = [mock.call(['kpartx', '-avspp', '/dev/fakedev'])] expected_calls.append(mock.call(['kpartx', '-dvspp', '/dev/fakedev'])) mock_run.assert_has_calls(expected_calls) def test_mkfs_removes_partitions_exc(self): mock_run = self.mock_patch_object(self.partition_utils.utils, 'run_command') mock__mkfs = self.mock_patch_object(self.partition_utils, '_mkfs') mock__mkfs.side_effect = exception.OsXenApiException( message="partition failed") self.assertRaises(exception.OsXenApiException, self.partition_utils.mkfs, 'session', 
'fakedev', '1', 'ext3', 'label') expected_calls = [mock.call(['kpartx', '-avspp', '/dev/fakedev'])] expected_calls.append(mock.call(['kpartx', '-dvspp', '/dev/fakedev'])) mock_run.assert_has_calls(expected_calls) def test_mkfs_ext3_no_label(self): mock_run = self.mock_patch_object(self.partition_utils.utils, 'run_command') self.partition_utils._mkfs('ext3', '/dev/sda1', None) mock_run.assert_called_with(['mkfs', '-t', 'ext3', '-F', '/dev/sda1']) def test_mkfs_ext3(self): mock_run = self.mock_patch_object(self.partition_utils.utils, 'run_command') self.partition_utils._mkfs('ext3', '/dev/sda1', 'label') mock_run.assert_called_with(['mkfs', '-t', 'ext3', '-F', '-L', 'label', '/dev/sda1']) def test_mkfs_swap(self): mock_run = self.mock_patch_object(self.partition_utils.utils, 'run_command') self.partition_utils._mkfs('swap', '/dev/sda1', 'ignored') mock_run.assert_called_with(['mkswap', '/dev/sda1']) def test_make_partition_sfdisk_v213(self): mock_run = self.mock_patch_object(self.partition_utils.utils, 'run_command') mock_get_version = self.mock_patch_object( self.partition_utils, '_get_sfdisk_version') mock_get_version.return_value = '2.13' self.partition_utils.make_partition('session', 'dev', 'start', '-') mock_get_version.assert_called_with() mock_run.assert_called_with(['sfdisk', '-uS', '/dev/dev'], 'start,;\n') def test_make_partition_sfdisk_v223(self): mock_run = self.mock_patch_object(self.partition_utils.utils, 'run_command') mock_get_version = self.mock_patch_object( self.partition_utils, '_get_sfdisk_version') mock_get_version.return_value = '2.23' self.partition_utils.make_partition('session', 'dev', 'start', '-') mock_get_version.assert_called_with() mock_run.assert_called_with(['sfdisk', '--force', '-uS', '/dev/dev'], 'start,;\n') def test_make_partition_sfdisk_v226(self): mock_run = self.mock_patch_object(self.partition_utils.utils, 'run_command') mock_get_version = self.mock_patch_object( self.partition_utils, '_get_sfdisk_version') 
mock_get_version.return_value = '2.26' self.partition_utils.make_partition('session', 'dev', 'start', '-') mock_get_version.assert_called_with() mock_run.assert_called_with(['sfdisk', '-uS', '/dev/dev'], 'start,;\n') def test_get_sfdisk_version_213pre7(self): mock_run = self.mock_patch_object( self.partition_utils.utils, 'run_command') mock_run.return_value = 'sfdisk (util-linux 2.13-pre7)' version = self.partition_utils._get_sfdisk_version() mock_run.assert_called_with(['/sbin/sfdisk', '-v']) self.assertEqual(version, '2.13') def test_get_sfdisk_version_223(self): mock_run = self.mock_patch_object( self.partition_utils.utils, 'run_command') mock_run.return_value = 'sfdisk from util-linux 2.23.2\n' version = self.partition_utils._get_sfdisk_version() mock_run.assert_called_with(['/sbin/sfdisk', '-v']) self.assertEqual(version, '2.23') os-xenapi-0.3.1/os_xenapi/tests/plugins/test_glance.py0000664000175000017500000013040613160424533024223 0ustar jenkinsjenkins00000000000000# Copyright (c) 2017 Citrix Systems # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
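The `_get_sfdisk_version` tests above feed banners such as `'sfdisk (util-linux 2.13-pre7)'` and `'sfdisk from util-linux 2.23.2'` and expect `'2.13'` and `'2.23'` back. A hedged sketch of one way to extract `major.minor` from either banner shape (the real helper may be implemented differently):

```python
import re


def sfdisk_major_minor(banner):
    # Grab the first "digits.digits" pair from an `sfdisk -v` banner;
    # suffixes like "-pre7" or a trailing patch level are ignored.
    match = re.search(r'(\d+)\.(\d+)', banner)
    if match is None:
        return None
    return '%s.%s' % (match.group(1), match.group(2))
```

Truncating to major.minor matters because the tests branch on it: 2.13 and 2.26+ take plain `sfdisk -uS`, while the 2.23 line needs `--force`.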
import mock import sys try: import httplib import urllib2 from urllib2 import HTTPError from urllib2 import URLError from urlparse import urlparse except ImportError: # make py3.x happy: it's needed for script parsing, although this test # is excluded from py3.x testing import http.client as httplib from urllib.error import HTTPError from urllib.error import URLError from urllib.parse import urlparse import urllib.request as urllib2 import json from os_xenapi.tests.plugins import plugin_test class FakeXenAPIException(Exception): pass class Fake_HTTP_Request_Error(Exception): pass class GlanceTestCase(plugin_test.PluginTestBase): def setUp(self): super(GlanceTestCase, self).setUp() # md5 is deprecated in py2.7 and forward; sys.modules['md5'] = mock.Mock() self.glance = self.load_plugin("glance.py") @mock.patch.object(httplib, 'HTTPSConnection') def test_create_connection_https(self, mock_HTTPConn): fake_scheme = 'https' fake_netloc = 'fake_netloc' fake_https_return = mock.Mock() mock_HTTPConn.return_value = fake_https_return fake_create_Conn_return = self.glance._create_connection( fake_scheme, fake_netloc) mock_HTTPConn.assert_called_with(fake_netloc) mock_HTTPConn.return_value.connect.assert_called_once() self.assertEqual(fake_https_return, fake_create_Conn_return) @mock.patch.object(httplib, 'HTTPConnection') def test_create_connection_http(self, mock_HTTPConn): fake_scheme = 'http' fake_netloc = 'fake_netloc' fake_https_return = mock.Mock() mock_HTTPConn.return_value = fake_https_return fake_create_Conn_return = self.glance._create_connection( fake_scheme, fake_netloc) mock_HTTPConn.assert_called_with(fake_netloc) mock_HTTPConn.return_value.connect.assert_called_once() self.assertEqual(fake_https_return, fake_create_Conn_return) @mock.patch.object(urllib2, 'urlopen') def test_download_and_verify_ok(self, mock_urlopen): mock_extract_tarball = self.mock_patch_object( self.glance.utils, 'extract_tarball') mock_md5 = mock.Mock() mock_md5.hexdigest.return_value = 
'expect_cksum' mock_md5_new = self.mock_patch_object( self.glance.md5, 'new', mock_md5) mock_info = mock.Mock() mock_info.getheader.return_value = 'expect_cksum' mock_urlopen.return_value.info.return_value = mock_info fake_request = urllib2.Request('http://fakeurl.com') self.glance._download_tarball_and_verify( fake_request, 'fake_staging_path') mock_urlopen.assert_called_with(fake_request) mock_extract_tarball.assert_called_once() mock_md5_new.assert_called_once() mock_info.getheader.assert_called_once() mock_md5_new.return_value.hexdigest.assert_called_once() @mock.patch.object(urllib2, 'urlopen') def test_download_ok_extract_failed(self, mock_urlopen): mock_extract_tarball = self.mock_patch_object( self.glance.utils, 'extract_tarball') fake_retcode = 0 mock_extract_tarball.side_effect = \ self.glance.utils.SubprocessException('fake_cmd', fake_retcode, 'fake_out', 'fake_stderr') mock_md5 = mock.Mock() mock_md5.hexdigest.return_value = 'unexpect_cksum' mock_md5_new = self.mock_patch_object( self.glance.md5, 'new', mock_md5) mock_info = mock.Mock() mock_info.getheader.return_value = 'expect_cksum' mock_urlopen.return_value.info.return_value = mock_info fake_request = urllib2.Request('http://fakeurl.com') self.assertRaises(self.glance.RetryableError, self.glance._download_tarball_and_verify, fake_request, 'fake_staging_path' ) mock_urlopen.assert_called_with(fake_request) mock_extract_tarball.assert_called_once() mock_md5_new.assert_called_once() mock_info.getheader.assert_not_called() mock_md5_new.hexdigest.assert_not_called() @mock.patch.object(urllib2, 'urlopen') def test_download_ok_verify_failed(self, mock_urlopen): mock_extract_tarball = self.mock_patch_object( self.glance.utils, 'extract_tarball') mock_md5 = mock.Mock() mock_md5.hexdigest.return_value = 'unexpect_cksum' mock_md5_new = self.mock_patch_object( self.glance.md5, 'new', mock_md5) mock_info = mock.Mock() mock_info.getheader.return_value = 'expect_cksum' mock_urlopen.return_value.info.return_value = 
mock_info fake_request = urllib2.Request('http://fakeurl.com') self.assertRaises(self.glance.RetryableError, self.glance._download_tarball_and_verify, fake_request, 'fake_staging_path' ) mock_urlopen.assert_called_with(fake_request) mock_extract_tarball.assert_called_once() mock_md5_new.assert_called_once() mock_md5_new.return_value.hexdigest.assert_called_once() @mock.patch.object(urllib2, 'urlopen') def test_download_failed_HTTPError(self, mock_urlopen): mock_urlopen.side_effect = HTTPError( None, None, None, None, None) fake_request = urllib2.Request('http://fakeurl.com') self.assertRaises( self.glance.RetryableError, self.glance._download_tarball_and_verify, fake_request, 'fake_staging_path') @mock.patch.object(urllib2, 'urlopen') def test_download_failed_URLError(self, mock_urlopen): mock_urlopen.side_effect = URLError(None) fake_request = urllib2.Request('http://fakeurl.com') self.assertRaises( self.glance.RetryableError, self.glance._download_tarball_and_verify, fake_request, 'fake_staging_path') @mock.patch.object(urllib2, 'urlopen') def test_download_failed_HTTPException(self, mock_urlopen): mock_urlopen.side_effect = httplib.HTTPException() fake_request = urllib2.Request('http://fakeurl.com') self.assertRaises( self.glance.RetryableError, self.glance._download_tarball_and_verify, fake_request, 'fake_staging_path') @mock.patch.object(urllib2, 'Request') def test_download_tarball_by_url_v1(self, mock_request): fake_glance_endpoint = 'fake_glance_endpoint' fake_image_id = 'fake_extra_headers' expected_url = "%(glance_endpoint)s/v1/images/%(image_id)s" % { 'glance_endpoint': fake_glance_endpoint, 'image_id': fake_image_id} mock_download_tarball_and_verify = self.mock_patch_object( self.glance, '_download_tarball_and_verify') mock_request.return_value = 'fake_request' self.glance._download_tarball_by_url_v1( 'fake_sr_path', 'fake_staging_path', fake_image_id, fake_glance_endpoint, 'fake_extra_headers') mock_request.assert_called_with(expected_url, 
headers='fake_extra_headers') mock_download_tarball_and_verify.assert_called_with( 'fake_request', 'fake_staging_path') @mock.patch.object(urllib2, 'Request') def test_download_tarball_by_url_v2(self, mock_request): fake_glance_endpoint = 'fake_glance_endpoint' fake_image_id = 'fake_extra_headers' expected_url = "%(glance_endpoint)s/v2/images/%(image_id)s/file" % { 'glance_endpoint': fake_glance_endpoint, 'image_id': fake_image_id} mock_download_tarball_and_verify = self.mock_patch_object( self.glance, '_download_tarball_and_verify') mock_request.return_value = 'fake_request' self.glance._download_tarball_by_url_v2( 'fake_sr_path', 'fake_staging_path', fake_image_id, fake_glance_endpoint, 'fake_extra_headers') mock_request.assert_called_with(expected_url, headers='fake_extra_headers') mock_download_tarball_and_verify.assert_called_with( 'fake_request', 'fake_staging_path') def test_upload_tarball_by_url_http_v1(self): fake_conn = mock.Mock() mock_HTTPConn = self.mock_patch_object( self.glance, '_create_connection', fake_conn) mock_validate_image = self.mock_patch_object( self.glance, 'validate_image_status_before_upload_v1') mock_create_tarball = self.mock_patch_object( self.glance.utils, 'create_tarball') mock_check_resp_status = self.mock_patch_object( self.glance, 'check_resp_status_and_retry') self.glance._create_connection().getresponse = mock.Mock() self.glance._create_connection().getresponse().status = httplib.OK fake_extra_headers = {} fake_properties = {} fake_endpoint = 'http://fake_netloc/fake_path' expected_url = "%(glance_endpoint)s/v1/images/%(image_id)s" % { 'glance_endpoint': fake_endpoint, 'image_id': 'fake_image_id'} self.glance._upload_tarball_by_url_v1( 'fake_staging_path', 'fake_image_id', fake_endpoint, fake_extra_headers, fake_properties) self.assertTrue(mock_HTTPConn.called) mock_validate_image.assert_called_with(fake_conn, expected_url, fake_extra_headers) self.assertTrue(mock_create_tarball.called) self.assertTrue( 
            mock_HTTPConn.return_value.getresponse.called)
        self.assertFalse(mock_check_resp_status.called)

    def test_upload_tarball_by_url_https_v1(self):
        fake_conn = mock.Mock()
        mock_HTTPSConn = self.mock_patch_object(
            self.glance, '_create_connection', fake_conn)
        mock_validate_image = self.mock_patch_object(
            self.glance, 'validate_image_status_before_upload_v1')
        mock_create_tarball = self.mock_patch_object(
            self.glance.utils, 'create_tarball')
        mock_check_resp_status = self.mock_patch_object(
            self.glance, 'check_resp_status_and_retry')
        self.glance._create_connection().getresponse = mock.Mock()
        self.glance._create_connection().getresponse().status = httplib.OK
        fake_extra_headers = {}
        fake_properties = {}
        fake_endpoint = 'https://fake_netloc/fake_path'
        expected_url = "%(glance_endpoint)s/v1/images/%(image_id)s" % {
            'glance_endpoint': fake_endpoint,
            'image_id': 'fake_image_id'}

        self.glance._upload_tarball_by_url_v1(
            'fake_staging_path', 'fake_image_id', fake_endpoint,
            fake_extra_headers, fake_properties)

        self.assertTrue(mock_HTTPSConn.called)
        mock_validate_image.assert_called_with(fake_conn, expected_url,
                                               fake_extra_headers)
        self.assertTrue(mock_create_tarball.called)
        self.assertTrue(
            mock_HTTPSConn.return_value.getresponse.called)
        self.assertFalse(mock_check_resp_status.called)

    def test_upload_tarball_by_url_https_failed_retry_v1(self):
        fake_conn = mock.Mock()
        mock_HTTPSConn = self.mock_patch_object(
            self.glance, '_create_connection', fake_conn)
        mock_validate_image = self.mock_patch_object(
            self.glance, 'validate_image_status_before_upload_v1')
        mock_create_tarball = self.mock_patch_object(
            self.glance.utils, 'create_tarball')
        mock_check_resp_status = self.mock_patch_object(
            self.glance, 'check_resp_status_and_retry')
        self.glance._create_connection().getresponse = mock.Mock()
        self.glance._create_connection().getresponse().status = \
            httplib.REQUEST_TIMEOUT
        fake_extra_headers = {}
        fake_properties = {}
        fake_endpoint = 'https://fake_netloc/fake_path'
        expected_url = "%(glance_endpoint)s/v1/images/%(image_id)s" % {
            'glance_endpoint': fake_endpoint,
            'image_id': 'fake_image_id'}

        self.glance._upload_tarball_by_url_v1(
            'fake_staging_path', 'fake_image_id', fake_endpoint,
            fake_extra_headers, fake_properties)

        self.assertTrue(mock_HTTPSConn.called)
        mock_validate_image.assert_called_with(fake_conn, expected_url,
                                               fake_extra_headers)
        self.assertTrue(mock_create_tarball.called)
        self.assertTrue(
            mock_HTTPSConn.return_value.getresponse.called)
        self.assertTrue(mock_check_resp_status.called)

    def test_upload_tarball_by_url_http_v2(self):
        fake_conn = mock.Mock()
        mock_HTTPConn = self.mock_patch_object(
            self.glance, '_create_connection', fake_conn)
        mock_validate_image = self.mock_patch_object(
            self.glance, 'validate_image_status_before_upload_v2')
        mock_create_tarball = self.mock_patch_object(
            self.glance.utils, 'create_tarball')
        mock_check_resp_status = self.mock_patch_object(
            self.glance, 'check_resp_status_and_retry')
        mock_update_image_meta = self.mock_patch_object(
            self.glance, '_update_image_meta_v2')
        self.glance._create_connection().getresponse = mock.Mock()
        self.glance._create_connection().getresponse().status = \
            httplib.NO_CONTENT
        fake_extra_headers = {}
        fake_properties = {}
        fake_endpoint = 'http://fake_netloc/fake_path'
        fake_image_id = 'fake_image_id'
        expected_url = "%(glance_endpoint)s/v2/images/%(image_id)s/file" % {
            'glance_endpoint': fake_endpoint,
            'image_id': fake_image_id}
        expected_wsgi_path = '/fake_path/v2/images/%s' % fake_image_id
        expect_url_parts = urlparse(expected_url)
        expected_mgt_url = "%(glance_endpoint)s/v2/images/%(image_id)s" % {
            'glance_endpoint': fake_endpoint,
            'image_id': fake_image_id}
        fake_mgt_parts = urlparse(expected_mgt_url)
        fake_mgt_path = fake_mgt_parts[2]

        self.glance._upload_tarball_by_url_v2(
            'fake_staging_path', fake_image_id, fake_endpoint,
            fake_extra_headers, fake_properties)

        mock_HTTPConn.assert_called_with(expect_url_parts[0],
                                         expect_url_parts[1])
        mock_validate_image.assert_called_with(fake_conn, expected_url,
                                               fake_extra_headers,
                                               expected_wsgi_path)
        mock_update_image_meta.assert_called_with(fake_conn,
                                                  fake_extra_headers,
                                                  fake_properties,
                                                  fake_mgt_path)
        self.assertTrue(mock_create_tarball.called)
        self.assertTrue(
            mock_HTTPConn.return_value.getresponse.called)
        self.assertFalse(mock_check_resp_status.called)

    def test_upload_tarball_by_url_https_v2(self):
        fake_conn = mock.Mock()
        mock_HTTPSConn = self.mock_patch_object(
            self.glance, '_create_connection', fake_conn)
        mock_validate_image = self.mock_patch_object(
            self.glance, 'validate_image_status_before_upload_v2')
        mock_create_tarball = self.mock_patch_object(
            self.glance.utils, 'create_tarball')
        mock_check_resp_status = self.mock_patch_object(
            self.glance, 'check_resp_status_and_retry')
        mock_update_image_meta = self.mock_patch_object(
            self.glance, '_update_image_meta_v2')
        self.glance._create_connection().getresponse = mock.Mock()
        self.glance._create_connection().getresponse().status = \
            httplib.NO_CONTENT
        fake_extra_headers = {}
        fake_properties = {}
        fake_endpoint = 'https://fake_netloc/fake_path'
        fake_image_id = 'fake_image_id'
        expected_url = "%(glance_endpoint)s/v2/images/%(image_id)s/file" % {
            'glance_endpoint': fake_endpoint,
            'image_id': fake_image_id}
        expect_url_parts = urlparse(expected_url)
        expected_wsgi_path = '/fake_path/v2/images/%s' % fake_image_id
        expected_mgt_url = "%(glance_endpoint)s/v2/images/%(image_id)s" % {
            'glance_endpoint': fake_endpoint,
            'image_id': fake_image_id}
        fake_mgt_parts = urlparse(expected_mgt_url)
        fake_mgt_path = fake_mgt_parts[2]

        self.glance._upload_tarball_by_url_v2(
            'fake_staging_path', fake_image_id, fake_endpoint,
            fake_extra_headers, fake_properties)

        mock_update_image_meta.assert_called_with(fake_conn,
                                                  fake_extra_headers,
                                                  fake_properties,
                                                  fake_mgt_path)
        mock_HTTPSConn.assert_called_with(expect_url_parts[0],
                                          expect_url_parts[1])
        mock_validate_image.assert_called_with(fake_conn, expected_url,
                                               fake_extra_headers,
                                               expected_wsgi_path)
        self.assertTrue(mock_create_tarball.called)
        self.assertTrue(
            mock_HTTPSConn.return_value.getresponse.called)
        self.assertFalse(mock_check_resp_status.called)

    def test_upload_tarball_by_url_v2_with_api_endpoint(self):
        fake_conn = mock.Mock()
        mock_Conn = self.mock_patch_object(
            self.glance, '_create_connection', fake_conn)
        mock_validate_image = self.mock_patch_object(
            self.glance, 'validate_image_status_before_upload_v2')
        mock_create_tarball = self.mock_patch_object(
            self.glance.utils, 'create_tarball')
        mock_check_resp_status = self.mock_patch_object(
            self.glance, 'check_resp_status_and_retry')
        mock_update_image_meta = self.mock_patch_object(
            self.glance, '_update_image_meta_v2')
        self.glance._create_connection().getresponse = mock.Mock()
        self.glance._create_connection().getresponse().status = \
            httplib.NO_CONTENT
        fake_extra_headers = {}
        fake_properties = {}
        fake_endpoint = 'https://fake_netloc:fake_port'
        fake_image_id = 'fake_image_id'
        expected_url = "%(glance_endpoint)s/v2/images/%(image_id)s/file" % {
            'glance_endpoint': fake_endpoint,
            'image_id': fake_image_id}
        expect_url_parts = urlparse(expected_url)
        expected_api_path = '/v2/images/%s' % fake_image_id
        expected_mgt_url = "%(glance_endpoint)s/v2/images/%(image_id)s" % {
            'glance_endpoint': fake_endpoint,
            'image_id': fake_image_id}
        fake_mgt_parts = urlparse(expected_mgt_url)
        fake_mgt_path = fake_mgt_parts[2]

        self.glance._upload_tarball_by_url_v2(
            'fake_staging_path', fake_image_id, fake_endpoint,
            fake_extra_headers, fake_properties)

        mock_update_image_meta.assert_called_with(fake_conn,
                                                  fake_extra_headers,
                                                  fake_properties,
                                                  fake_mgt_path)
        mock_Conn.assert_called_with(expect_url_parts[0],
                                     expect_url_parts[1])
        mock_validate_image.assert_called_with(fake_conn, expected_url,
                                               fake_extra_headers,
                                               expected_api_path)
        self.assertTrue(mock_create_tarball.called)
        self.assertTrue(
            mock_Conn.return_value.getresponse.called)
        self.assertFalse(mock_check_resp_status.called)

    def test_upload_tarball_by_url_v2_with_wsgi_endpoint(self):
        fake_conn = mock.Mock()
        mock_Conn = self.mock_patch_object(
            self.glance, '_create_connection', fake_conn)
        mock_validate_image = self.mock_patch_object(
            self.glance, 'validate_image_status_before_upload_v2')
        mock_create_tarball = self.mock_patch_object(
            self.glance.utils, 'create_tarball')
        mock_check_resp_status = self.mock_patch_object(
            self.glance, 'check_resp_status_and_retry')
        mock_update_image_meta = self.mock_patch_object(
            self.glance, '_update_image_meta_v2')
        self.glance._create_connection().getresponse = mock.Mock()
        self.glance._create_connection().getresponse().status = \
            httplib.NO_CONTENT
        fake_extra_headers = {}
        fake_properties = {}
        fake_endpoint = 'https://fake_netloc/fake_path'
        fake_image_id = 'fake_image_id'
        expected_url = "%(glance_endpoint)s/v2/images/%(image_id)s/file" % {
            'glance_endpoint': fake_endpoint,
            'image_id': fake_image_id}
        expect_url_parts = urlparse(expected_url)
        expected_wsgi_path = '/fake_path/v2/images/%s' % fake_image_id
        expected_mgt_url = "%(glance_endpoint)s/v2/images/%(image_id)s" % {
            'glance_endpoint': fake_endpoint,
            'image_id': fake_image_id}
        fake_mgt_parts = urlparse(expected_mgt_url)
        fake_mgt_path = fake_mgt_parts[2]

        self.glance._upload_tarball_by_url_v2(
            'fake_staging_path', fake_image_id, fake_endpoint,
            fake_extra_headers, fake_properties)

        mock_update_image_meta.assert_called_with(fake_conn,
                                                  fake_extra_headers,
                                                  fake_properties,
                                                  fake_mgt_path)
        mock_Conn.assert_called_with(expect_url_parts[0],
                                     expect_url_parts[1])
        mock_validate_image.assert_called_with(fake_conn, expected_url,
                                               fake_extra_headers,
                                               expected_wsgi_path)
        self.assertTrue(mock_create_tarball.called)
        self.assertTrue(
            mock_Conn.return_value.getresponse.called)
        self.assertFalse(mock_check_resp_status.called)

    def test_upload_tarball_by_url_https_failed_retry_v2(self):
        fake_conn = mock.Mock()
        mock_HTTPSConn = self.mock_patch_object(
            self.glance, '_create_connection', fake_conn)
        mock_validate_image = self.mock_patch_object(
            self.glance, 'validate_image_status_before_upload_v2')
        mock_create_tarball = self.mock_patch_object(
            self.glance.utils, 'create_tarball')
        mock_check_resp_status = self.mock_patch_object(
            self.glance, 'check_resp_status_and_retry')
        mock_update_image_meta = self.mock_patch_object(
            self.glance, '_update_image_meta_v2')
        self.glance._create_connection().getresponse = mock.Mock()
        self.glance._create_connection().getresponse().status = \
            httplib.REQUEST_TIMEOUT
        fake_extra_headers = {}
        fake_properties = {}
        fake_endpoint = 'https://fake_netloc/fake_path'
        fake_image_id = 'fake_image_id'
        expected_url = "%(glance_endpoint)s/v2/images/%(image_id)s/file" % {
            'glance_endpoint': fake_endpoint,
            'image_id': fake_image_id}
        expected_wsgi_path = '/fake_path/v2/images/%s' % fake_image_id
        expected_mgt_url = "%(glance_endpoint)s/v2/images/%(image_id)s" % {
            'glance_endpoint': fake_endpoint,
            'image_id': fake_image_id}
        expect_url_parts = urlparse(expected_url)
        fake_mgt_parts = urlparse(expected_mgt_url)
        fake_mgt_path = fake_mgt_parts[2]

        self.glance._upload_tarball_by_url_v2(
            'fake_staging_path', fake_image_id, fake_endpoint,
            fake_extra_headers, fake_properties)

        mock_update_image_meta.assert_called_with(fake_conn,
                                                  fake_extra_headers,
                                                  fake_properties,
                                                  fake_mgt_path)
        mock_HTTPSConn.assert_called_with(expect_url_parts[0],
                                          expect_url_parts[1])
        mock_validate_image.assert_called_with(fake_conn, expected_url,
                                               fake_extra_headers,
                                               expected_wsgi_path)
        self.assertTrue(mock_create_tarball.called)
        self.assertTrue(
            mock_HTTPSConn.return_value.getresponse.called)
        self.assertTrue(mock_check_resp_status.called)

    def test_update_image_meta_ok_v2_using_api_service(self):
        fake_conn = mock.Mock()
        fake_extra_headers = {'fake_type': 'fake_content'}
        fake_properties = {'fake_path': True}
        new_fake_properties = {'path': '/fake-path',
                               'value': "True",
                               'op': 'add'}
        fake_body = [
            {"path": "/container_format", "value": "ovf", "op": "add"},
            {"path": "/disk_format", "value": "vhd", "op": "add"},
            {"path": "/visibility", "value": "private", "op": "add"}]
        fake_body.append(new_fake_properties)
        fake_body_json = json.dumps(fake_body)
        fake_headers = {
            'Content-Type': 'application/openstack-images-v2.1-json-patch'}
        fake_headers.update(**fake_extra_headers)
        fake_conn.getresponse.return_value = mock.Mock()
        fake_conn.getresponse().status = httplib.OK
        expected_api_path = '/v2/images/%s' % 'fake_image_id'

        self.glance._update_image_meta_v2(fake_conn, fake_extra_headers,
                                          fake_properties, expected_api_path)

        fake_conn.request.assert_called_with(
            'PATCH', '/v2/images/%s' % 'fake_image_id',
            body=fake_body_json, headers=fake_headers)
        fake_conn.getresponse.assert_called()

    def test_update_image_meta_ok_v2_using_uwsgi_service(self):
        fake_conn = mock.Mock()
        fake_extra_headers = {'fake_type': 'fake_content'}
        fake_properties = {'fake_path': True}
        new_fake_properties = {'path': '/fake-path',
                               'value': "True",
                               'op': 'add'}
        fake_body = [
            {"path": "/container_format", "value": "ovf", "op": "add"},
            {"path": "/disk_format", "value": "vhd", "op": "add"},
            {"path": "/visibility", "value": "private", "op": "add"}]
        fake_body.append(new_fake_properties)
        fake_body_json = json.dumps(fake_body)
        fake_headers = {
            'Content-Type': 'application/openstack-images-v2.1-json-patch'}
        fake_headers.update(**fake_extra_headers)
        fake_conn.getresponse.return_value = mock.Mock()
        fake_conn.getresponse().status = httplib.OK
        expected_wsgi_path = '/fake_path/v2/images/%s' % 'fake_image_id'

        self.glance._update_image_meta_v2(fake_conn, fake_extra_headers,
                                          fake_properties,
                                          expected_wsgi_path)

        fake_conn.request.assert_called_with(
            'PATCH', '/fake_path/v2/images/%s' % 'fake_image_id',
            body=fake_body_json, headers=fake_headers)
        fake_conn.getresponse.assert_called()

    def test_check_resp_status_and_retry_plugin_error(self):
        mock_resp_badrequest = mock.Mock()
        mock_resp_badrequest.status = httplib.BAD_REQUEST

        self.assertRaises(
            self.glance.PluginError,
            self.glance.check_resp_status_and_retry,
            mock_resp_badrequest,
            'fake_image_id',
            'fake_url')

    def test_check_resp_status_and_retry_retry_error(self):
        mock_resp_badgateway = mock.Mock()
        mock_resp_badgateway.status = httplib.BAD_GATEWAY

        self.assertRaises(
            self.glance.RetryableError,
            self.glance.check_resp_status_and_retry,
            mock_resp_badgateway,
            'fake_image_id',
            'fake_url')

    def test_check_resp_status_and_retry_image_not_found(self):
        mock_resp_badgateway = mock.Mock()
        mock_resp_badgateway.status = httplib.NOT_FOUND
        self.glance.XenAPI.Failure = FakeXenAPIException

        self.assertRaises(
            self.glance.XenAPI.Failure,
            self.glance.check_resp_status_and_retry,
            mock_resp_badgateway,
            'fake_image_id',
            'fake_url')

    def test_check_resp_status_and_retry_unknown_status(self):
        fake_unknown_http_status = -1
        mock_resp_other = mock.Mock()
        mock_resp_other.status = fake_unknown_http_status

        self.assertRaises(
            self.glance.RetryableError,
            self.glance.check_resp_status_and_retry,
            mock_resp_other,
            'fake_image_id',
            'fake_url')

    def test_validate_image_status_before_upload_ok_v1(self):
        mock_conn = mock.Mock()
        fake_url = 'http://fake_host/fake_path/fake_image_id'
        mock_check_resp_status_and_retry = self.mock_patch_object(
            self.glance, 'check_resp_status_and_retry')
        mock_head_resp = mock.Mock()
        mock_head_resp.status = httplib.OK
        mock_head_resp.read.return_value = 'fakeData'
        mock_head_resp.getheader.return_value = 'queued'
        mock_conn.getresponse.return_value = mock_head_resp

        self.glance.validate_image_status_before_upload_v1(
            mock_conn, fake_url, extra_headers=mock.Mock())

        self.assertTrue(mock_conn.getresponse.called)
        self.assertEqual(mock_head_resp.read.call_count, 2)
        self.assertFalse(mock_check_resp_status_and_retry.called)

    def test_validate_image_status_before_upload_image_status_error_v1(self):
        mock_conn = mock.Mock()
        fake_url = 'http://fake_host/fake_path/fake_image_id'
        mock_head_resp = mock.Mock()
        mock_head_resp.status = httplib.OK
        mock_head_resp.read.return_value = 'fakeData'
        mock_head_resp.getheader.return_value = 'not-queued'
        mock_conn.getresponse.return_value = mock_head_resp

        self.assertRaises(self.glance.PluginError,
                          self.glance.validate_image_status_before_upload_v1,
                          mock_conn, fake_url, extra_headers=mock.Mock())
        mock_conn.request.assert_called_once()
        mock_conn.getresponse.assert_called_once()
        self.assertEqual(mock_head_resp.read.call_count, 2)

    def test_validate_image_status_before_upload_rep_body_too_long_v1(self):
        mock_conn = mock.Mock()
        fake_url = 'http://fake_host/fake_path/fake_image_id'
        mock_head_resp = mock.Mock()
        mock_head_resp.status = httplib.OK
        mock_head_resp.read.return_value = 'fakeData longer than 8'
        mock_head_resp.getheader.return_value = 'queued'
        mock_conn.getresponse.return_value = mock_head_resp

        self.assertRaises(self.glance.RetryableError,
                          self.glance.validate_image_status_before_upload_v1,
                          mock_conn, fake_url, extra_headers=mock.Mock())
        mock_conn.request.assert_called_once()
        mock_conn.getresponse.assert_called_once()
        mock_head_resp.read.assert_called_once()

    def test_validate_image_status_before_upload_req_head_exception_v1(self):
        mock_conn = mock.Mock()
        mock_conn.request.side_effect = Fake_HTTP_Request_Error()
        fake_url = 'http://fake_host/fake_path/fake_image_id'
        mock_head_resp = mock.Mock()
        mock_head_resp.status = httplib.OK
        mock_head_resp.read.return_value = 'fakeData'
        mock_head_resp.getheader.return_value = 'queued'
        mock_conn.getresponse.return_value = mock_head_resp

        self.assertRaises(self.glance.RetryableError,
                          self.glance.validate_image_status_before_upload_v1,
                          mock_conn, fake_url, extra_headers=mock.Mock())
        mock_conn.request.assert_called_once()
        mock_head_resp.read.assert_not_called()
        mock_conn.getresponse.assert_not_called()

    def test_validate_image_status_before_upload_unexpected_resp_v1(self):
        mock_conn = mock.Mock()
        fake_url = 'http://fake_host/fake_path/fake_image_id'
        parts = urlparse(fake_url)
        path = parts[2]
        fake_image_id = path.split('/')[-1]
        mock_head_resp = mock.Mock()
        mock_head_resp.status = httplib.BAD_REQUEST
        mock_head_resp.read.return_value = 'fakeData'
        mock_head_resp.getheader.return_value = 'queued'
        mock_conn.getresponse.return_value = mock_head_resp
        self.mock_patch_object(self.glance, 'check_resp_status_and_retry')

        self.glance.validate_image_status_before_upload_v1(
            mock_conn, fake_url, extra_headers=mock.Mock())

        self.assertEqual(mock_head_resp.read.call_count, 2)
        self.glance.check_resp_status_and_retry.assert_called_with(
            mock_head_resp, fake_image_id, fake_url)
        mock_conn.request.assert_called_once()

    def test_validate_image_status_before_upload_ok_v2_using_api_service(self):
        mock_conn = mock.Mock()
        fake_url = 'http://fake_host:fake_port/fake_path/fake_image_id'
        mock_check_resp_status_and_retry = self.mock_patch_object(
            self.glance, 'check_resp_status_and_retry')
        mock_head_resp = mock.Mock()
        mock_head_resp.status = httplib.OK
        mock_head_resp.read.return_value = '{"status": "queued"}'
        mock_conn.getresponse.return_value = mock_head_resp
        fake_extra_headers = mock.Mock()
        expected_api_path = '/v2/images/%s' % 'fake_image_id'

        self.glance.validate_image_status_before_upload_v2(
            mock_conn, fake_url, fake_extra_headers, expected_api_path)

        self.assertTrue(mock_conn.getresponse.called)
        self.assertEqual(
            mock_head_resp.read.call_count, 2)
        self.assertFalse(mock_check_resp_status_and_retry.called)
        mock_conn.request.assert_called_with('GET',
                                             '/v2/images/fake_image_id',
                                             headers=fake_extra_headers)

    def test_validate_image_status_before_upload_ok_v2_using_uwsgi(self):
        mock_conn = mock.Mock()
        fake_url = 'http://fake_host/fake_path/fake_image_id'
        mock_check_resp_status_and_retry = self.mock_patch_object(
            self.glance, 'check_resp_status_and_retry')
        mock_head_resp = mock.Mock()
        mock_head_resp.status = httplib.OK
        mock_head_resp.read.return_value = '{"status": "queued"}'
        mock_conn.getresponse.return_value = mock_head_resp
        fake_extra_headers = mock.Mock()
        fake_patch_path = 'fake_patch_path'

        self.glance.validate_image_status_before_upload_v2(
            mock_conn, fake_url, fake_extra_headers, fake_patch_path)

        self.assertTrue(mock_conn.getresponse.called)
        self.assertEqual(
            mock_head_resp.read.call_count, 2)
        self.assertFalse(mock_check_resp_status_and_retry.called)
        mock_conn.request.assert_called_with('GET',
                                             'fake_patch_path',
                                             headers=fake_extra_headers)

    def test_validate_image_status_before_upload_get_image_failed_v2(self):
        mock_conn = mock.Mock()
        mock_conn.request.side_effect = Fake_HTTP_Request_Error()
        fake_url = 'http://fake_host/fake_path/fake_image_id'
        mock_head_resp = mock.Mock()
        mock_head_resp.status = httplib.OK
        mock_conn.getresponse.return_value = mock_head_resp
        expected_wsgi_path = '/fake_path/v2/images/%s' % 'fake_image_id'

        self.assertRaises(self.glance.RetryableError,
                          self.glance.validate_image_status_before_upload_v2,
                          mock_conn, fake_url, mock.Mock(),
                          expected_wsgi_path)
        mock_conn.request.assert_called_once()
        mock_head_resp.read.assert_not_called()
        mock_conn.getresponse.assert_not_called()

    def test_validate_image_status_before_upload_unexpected_resp_v2(self):
        mock_conn = mock.Mock()
        fake_url = 'http://fake_host/fake_path/fake_image_id'
        self.mock_patch_object(self.glance, 'check_resp_status_and_retry')
        mock_head_resp = mock.Mock()
        mock_head_resp.status = httplib.BAD_REQUEST
        mock_conn.getresponse.return_value = mock_head_resp
        expected_wsgi_path = '/fake_path/v2/images/%s' % 'fake_image_id'

        self.glance.validate_image_status_before_upload_v2(
            mock_conn, fake_url, mock.Mock(), expected_wsgi_path)

        mock_conn.request.assert_called_once()
        mock_conn.getresponse.assert_called_once()
        mock_head_resp.read.assert_called_once()
        self.glance.check_resp_status_and_retry.assert_called_once()

    def test_validate_image_status_before_upload_failed_v2(self):
        mock_conn = mock.Mock()
        fake_url = 'http://fake_host/fake_path/fake_image_id'
        mock_head_resp = mock.Mock()
        mock_head_resp.status = httplib.OK
        mock_head_resp.read.return_value = '{"status": "not-queued"}'
        mock_conn.getresponse.return_value = mock_head_resp
        expected_wsgi_path = '/fake_path/v2/images/%s' % 'fake_image_id'

        self.assertRaises(self.glance.PluginError,
                          self.glance.validate_image_status_before_upload_v2,
                          mock_conn, fake_url, mock.Mock(),
                          expected_wsgi_path)
        mock_conn.request.assert_called_once()
        mock_head_resp.read.assert_called_once()

    def test_download_vhd2_v1(self):
        fake_api_version = 1
        mock_make_staging_area = self.mock_patch_object(
            self.glance.utils, 'make_staging_area', 'fake_staging_path')
        mock_download_tarball_by_url = self.mock_patch_object(
            self.glance, '_download_tarball_by_url_v1')
        mock_import_vhds = self.mock_patch_object(
            self.glance.utils, 'import_vhds')
        mock_cleanup_staging_area = self.mock_patch_object(
            self.glance.utils, 'cleanup_staging_area')

        self.glance.download_vhd2(
            'fake_session', 'fake_image_id', 'fake_endpoint',
            'fake_uuid_stack', 'fake_sr_path', 'fake_extra_headers',
            fake_api_version)

        mock_make_staging_area.assert_called_with('fake_sr_path')
        mock_download_tarball_by_url.assert_called_with('fake_sr_path',
                                                        'fake_staging_path',
                                                        'fake_image_id',
                                                        'fake_endpoint',
                                                        'fake_extra_headers')
        mock_import_vhds.assert_called_with('fake_sr_path',
                                            'fake_staging_path',
                                            'fake_uuid_stack')
        mock_cleanup_staging_area.assert_called_with('fake_staging_path')

    def test_download_vhd2_v2(self):
        fake_api_version = 2
        mock_make_staging_area = self.mock_patch_object(
            self.glance.utils, 'make_staging_area', 'fake_staging_path')
        mock_download_tarball_by_url = self.mock_patch_object(
            self.glance, '_download_tarball_by_url_v2')
        mock_import_vhds = self.mock_patch_object(
            self.glance.utils, 'import_vhds')
        mock_cleanup_staging_area = self.mock_patch_object(
            self.glance.utils, 'cleanup_staging_area')

        self.glance.download_vhd2(
            'fake_session', 'fake_image_id', 'fake_endpoint',
            'fake_uuid_stack', 'fake_sr_path', 'fake_extra_headers',
            fake_api_version)

        mock_make_staging_area.assert_called_with('fake_sr_path')
        mock_download_tarball_by_url.assert_called_with('fake_sr_path',
                                                        'fake_staging_path',
                                                        'fake_image_id',
                                                        'fake_endpoint',
                                                        'fake_extra_headers')
        mock_import_vhds.assert_called_with('fake_sr_path',
                                            'fake_staging_path',
                                            'fake_uuid_stack')
        mock_cleanup_staging_area.assert_called_with('fake_staging_path')

    def test_upload_vhd2_v1(self):
        fake_api_version = 1
        mock_make_staging_area = self.mock_patch_object(
            self.glance.utils, 'make_staging_area', 'fake_staging_path')
        mock_prepare_staging_area = self.mock_patch_object(
            self.glance.utils, 'prepare_staging_area')
        mock_upload_tarball_by_url = self.mock_patch_object(
            self.glance, '_upload_tarball_by_url_v1')
        mock_cleanup_staging_area = self.mock_patch_object(
            self.glance.utils, 'cleanup_staging_area')

        self.glance.upload_vhd2(
            'fake_session', 'fake_vid_uuids', 'fake_image_id',
            'fake_endpoint', 'fake_sr_path', 'fake_extra_headers',
            'fake_properties', fake_api_version)

        mock_make_staging_area.assert_called_with('fake_sr_path')
        mock_upload_tarball_by_url.assert_called_with('fake_staging_path',
                                                      'fake_image_id',
                                                      'fake_endpoint',
                                                      'fake_extra_headers',
                                                      'fake_properties')
        mock_prepare_staging_area.assert_called_with('fake_sr_path',
                                                     'fake_staging_path',
                                                     'fake_vid_uuids')
        mock_cleanup_staging_area.assert_called_with('fake_staging_path')

    def test_upload_vhd2_v2(self):
        fake_api_version = 2
        mock_make_staging_area = self.mock_patch_object(
            self.glance.utils, 'make_staging_area', 'fake_staging_path')
        mock_prepare_staging_area = self.mock_patch_object(
            self.glance.utils, 'prepare_staging_area')
        mock_upload_tarball_by_url = self.mock_patch_object(
            self.glance, '_upload_tarball_by_url_v2')
        mock_cleanup_staging_area = self.mock_patch_object(
            self.glance.utils, 'cleanup_staging_area')

        self.glance.upload_vhd2(
            'fake_session', 'fake_vid_uuids', 'fake_image_id',
            'fake_endpoint', 'fake_sr_path', 'fake_extra_headers',
            'fake_properties', fake_api_version)

        mock_make_staging_area.assert_called_with('fake_sr_path')
        mock_upload_tarball_by_url.assert_called_with('fake_staging_path',
                                                      'fake_image_id',
                                                      'fake_endpoint',
                                                      'fake_extra_headers',
                                                      'fake_properties')
        mock_prepare_staging_area.assert_called_with('fake_sr_path',
                                                     'fake_staging_path',
                                                     'fake_vid_uuids')
        mock_cleanup_staging_area.assert_called_with('fake_staging_path')

os-xenapi-0.3.1/os_xenapi/tests/plugins/test_dom0_pluginlib.py

# Copyright (c) 2016 Citrix Systems
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
import os

from os_xenapi.tests.plugins import plugin_test
import time


class FakeUnplugException(Exception):
    def __init__(self, details):
        self.details = details


class PluginlibDom0(plugin_test.PluginTestBase):
    def setUp(self):
        super(PluginlibDom0, self).setUp()
        self.dom0_pluginlib = self.load_plugin("dom0_pluginlib.py")

    @mock.patch.object(os.path, 'exists')
    def test_configure_logging_log_dir_not_exist(self, mock_path_exist):
        name = 'fake_name'
        mock_Logger_setLevel = self.mock_patch_object(
            self.dom0_pluginlib.logging.Logger, 'setLevel')
        mock_sysh_setLevel = self.mock_patch_object(
            self.dom0_pluginlib.logging.handlers.SysLogHandler, 'setLevel')
        mock_Formatter = self.mock_patch_object(
            self.dom0_pluginlib.logging, 'Formatter')
        mock_sysh_setFormatter = self.mock_patch_object(
            self.dom0_pluginlib.logging.handlers.SysLogHandler,
            'setFormatter')
        mock_Logger_addHandler = self.mock_patch_object(
            self.dom0_pluginlib.logging.Logger, 'addHandler')
        mock_socket = self.mock_patch_object(
            self.dom0_pluginlib.logging.handlers.SysLogHandler,
            '_connect_unixsocket')
        mock_path_exist.return_value = False

        self.dom0_pluginlib.configure_logging(name)
        self.assertTrue(mock_Logger_setLevel.called)
        self.assertFalse(mock_sysh_setLevel.called)
        self.assertFalse(mock_Formatter.called)
        self.assertFalse(mock_sysh_setFormatter.called)
        self.assertFalse(mock_Logger_addHandler.called)
        self.assertFalse(mock_socket.called)

    @mock.patch.object(os.path, 'exists')
    def test_configure_logging_log(self, mock_path_exist):
        name = 'fake_name'
        mock_Logger_setLevel = self.mock_patch_object(
            self.dom0_pluginlib.logging.Logger, 'setLevel')
        mock_sysh_setLevel = self.mock_patch_object(
            self.dom0_pluginlib.logging.handlers.SysLogHandler, 'setLevel')
        mock_Formatter = self.mock_patch_object(
            self.dom0_pluginlib.logging, 'Formatter')
        mock_sysh_setFormatter = self.mock_patch_object(
            self.dom0_pluginlib.logging.handlers.SysLogHandler,
            'setFormatter')
        mock_Logger_addHandler = self.mock_patch_object(
            self.dom0_pluginlib.logging.Logger, 'addHandler')
        mock_socket = self.mock_patch_object(
            self.dom0_pluginlib.logging.handlers.SysLogHandler,
            '_connect_unixsocket')
        mock_path_exist.return_value = True

        self.dom0_pluginlib.configure_logging(name)

        self.assertTrue(mock_Logger_setLevel.called)
        self.assertTrue(mock_sysh_setLevel.called)
        self.assertTrue(mock_Formatter.called)
        self.assertTrue(mock_sysh_setFormatter.called)
        self.assertTrue(mock_Logger_addHandler.called)
        self.assertTrue(mock_socket.called)

    def test_exists_ok(self):
        fake_args = {'k1': 'v1'}
        self.assertEqual('v1', self.dom0_pluginlib.exists(fake_args, 'k1'))

    def test_exists_exception(self):
        fake_args = {'k1': 'v1'}
        self.assertRaises(self.dom0_pluginlib.ArgumentError,
                          self.dom0_pluginlib.exists,
                          fake_args, 'no_key')

    def test_optional_exist(self):
        fake_args = {'k1': 'v1'}
        self.assertEqual('v1', self.dom0_pluginlib.optional(fake_args, 'k1'))

    def test_optional_none(self):
        fake_args = {'k1': 'v1'}
        self.assertIsNone(self.dom0_pluginlib.optional(fake_args, 'no_key'))

    def test_get_domain_0(self):
        mock_get_this_host = self.mock_patch_object(
            self.session.xenapi.session, 'get_this_host',
            return_val='fake_host_ref')
        mock_get_vm_records = self.mock_patch_object(
            self.session.xenapi.VM, 'get_all_records_where',
            return_val={"fake_vm_ref": "fake_value"})

        ret_value = self.dom0_pluginlib._get_domain_0(self.session)

        self.assertTrue(mock_get_this_host.called)
        self.assertTrue(mock_get_vm_records.called)
        self.assertEqual('fake_vm_ref', ret_value)

    def test_with_vdi_in_dom0(self):
        self.mock_patch_object(
            self.dom0_pluginlib, '_get_domain_0',
            return_val='fake_dom0_ref')
        mock_vbd_create = self.mock_patch_object(
            self.session.xenapi.VBD, 'create', return_val='fake_vbd_ref')
        mock_vbd_plug = self.mock_patch_object(
            self.session.xenapi.VBD, 'plug')
        self.mock_patch_object(
            self.session.xenapi.VBD, 'get_device',
            return_val='fake_device_xvda')
        mock_vbd_unplug_with_retry = self.mock_patch_object(
            self.dom0_pluginlib, '_vbd_unplug_with_retry')
        mock_vbd_destroy = self.mock_patch_object(
            self.session.xenapi.VBD, 'destroy')

        def handle_function(vbd):
            # the fake vbd handle function
            self.assertEqual(vbd, 'fake_device_xvda')
            self.assertTrue(mock_vbd_plug.called)
            self.assertFalse(mock_vbd_unplug_with_retry.called)
            return 'function_called'

        fake_vdi = 'fake_vdi'
        return_value = self.dom0_pluginlib.with_vdi_in_dom0(
            self.session, fake_vdi, False, handle_function)

        self.assertEqual('function_called', return_value)
        self.assertTrue(mock_vbd_plug.called)
        self.assertTrue(mock_vbd_unplug_with_retry.called)
        self.assertTrue(mock_vbd_destroy.called)
        args, kwargs = mock_vbd_create.call_args
        self.assertEqual('fake_dom0_ref', args[0]['VM'])
        self.assertEqual('RW', args[0]['mode'])

    def test_vbd_unplug_with_retry_success_at_first_time(self):
        self.dom0_pluginlib._vbd_unplug_with_retry(self.session,
                                                   'fake_vbd_ref')
        self.assertEqual(1, self.session.xenapi.VBD.unplug.call_count)

    def test_vbd_unplug_with_retry_detached_already(self):
        error = FakeUnplugException(['DEVICE_ALREADY_DETACHED'])
        self.session.xenapi.VBD.unplug.side_effect = error
        self.dom0_pluginlib.XenAPI.Failure = FakeUnplugException

        self.dom0_pluginlib._vbd_unplug_with_retry(self.session,
                                                   'fake_vbd_ref')
        self.assertEqual(1, self.session.xenapi.VBD.unplug.call_count)

    def test_vbd_unplug_with_retry_success_at_second_time(self):
        side_effects = [FakeUnplugException(['DEVICE_DETACH_REJECTED']),
                        None]
        self.session.xenapi.VBD.unplug.side_effect = side_effects
        self.dom0_pluginlib.XenAPI.Failure = FakeUnplugException

        self.dom0_pluginlib._vbd_unplug_with_retry(self.session,
                                                   'fake_vbd_ref')
        self.assertEqual(2, self.session.xenapi.VBD.unplug.call_count)

    @mock.patch.object(time, 'sleep')
    def test_vbd_unplug_with_retry_exceed_max_attempts(self, mock_sleep):
        side_effects = ([FakeUnplugException(['DEVICE_DETACH_REJECTED'])]
                        * (self.dom0_pluginlib.MAX_VBD_UNPLUG_RETRIES + 1))
        self.session.xenapi.VBD.unplug.side_effect = side_effects
        self.dom0_pluginlib.XenAPI.Failure = FakeUnplugException

        self.assertRaises(self.dom0_pluginlib.PluginError,
                          self.dom0_pluginlib._vbd_unplug_with_retry,
                          self.session,
                          'fake_vbd_ref')
        self.assertEqual(self.dom0_pluginlib.MAX_VBD_UNPLUG_RETRIES,
                         self.session.xenapi.VBD.unplug.call_count)

os-xenapi-0.3.1/os_xenapi/tests/plugins/test_bandwidth.py

# Copyright (c) 2016 Citrix Systems
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from os_xenapi.tests.plugins import plugin_test


class BandwidthTestCase(plugin_test.PluginTestBase):
    def setUp(self):
        super(BandwidthTestCase, self).setUp()
        self.pluginlib = self.load_plugin("dom0_pluginlib.py")

        # Prevent any logging to syslog
        self.mock_patch_object(self.pluginlib, 'configure_logging')
        self.bandwidth = self.load_plugin("bandwidth.py")

    def test_get_bandwitdth_from_proc(self):
        fake_data = [
            'Inter-| Receive | Transmit',
            'if|bw_in i1 i2 i3 i4 i5 i6 i7|bw_out o1 o2 o3 o4 o5 o6 o7',
            'xenbr1: 1 0 0 0 0 0 0 0 11 0 0 0 0 0 0 0',
            'vif2.0: 2 0 0 0 0 0 0 0 12 0 0 0 0 0 0 0',
            'vif2.1: 3 0 0 0 0 0 0 0 13 0 0 0 0 0 0 0',
            'vifabcd1234-c: 4 0 0 0 0 0 0 0 14 0 0 0 0 0 0 0\n']
        expect_devmap = {'2': {'1': {'bw_in': 13, 'bw_out': 3},
                               '0': {'bw_in': 12, 'bw_out': 2}}}
        mock_read_proc_net = self.mock_patch_object(
            self.bandwidth, '_read_proc_net', return_val=fake_data)

        devmap = self.bandwidth._get_bandwitdth_from_proc()

        self.assertTrue(mock_read_proc_net.called)
        self.assertEqual(devmap, expect_devmap)

os-xenapi-0.3.1/os_xenapi/tests/base.py

# Copyright 2016 Citrix Systems
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslotest import base class TestCase(base.BaseTestCase): """Test case base class for all unit tests.""" os-xenapi-0.3.1/os_xenapi/tests/test_os_xenapi.py0000664000175000017500000000141013160424533023266 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_os_xenapi ---------------------------------- Tests for `os_xenapi` module. """ from os_xenapi.tests import base class TestOs_xenapi(base.TestCase): def test_something(self): pass os-xenapi-0.3.1/tox.ini0000664000175000017500000000265613160424533016071 0ustar jenkinsjenkins00000000000000[tox] minversion = 2.0 envlist = py35,py27,pypy,pep8 skipsdist = True [testenv] usedevelop = True install_command = {toxinidir}/tools/tox_install.sh {env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages} setenv = VIRTUAL_ENV={envdir} BRANCH_NAME=master CLIENT_NAME=os-xenapi PYTHONWARNINGS=default::DeprecationWarning deps = -r{toxinidir}/test-requirements.txt whitelist_externals = find rm commands = find . -type f -name "*.pyc" -delete py27: python setup.py test --slowest --testr-args='{posargs}' py35: ostestr --color --slowest --blacklist_file exclusion_py3.txt [testenv:pep8] commands = flake8 {posargs} [testenv:venv] commands = {posargs} [testenv:cover] # Use python2.7 explicitly so that it's compatible with python2.4 used for plugins. basepython=/usr/bin/python2.7 commands = coverage erase find . 
-type f -name "*.pyc" -delete coverage run -m unittest discover coverage report coverage html [testenv:docs] commands = python setup.py build_sphinx [testenv:releasenotes] commands = sphinx-build -a -E -W -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html [testenv:debug] commands = oslo_debug_helper {posargs} [flake8] # E123, E125 skipped as they are invalid PEP-8. show-source = True ignore = E123,E125 builtins = _ exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,build os-xenapi-0.3.1/tools/0000775000175000017500000000000013160424745015712 5ustar jenkinsjenkins00000000000000os-xenapi-0.3.1/tools/install/0000775000175000017500000000000013160424745017360 5ustar jenkinsjenkins00000000000000os-xenapi-0.3.1/tools/install/conf/0000775000175000017500000000000013160424745020305 5ustar jenkinsjenkins00000000000000os-xenapi-0.3.1/tools/install/conf/xenrc0000664000175000017500000000617213160424533021350 0ustar jenkinsjenkins00000000000000#!/bin/bash # # XenServer specific defaults for the /tools/xen/ scripts # Similar to stackrc, you can override these in your localrc # # Name of this guest GUEST_NAME=${GUEST_NAME:-UbuntuVM} # Template cleanup CLEAN_TEMPLATES=${CLEAN_TEMPLATES:-false} TNAME="jeos_template_for_ubuntu" SNAME_TEMPLATE="jeos_snapshot_for_ubuntu" # Size of image VDI_MB=${VDI_MB:-5000} # Devstack now contains many components. 4GB ram is not enough to prevent # swapping and memory fragmentation - the latter of which can cause failures # such as blkfront failing to plug a VBD and lead to random test fails. # # Set to 6GB so an 8GB XenServer VM can have a 1GB Dom0 and leave 1GB for VMs VM_MEM_MB=${VM_MEM_MB:-6144} VM_VDI_GB=${VM_VDI_GB:-30} # VM Password GUEST_PASSWORD=${GUEST_PASSWORD:-admin} # Extracted variables for OpenStack VM network device numbers. # Make sure they form a continuous sequence starting from 0 MGT_DEV_NR=0 # Host Interface, i.e. the interface on the nova vm you want to expose the # services on. 
Usually the device connected to the management network or the # one connected to the public network is used. HOST_IP_IFACE=${HOST_IP_IFACE:-"eth${MGT_DEV_NR}"} # # Our nova host's network info # # Management network MGT_IP=${MGT_IP:-dhcp} MGT_NETMASK=${MGT_NETMASK:-ignored} # Ubuntu install settings UBUNTU_INST_RELEASE="xenial" UBUNTU_INST_TEMPLATE_NAME="Ubuntu 16.04 (64-bit) for DevStack" # For 12.04 use "precise" and update template name # However, for 12.04, you should be using # XenServer 6.1 and later or XCP 1.6 or later # 11.10 is only really supported with XenServer 6.0.2 and later UBUNTU_INST_ARCH="amd64" UBUNTU_INST_HTTP_HOSTNAME="archive.ubuntu.com" UBUNTU_INST_HTTP_DIRECTORY="/ubuntu" UBUNTU_INST_HTTP_PROXY="" UBUNTU_INST_LOCALE="en_US" UBUNTU_INST_KEYBOARD="us" # network configuration for ubuntu netinstall UBUNTU_INST_IP="dhcp" UBUNTU_INST_NAMESERVERS="" UBUNTU_INST_NETMASK="" UBUNTU_INST_GATEWAY="" # Create a separate xvdb. This could be used as a backing device for cinder # volumes. Specify # XEN_XVDB_SIZE_GB=10 # VOLUME_BACKING_DEVICE=/dev/xvdb # in your localrc to avoid kernel lockups: # https://bugs.launchpad.net/cinder/+bug/1023755 # # Set the size to 0 to avoid creation of additional disk. XEN_XVDB_SIZE_GB=0 ## #configuration for openstack ## STACK_LABEL=DEVSTACK ## # configuration for devstack DomU ## # Name of DomU DEV_STACK_DOMU_NAME=${DEV_STACK_DOMU_NAME:-DevStackOSDomU} STACK_USER=stack DOMZERO_USER=domzero # Network mapping. Specify bridge names or network names. Network names may # differ across localised versions of XenServer. If a given bridge/network # was not found, a new network will be created with the specified name. # Get the management network from the XS installation VM_BRIDGE_OR_NET_NAME="OpenStack VM Network" PUB_BRIDGE_OR_NET_NAME="OpenStack Public Network" # Extracted variables for OpenStack VM network device numbers. 
# Make sure they form a continuous sequence starting from 0 MGT_DEV_NR=0 VM_DEV_NR=1 PUB_DEV_NR=2 # VM Network VM_IP=${VM_IP:-10.255.255.255} VM_NETMASK=${VM_NETMASK:-255.255.255.0} # Public network PUB_IP=${PUB_IP:-172.24.4.1} PUB_NETMASK=${PUB_NETMASK:-255.255.255.0} os-xenapi-0.3.1/tools/install/conf/ubuntupreseed.cfg0000664000175000017500000005073413160424533023664 0ustar jenkinsjenkins00000000000000### Contents of the preconfiguration file (for squeeze) ### Localization # Preseeding only locale sets language, country and locale. d-i debian-installer/locale string en_US # The values can also be preseeded individually for greater flexibility. #d-i debian-installer/language string en #d-i debian-installer/country string NL #d-i debian-installer/locale string en_GB.UTF-8 # Optionally specify additional locales to be generated. #d-i localechooser/supported-locales en_US.UTF-8, nl_NL.UTF-8 # Keyboard selection. # Disable automatic (interactive) keymap detection. d-i console-setup/ask_detect boolean false #d-i keyboard-configuration/modelcode string pc105 d-i keyboard-configuration/layoutcode string us # To select a variant of the selected layout (if you leave this out, the # basic form of the layout will be used): #d-i keyboard-configuration/variantcode string dvorak ### Network configuration # Disable network configuration entirely. This is useful for cdrom # installations on non-networked devices where the network questions, # warning and long timeouts are a nuisance. #d-i netcfg/enable boolean false # netcfg will choose an interface that has link if possible. This makes it # skip displaying a list if there is more than one interface. d-i netcfg/choose_interface select auto # To pick a particular interface instead: #d-i netcfg/choose_interface select eth1 # If you have a slow dhcp server and the installer times out waiting for # it, this might be useful. 
d-i netcfg/dhcp_timeout string 120 # If you prefer to configure the network manually, uncomment this line and # the static network configuration below. #d-i netcfg/disable_autoconfig boolean true # If you want the preconfiguration file to work on systems both with and # without a dhcp server, uncomment these lines and the static network # configuration below. #d-i netcfg/dhcp_failed note #d-i netcfg/dhcp_options select Configure network manually # Static network configuration. #d-i netcfg/get_nameservers string 192.168.1.1 #d-i netcfg/get_ipaddress string 192.168.1.42 #d-i netcfg/get_netmask string 255.255.255.0 #d-i netcfg/get_gateway string 192.168.1.1 #d-i netcfg/confirm_static boolean true # Any hostname and domain names assigned from dhcp take precedence over # values set here. However, setting the values still prevents the questions # from being shown, even if values come from dhcp. d-i netcfg/get_hostname string stack d-i netcfg/get_domain string stackpass # Disable that annoying WEP key dialog. d-i netcfg/wireless_wep string # The wacky dhcp hostname that some ISPs use as a password of sorts. #d-i netcfg/dhcp_hostname string radish # If non-free firmware is needed for the network or other hardware, you can # configure the installer to always try to load it, without prompting. Or # change to false to disable asking. #d-i hw-detect/load_firmware boolean true ### Network console # Use the following settings if you wish to make use of the network-console # component for remote installation over SSH. This only makes sense if you # intend to perform the remainder of the installation manually. #d-i anna/choose_modules string network-console #d-i network-console/password password r00tme #d-i network-console/password-again password r00tme ### Mirror settings # If you select ftp, the mirror/country string does not need to be set. 
#d-i mirror/protocol string ftp d-i mirror/country string manual d-i mirror/http/hostname string archive.ubuntu.com d-i mirror/http/directory string /ubuntu d-i mirror/http/proxy string # Alternatively: by default, the installer uses CC.archive.ubuntu.com where # CC is the ISO-3166-2 code for the selected country. You can preseed this # so that it does so without asking. #d-i mirror/http/mirror select CC.archive.ubuntu.com # Suite to install. #d-i mirror/suite string squeeze # Suite to use for loading installer components (optional). #d-i mirror/udeb/suite string squeeze # Components to use for loading installer components (optional). #d-i mirror/udeb/components multiselect main, restricted ### Clock and time zone setup # Controls whether or not the hardware clock is set to UTC. d-i clock-setup/utc boolean true # You may set this to any valid setting for $TZ; see the contents of # /usr/share/zoneinfo/ for valid values. d-i time/zone string US/Pacific # Controls whether to use NTP to set the clock during the install d-i clock-setup/ntp boolean true # NTP server to use. The default is almost always fine here. d-i clock-setup/ntp-server string 0.us.pool.ntp.org ### Partitioning ## Partitioning example # If the system has free space you can choose to only partition that space. # This is only honoured if partman-auto/method (below) is not set. # Alternatives: custom, some_device, some_device_crypto, some_device_lvm. #d-i partman-auto/init_automatically_partition select biggest_free # Alternatively, you may specify a disk to partition. If the system has only # one disk the installer will default to using that, but otherwise the device # name must be given in traditional, non-devfs format (so e.g. /dev/hda or # /dev/sda, and not e.g. /dev/discs/disc0/disc). # For example, to use the first SCSI/SATA hard disk: #d-i partman-auto/disk string /dev/sda # In addition, you'll need to specify the method to use. 
# The presently available methods are: # - regular: use the usual partition types for your architecture # - lvm: use LVM to partition the disk # - crypto: use LVM within an encrypted partition d-i partman-auto/method string regular # If one of the disks that are going to be automatically partitioned # contains an old LVM configuration, the user will normally receive a # warning. This can be preseeded away... d-i partman-lvm/device_remove_lvm boolean true # The same applies to pre-existing software RAID array: d-i partman-md/device_remove_md boolean true # And the same goes for the confirmation to write the lvm partitions. d-i partman-lvm/confirm boolean true # For LVM partitioning, you can select how much of the volume group to use # for logical volumes. #d-i partman-auto-lvm/guided_size string max #d-i partman-auto-lvm/guided_size string 10GB #d-i partman-auto-lvm/guided_size string 50% # You can choose one of the three predefined partitioning recipes: # - atomic: all files in one partition # - home: separate /home partition # - multi: separate /home, /usr, /var, and /tmp partitions d-i partman-auto/choose_recipe select atomic # Or provide a recipe of your own... # If you have a way to get a recipe file into the d-i environment, you can # just point at it. #d-i partman-auto/expert_recipe_file string /hd-media/recipe # If not, you can put an entire recipe into the preconfiguration file in one # (logical) line. This example creates a small /boot partition, suitable # swap, and uses the rest of the space for the root partition: #d-i partman-auto/expert_recipe string \ # boot-root :: \ # 40 50 100 ext3 \ # $primary{ } $bootable{ } \ # method{ format } format{ } \ # use_filesystem{ } filesystem{ ext3 } \ # mountpoint{ /boot } \ # . \ # 500 10000 1000000000 ext3 \ # method{ format } format{ } \ # use_filesystem{ } filesystem{ ext3 } \ # mountpoint{ / } \ # . \ # 64 512 300% linux-swap \ # method{ swap } format{ } \ # . 
# If you just want to change the default filesystem from ext3 to something # else, you can do that without providing a full recipe. d-i partman/default_filesystem string ext3 # The full recipe format is documented in the file partman-auto-recipe.txt # included in the 'debian-installer' package or available from D-I source # repository. This also documents how to specify settings such as file # system labels, volume group names and which physical devices to include # in a volume group. # This makes partman automatically partition without confirmation, provided # that you told it what to do using one of the methods above. d-i partman-partitioning/confirm_write_new_label boolean true d-i partman/choose_partition select finish d-i partman/confirm boolean true d-i partman/confirm_nooverwrite boolean true ## Partitioning using RAID # The method should be set to "raid". #d-i partman-auto/method string raid # Specify the disks to be partitioned. They will all get the same layout, # so this will only work if the disks are the same size. #d-i partman-auto/disk string /dev/sda /dev/sdb # Next you need to specify the physical partitions that will be used. #d-i partman-auto/expert_recipe string \ # multiraid :: \ # 1000 5000 4000 raid \ # $primary{ } method{ raid } \ # . \ # 64 512 300% raid \ # method{ raid } \ # . \ # 500 10000 1000000000 raid \ # method{ raid } \ # . # Last you need to specify how the previously defined partitions will be # used in the RAID setup. Remember to use the correct partition numbers # for logical partitions. RAID levels 0, 1, 5, 6 and 10 are supported; # devices are separated using "#". # Parameters are: # \ # #d-i partman-auto-raid/recipe string \ # 1 2 0 ext3 / \ # /dev/sda1#/dev/sdb1 \ # . \ # 1 2 0 swap - \ # /dev/sda5#/dev/sdb5 \ # . \ # 0 2 0 ext3 /home \ # /dev/sda6#/dev/sdb6 \ # . # For additional information see the file partman-auto-raid-recipe.txt # included in the 'debian-installer' package or available from D-I source # repository. 
# This makes partman automatically partition without confirmation. d-i partman-md/confirm boolean true d-i partman-partitioning/confirm_write_new_label boolean true d-i partman/choose_partition select finish d-i partman/confirm boolean true d-i partman/confirm_nooverwrite boolean true ## Controlling how partitions are mounted # The default is to mount by UUID, but you can also choose "traditional" to # use traditional device names, or "label" to try filesystem labels before # falling back to UUIDs. #d-i partman/mount_style select uuid ### Base system installation # Configure APT to not install recommended packages by default. Use of this # option can result in an incomplete system and should only be used by very # experienced users. #d-i base-installer/install-recommends boolean false # The kernel image (meta) package to be installed; "none" can be used if no # kernel is to be installed. d-i base-installer/kernel/image string linux-virtual ### Account setup # Skip creation of a root account (normal user account will be able to # use sudo). The default is false; preseed this to true if you want to set # a root password. d-i passwd/root-login boolean true # Alternatively, to skip creation of a normal user account. d-i passwd/make-user boolean false # Root password, either in clear text d-i passwd/root-password password stackpass d-i passwd/root-password-again password stackpass # or encrypted using an MD5 hash. #d-i passwd/root-password-crypted password [MD5 hash] # To create a normal user account. #d-i passwd/user-fullname string Ubuntu User #d-i passwd/username string ubuntu # Normal user's password, either in clear text #d-i passwd/user-password password insecure #d-i passwd/user-password-again password insecure # or encrypted using an MD5 hash. #d-i passwd/user-password-crypted password [MD5 hash] # Create the first user with the specified UID instead of the default. #d-i passwd/user-uid string 1010 # The installer will warn about weak passwords. 
If you are sure you know # what you're doing and want to override it, uncomment this. d-i user-setup/allow-password-weak boolean true # The user account will be added to some standard initial groups. To # override that, use this. #d-i passwd/user-default-groups string audio cdrom video # Set to true if you want to encrypt the first user's home directory. d-i user-setup/encrypt-home boolean false ### Apt setup # You can choose to install restricted and universe software, or to install # software from the backports repository. d-i apt-setup/restricted boolean true d-i apt-setup/universe boolean true d-i apt-setup/backports boolean true # Uncomment this if you don't want to use a network mirror. #d-i apt-setup/use_mirror boolean false # Select which update services to use; define the mirrors to be used. # Values shown below are the normal defaults. #d-i apt-setup/services-select multiselect security #d-i apt-setup/security_host string security.ubuntu.com #d-i apt-setup/security_path string /ubuntu # Additional repositories, local[0-9] available #d-i apt-setup/local0/repository string \ # http://local.server/ubuntu squeeze main #d-i apt-setup/local0/comment string local server # Enable deb-src lines #d-i apt-setup/local0/source boolean true # URL to the public key of the local repository; you must provide a key or # apt will complain about the unauthenticated repository and so the # sources.list line will be left commented out #d-i apt-setup/local0/key string http://local.server/key # By default the installer requires that repositories be authenticated # using a known gpg key. This setting can be used to disable that # authentication. Warning: Insecure, not recommended. 
#d-i debian-installer/allow_unauthenticated boolean true ### Package selection #tasksel tasksel/first multiselect ubuntu-desktop #tasksel tasksel/first multiselect lamp-server, print-server #tasksel tasksel/first multiselect kubuntu-desktop tasksel tasksel/first multiselect openssh-server # Individual additional packages to install d-i pkgsel/include string cracklib-runtime curl wget ssh openssh-server tcpdump ethtool git sudo python-netaddr coreutils # Whether to upgrade packages after debootstrap. # Allowed values: none, safe-upgrade, full-upgrade d-i pkgsel/upgrade select safe-upgrade # Language pack selection #d-i pkgsel/language-packs multiselect de, en, zh # Policy for applying updates. May be "none" (no automatic updates), # "unattended-upgrades" (install security updates automatically), or # "landscape" (manage system with Landscape). d-i pkgsel/update-policy select unattended-upgrades # Some versions of the installer can report back on what software you have # installed, and what software you use. The default is not to report back, # but sending reports helps the project determine what software is most # popular and include it on CDs. #popularity-contest popularity-contest/participate boolean false # By default, the system's locate database will be updated after the # installer has finished installing most packages. This may take a while, so # if you don't want it, you can set this to "false" to turn it off. d-i pkgsel/updatedb boolean false ### Boot loader installation # Grub is the default boot loader (for x86). If you want lilo installed # instead, uncomment this: #d-i grub-installer/skip boolean true # To also skip installing lilo, and install no bootloader, uncomment this # too: #d-i lilo-installer/skip boolean true # With a few exceptions for unusual partitioning setups, GRUB 2 is now the # default. 
If you need GRUB Legacy for some particular reason, then # uncomment this: d-i grub-installer/grub2_instead_of_grub_legacy boolean false # This is fairly safe to set, it makes grub install automatically to the MBR # if no other operating system is detected on the machine. d-i grub-installer/only_debian boolean true # This one makes grub-installer install to the MBR if it also finds some other # OS, which is less safe as it might not be able to boot that other OS. d-i grub-installer/with_other_os boolean true # Alternatively, if you want to install to a location other than the mbr, # uncomment and edit these lines: #d-i grub-installer/only_debian boolean false #d-i grub-installer/with_other_os boolean false #d-i grub-installer/bootdev string (hd0,0) # To install grub to multiple disks: #d-i grub-installer/bootdev string (hd0,0) (hd1,0) (hd2,0) # Optional password for grub, either in clear text #d-i grub-installer/password password r00tme #d-i grub-installer/password-again password r00tme # or encrypted using an MD5 hash, see grub-md5-crypt(8). #d-i grub-installer/password-crypted password [MD5 hash] # Use the following option to add additional boot parameters for the # installed system (if supported by the bootloader installer). # Note: options passed to the installer will be added automatically. #d-i debian-installer/add-kernel-opts string nousb ### Finishing up the installation # During installations from serial console, the regular virtual consoles # (VT1-VT6) are normally disabled in /etc/inittab. Uncomment the next # line to prevent this. d-i finish-install/keep-consoles boolean true # Avoid that last message about the install being complete. d-i finish-install/reboot_in_progress note # This will prevent the installer from ejecting the CD during the reboot, # which is useful in some situations. #d-i cdrom-detect/eject boolean false # This is how to make the installer shutdown when finished, but not # reboot into the installed system. 
#d-i debian-installer/exit/halt boolean true # This will power off the machine instead of just halting it. #d-i debian-installer/exit/poweroff boolean true ### X configuration # X can detect the right driver for some cards, but if you're preseeding, # you override whatever it chooses. Still, vesa will work most places. #xserver-xorg xserver-xorg/config/device/driver select vesa # A caveat with mouse autodetection is that if it fails, X will retry it # over and over. So if it's preseeded to be done, there is a possibility of # an infinite loop if the mouse is not autodetected. #xserver-xorg xserver-xorg/autodetect_mouse boolean true # Monitor autodetection is recommended. xserver-xorg xserver-xorg/autodetect_monitor boolean true # Uncomment if you have an LCD display. #xserver-xorg xserver-xorg/config/monitor/lcd boolean true # X has three configuration paths for the monitor. Here's how to preseed # the "medium" path, which is always available. The "simple" path may not # be available, and the "advanced" path asks too many questions. xserver-xorg xserver-xorg/config/monitor/selection-method \ select medium xserver-xorg xserver-xorg/config/monitor/mode-list \ select 1024x768 @ 60 Hz ### Preseeding other packages # Depending on what software you choose to install, or if things go wrong # during the installation process, it's possible that other questions may # be asked. You can preseed those too, of course. To get a list of every # possible question that could be asked during an install, do an # installation, and then run these commands: # debconf-get-selections --installer > file # debconf-get-selections >> file #### Advanced options ### Running custom commands during the installation # d-i preseeding is inherently not secure. Nothing in the installer checks # for attempts at buffer overflows or other exploits of the values of a # preconfiguration file like this one. Only use preconfiguration files from # trusted locations! 
To drive that home, and because it's generally useful, # here's a way to run any shell command you'd like inside the installer, # automatically. # This first command is run as early as possible, just after # preseeding is read. #d-i preseed/early_command string anna-install some-udeb # This command is run immediately before the partitioner starts. It may be # useful to apply dynamic partitioner preseeding that depends on the state # of the disks (which may not be visible when preseed/early_command runs). #d-i partman/early_command \ # string debconf-set partman-auto/disk "$(list-devices disk | head -n1)" # This command is run just before the install finishes, but when there is # still a usable /target directory. You can chroot to /target and use it # directly, or use the apt-install and in-target commands to easily install # packages and run commands in the target system. d-i preseed/late_command string os-xenapi-0.3.1/tools/install/create_ubuntu_template.sh0000775000175000017500000001471013160424533024455 0ustar jenkinsjenkins00000000000000#!/bin/bash # This script must be run on a XenServer or XCP machine # # It creates a clean ubuntu VM template # # For more details see: README.md set -o errexit set -o nounset set -o xtrace export LC_ALL=C # directory settings THIS_DIR=$(cd $(dirname "$0") && pwd) SCRIPT_DIR="$THIS_DIR/scripts" COMM_DIR="$THIS_DIR/common" CONF_DIR="$THIS_DIR/conf" # Include onexit commands . $SCRIPT_DIR/on_exit.sh # xapi functions . $COMM_DIR/functions # Source params - override xenrc params in your local.conf to suit your taste source $CONF_DIR/xenrc # # Prepare Dom0 # including installing XenAPI plugins # cd $THIS_DIR # Check if multiple hosts listed if have_multiple_hosts; then cat >&2 << EOF Info: multiple hosts found. This might mean that the XenServer is a member of a pool. 
EOF fi # # Configure Networking # host_uuid=$(get_current_host_uuid) MGT_NETWORK=`xe pif-list management=true host-uuid=$host_uuid params=network-uuid minimal=true` HOST_MGT_BRIDGE_OR_NET_NAME=`xe network-list uuid=$MGT_NETWORK params=bridge minimal=true` setup_network "$HOST_MGT_BRIDGE_OR_NET_NAME" if ! xenapi_is_listening_on "$HOST_MGT_BRIDGE_OR_NET_NAME"; then cat >&2 << EOF ERROR: XenAPI does not have an assigned IP address on the management network. please review your XenServer network configuration file. EOF exit 1 fi HOST_IP=$(xenapi_ip_on "$HOST_MGT_BRIDGE_OR_NET_NAME") # Set up ip forwarding, but skip on xcp-xapi if [ -a /etc/sysconfig/network ]; then if ! grep -q "FORWARD_IPV4=YES" /etc/sysconfig/network; then # FIXME: This doesn't work on reboot! echo "FORWARD_IPV4=YES" >> /etc/sysconfig/network fi fi # Also, enable ip forwarding in rc.local, since the above trick isn't working if ! grep -q "echo 1 >/proc/sys/net/ipv4/ip_forward" /etc/rc.local; then echo "echo 1 >/proc/sys/net/ipv4/ip_forward" >> /etc/rc.local fi # Enable ip forwarding at runtime as well echo 1 > /proc/sys/net/ipv4/ip_forward # # Shutdown previous runs # DO_SHUTDOWN=${DO_SHUTDOWN:-1} CLEAN_TEMPLATES=${CLEAN_TEMPLATES:-false} if [ "$DO_SHUTDOWN" = "1" ]; then # Shutdown all VM's that created previously clean_templates_arg="" if $CLEAN_TEMPLATES; then clean_templates_arg="--remove-templates" fi $SCRIPT_DIR/uninstall-os-vpx.sh $clean_templates_arg # Destroy any instances that were launched for uuid in `xe vm-list resident-on=$host_uuid | grep -1 instance | grep uuid | sed "s/.*\: //g"`; do echo "Shutting down nova instance $uuid" xe vm-uninstall uuid=$uuid force=true done # Destroy orphaned vdis for uuid in `xe vdi-list | grep -1 Glance | grep uuid | sed "s/.*\: //g"`; do xe vdi-destroy uuid=$uuid done fi # # Create Ubuntu VM template # and/or create VM from template # templateuuid=$(get_template $TNAME $host_uuid) if [ -z "$templateuuid" ]; then # # Install Ubuntu over network # 
UBUNTU_INST_BRIDGE_OR_NET_NAME=${UBUNTU_INST_BRIDGE_OR_NET_NAME:-"$HOST_MGT_BRIDGE_OR_NET_NAME"} # always update the preseed file, in case we have a newer one PRESEED_URL=${PRESEED_URL:-""} if [ -z "$PRESEED_URL" ]; then PRESEED_URL="${HOST_IP}/ubuntupreseed.cfg" HTTP_SERVER_LOCATION="/opt/xensource/www" if [ ! -e $HTTP_SERVER_LOCATION ]; then HTTP_SERVER_LOCATION="/var/www/html" mkdir -p $HTTP_SERVER_LOCATION fi # Copy the tools DEB to the XS web server XS_TOOLS_URL="https://github.com/downloads/citrix-openstack/warehouse/xe-guest-utilities_5.6.100-651_amd64.deb" ISO_DIR="/opt/xensource/packages/iso" if [ -e "$ISO_DIR" ]; then TOOLS_ISO=$(ls -1 $ISO_DIR/*-tools-*.iso | head -1) TMP_DIR=/tmp/temp.$RANDOM mkdir -p $TMP_DIR mount -o loop $TOOLS_ISO $TMP_DIR # the target deb package may be *amd64.deb or *all.deb, # so use *amd64.deb by default. If it doesn't exist, # then use *all.deb. DEB_FILE=$(ls $TMP_DIR/Linux/*amd64.deb || ls $TMP_DIR/Linux/*all.deb) cp $DEB_FILE $HTTP_SERVER_LOCATION umount $TMP_DIR rmdir $TMP_DIR XS_TOOLS_URL=${HOST_IP}/$(basename $DEB_FILE) fi cp -f $CONF_DIR/ubuntupreseed.cfg $HTTP_SERVER_LOCATION cp -f $SCRIPT_DIR/ubuntu_latecommand.sh $HTTP_SERVER_LOCATION/latecommand.sh sed \ -e "s,\(d-i mirror/http/hostname string\).*,\1 $UBUNTU_INST_HTTP_HOSTNAME,g" \ -e "s,\(d-i mirror/http/directory string\).*,\1 $UBUNTU_INST_HTTP_DIRECTORY,g" \ -e "s,\(d-i mirror/http/proxy string\).*,\1 $UBUNTU_INST_HTTP_PROXY,g" \ -e "s,\(d-i passwd/root-password password\).*,\1 $GUEST_PASSWORD,g" \ -e "s,\(d-i passwd/root-password-again password\).*,\1 $GUEST_PASSWORD,g" \ -e "s,\(d-i preseed/late_command string\).*,\1 in-target mkdir -p /tmp; in-target wget --no-proxy ${HOST_IP}/latecommand.sh -O /root/latecommand.sh; in-target bash /root/latecommand.sh,g" \ -i "${HTTP_SERVER_LOCATION}/ubuntupreseed.cfg" sed \ -e "s,@XS_TOOLS_URL@,$XS_TOOLS_URL,g" \ -i "${HTTP_SERVER_LOCATION}/latecommand.sh" fi # Update the template $SCRIPT_DIR/install_ubuntu_template.sh 
$PRESEED_URL # create a new VM from the given template with eth0 attached to the given # network $SCRIPT_DIR/install-os-vpx.sh \ -t "$UBUNTU_INST_TEMPLATE_NAME" \ -n "$UBUNTU_INST_BRIDGE_OR_NET_NAME" \ -l "$GUEST_NAME" set_vm_memory "$GUEST_NAME" "1024" xe vm-start vm="$GUEST_NAME" on=$host_uuid # wait for install to finish wait_for_VM_to_halt "$GUEST_NAME" # set VM to restart after a reboot vm_uuid=$(xe_min vm-list name-label="$GUEST_NAME") xe vm-param-set actions-after-reboot=Restart uuid="$vm_uuid" # Make template from VM snuuid=$(xe vm-snapshot vm="$GUEST_NAME" new-name-label="$SNAME_TEMPLATE") xe snapshot-clone uuid=$snuuid new-name-label="$TNAME" xe vm-uninstall vm="$GUEST_NAME" force=true else echo "the template already exists" fi template_uuid=$(get_template "$TNAME" $host_uuid) exist_val=$(xe template-param-get uuid=$template_uuid param-name=PV-args) if [ -n "$exist_val" ]; then xe template-param-set uuid=$template_uuid PV-args="" fi set +x echo "################################################################################" echo "" echo "Template creation done!" 
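The `sed -e "s,\(d-i ...\).*,\1 $VAR,g"` block earlier in this script templates the preseed file by rewriting whole directive lines in place. The same substitution can be sketched in Python (`fill_preseed` is a hypothetical helper for illustration, not code from this repo):

```python
import re


def fill_preseed(text, hostname, directory, proxy, password):
    """Rewrite selected d-i directives, mirroring the sed -e expressions
    used by create_ubuntu_template.sh: keep the directive prefix (the
    backreference) and replace everything after it with the new value."""
    subs = [
        (r'(d-i mirror/http/hostname string).*', hostname),
        (r'(d-i mirror/http/directory string).*', directory),
        (r'(d-i mirror/http/proxy string).*', proxy),
        (r'(d-i passwd/root-password password).*', password),
        (r'(d-i passwd/root-password-again password).*', password),
    ]
    for pattern, value in subs:
        text = re.sub(pattern, r'\1 ' + value, text)
    return text
```

As with the sed version, a directive absent from the input is simply left untouched, and `.*` stops at the end of the line, so each substitution only ever rewrites the tail of its own directive.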
echo "################################################################################" os-xenapi-0.3.1/tools/install/devstack/0000775000175000017500000000000013160424745021164 5ustar jenkinsjenkins00000000000000os-xenapi-0.3.1/tools/install/devstack/install_devstack.sh0000775000175000017500000002700413160424533025053 0ustar jenkinsjenkins00000000000000#!/bin/bash # This script is run by install_on_xen_host.sh # # It modifies the ubuntu image created by install_on_xen_host.sh # and previously modified by prepare_guest_template.sh # # This script is responsible for: # - creating a DomU VM # - creating run.sh, to run the code on DomU boot # # by install_on_xen_host.sh # Exit on errors set -o errexit # Echo commands set -o xtrace # This directory THIS_DIR=$(cd $(dirname "$0") && pwd) TOP_DIR="$THIS_DIR/../" SCRIPT_DIR="$TOP_DIR/scripts" COMM_DIR="$TOP_DIR/common" CONF_DIR="$TOP_DIR/conf" # Include onexit commands . $SCRIPT_DIR/on_exit.sh # xapi functions . $COMM_DIR/functions # Source params source $CONF_DIR/xenrc # Defaults for optional arguments DEVSTACK_SRC=${DEVSTACK_SRC:-"https://github.com/openstack-dev/devstack"} LOGDIR="/opt/stack/devstack_logs" DISABLE_JOURNALING="false" # Number of options passed to this script REMAINING_OPTIONS="$#" # Get optional parameters set +e while getopts ":d:l:r" flag; do REMAINING_OPTIONS=$(expr "$REMAINING_OPTIONS" - 1) case "$flag" in d) DEVSTACK_SRC="$OPTARG" REMAINING_OPTIONS=$(expr "$REMAINING_OPTIONS" - 1) ;; l) LOGDIR="$OPTARG" REMAINING_OPTIONS=$(expr "$REMAINING_OPTIONS" - 1) ;; r) DISABLE_JOURNALING="true" ;; \?) print_usage_and_die "Invalid option -$OPTARG" ;; esac done set -e # Make sure that all options were processed if [ "0" != "$REMAINING_OPTIONS" ]; then print_usage_and_die "ERROR: some arguments were not recognised!" 
fi # # Prepare VM for DevStack # # # Configure Networking # host_uuid=$(get_current_host_uuid) MGT_NETWORK=`xe pif-list management=true host-uuid=$host_uuid params=network-uuid minimal=true` MGT_BRIDGE_OR_NET_NAME=`xe network-list uuid=$MGT_NETWORK params=bridge minimal=true` setup_network "$VM_BRIDGE_OR_NET_NAME" setup_network "$MGT_BRIDGE_OR_NET_NAME" setup_network "$PUB_BRIDGE_OR_NET_NAME" if parameter_is_specified "FLAT_NETWORK_BRIDGE"; then if [ "$(bridge_for "$VM_BRIDGE_OR_NET_NAME")" != "$(bridge_for "$FLAT_NETWORK_BRIDGE")" ]; then cat >&2 << EOF ERROR: FLAT_NETWORK_BRIDGE is specified in localrc file, and either no network found on XenServer by searching for networks by that value as name-label or bridge name or the network found does not match the network specified by VM_BRIDGE_OR_NET_NAME. Please check your localrc file. EOF exit 1 fi fi if ! xenapi_is_listening_on "$MGT_BRIDGE_OR_NET_NAME"; then cat >&2 << EOF ERROR: XenAPI does not have an assigned IP address on the management network. please review your XenServer network configuration / localrc file. EOF exit 1 fi HOST_IP=$(xenapi_ip_on "$MGT_BRIDGE_OR_NET_NAME") # Also, enable ip forwarding in rc.local, since the above trick isn't working if ! 
grep -q "echo 1 >/proc/sys/net/ipv4/ip_forward" /etc/rc.local; then echo "echo 1 >/proc/sys/net/ipv4/ip_forward" >> /etc/rc.local fi # Enable ip forwarding at runtime as well echo 1 > /proc/sys/net/ipv4/ip_forward HOST_IP=$(xenapi_ip_on "$MGT_BRIDGE_OR_NET_NAME") # Uninstall the previous ubuntu VM, if it exists vm_exist=$(xe vm-list name-label="$DEV_STACK_DOMU_NAME" --minimal) if [ "$vm_exist" != "" ] then echo "Uninstall the previous VM" xe vm-uninstall vm="$DEV_STACK_DOMU_NAME" force=true fi echo "Install a new ubuntu VM from the previously created template" vm_uuid=$(xe vm-install template="$TNAME" new-name-label="$DEV_STACK_DOMU_NAME") xe vm-param-set other-config:os-vpx=true uuid="$vm_uuid" # Install XenServer tools, and other such things $SCRIPT_DIR/prepare_guest_template.sh "$DEV_STACK_DOMU_NAME" # Set virtual machine parameters set_vm_memory "$DEV_STACK_DOMU_NAME" "$VM_MEM_MB" # Max out VCPU count for better performance max_vcpus "$DEV_STACK_DOMU_NAME" # Wipe out all network cards destroy_all_vifs_of "$DEV_STACK_DOMU_NAME" # Add only one interface to prepare the guest template add_interface "$DEV_STACK_DOMU_NAME" "$MGT_BRIDGE_OR_NET_NAME" "0" # Start the VM to run the prepare steps xe vm-start vm="$DEV_STACK_DOMU_NAME" on=$host_uuid # Wait for prep script to finish and shutdown system wait_for_VM_to_halt "$DEV_STACK_DOMU_NAME" ## Setup network cards # Wipe out all destroy_all_vifs_of "$DEV_STACK_DOMU_NAME" # Tenant network add_interface "$DEV_STACK_DOMU_NAME" "$VM_BRIDGE_OR_NET_NAME" "$VM_DEV_NR" # Management network add_interface "$DEV_STACK_DOMU_NAME" "$MGT_BRIDGE_OR_NET_NAME" "$MGT_DEV_NR" # Public network add_interface "$DEV_STACK_DOMU_NAME" "$PUB_BRIDGE_OR_NET_NAME" "$PUB_DEV_NR" # # Persist the VM's interfaces # $SCRIPT_DIR/persist_domU_interfaces.sh "$DEV_STACK_DOMU_NAME" FLAT_NETWORK_BRIDGE="${FLAT_NETWORK_BRIDGE:-$(bridge_for "$VM_BRIDGE_OR_NET_NAME")}" append_kernel_cmdline "$DEV_STACK_DOMU_NAME" "flat_network_bridge=${FLAT_NETWORK_BRIDGE}" # Disable FS journaling.
This reduces disk IO, but may leave the file system # unstable after prolonged use if [ "$DISABLE_JOURNALING" = "true" ]; then vm_vbd=$(xe vbd-list vm-name-label=$DEV_STACK_DOMU_NAME --minimal) vm_vdi=$(xe vdi-list vbd-uuids=$vm_vbd --minimal) dom_zero_uuid=$(xe vm-list dom-id=0 resident-on=$host_uuid --minimal) tmp_vbd=$(xe vbd-create device=autodetect bootable=false mode=RW type=Disk vdi-uuid=$vm_vdi vm-uuid=$dom_zero_uuid) xe vbd-plug uuid=$tmp_vbd sr_id=$(get_local_sr) kpartx -p p -avs /dev/sm/backend/$sr_id/$vm_vdi echo "********Before disabling FS journaling********" tune2fs -l /dev/mapper/${vm_vdi}p1 | grep "Filesystem features" echo "********Disabling FS journaling********" tune2fs -O ^has_journal /dev/mapper/${vm_vdi}p1 echo "********After disabling FS journaling********" tune2fs -l /dev/mapper/${vm_vdi}p1 | grep "Filesystem features" kpartx -p p -dvs /dev/sm/backend/$sr_id/$vm_vdi xe vbd-unplug uuid=$tmp_vbd timeout=60 xe vbd-destroy uuid=$tmp_vbd fi # Add a separate xvdb, if it was requested if [[ "0" != "$XEN_XVDB_SIZE_GB" ]]; then vm=$(xe vm-list name-label="$DEV_STACK_DOMU_NAME" --minimal) # Add a new disk localsr=$(get_local_sr) extra_vdi=$(xe vdi-create \ name-label=xvdb-added-by-devstack \ virtual-size="${XEN_XVDB_SIZE_GB}GiB" \ sr-uuid=$localsr type=user) xe vbd-create vm-uuid=$vm vdi-uuid=$extra_vdi device=1 fi # # Run DevStack VM # xe vm-start vm="$DEV_STACK_DOMU_NAME" on=$host_uuid # Get hold of the Management IP of OpenStack VM OS_VM_MANAGEMENT_ADDRESS=$MGT_IP if [ $OS_VM_MANAGEMENT_ADDRESS == "dhcp" ]; then OS_VM_MANAGEMENT_ADDRESS=$(find_ip_by_name $DEV_STACK_DOMU_NAME $MGT_DEV_NR) fi # Create an ssh-keypair, and set it up for dom0 user rm -f /root/dom0key /root/dom0key.pub ssh-keygen -f /root/dom0key -P "" -C "dom0" DOMID=$(get_domid "$DEV_STACK_DOMU_NAME") xenstore-write /local/domain/$DOMID/authorized_keys/$DOMZERO_USER "$(cat /root/dom0key.pub)" xenstore-chmod -u /local/domain/$DOMID/authorized_keys/$DOMZERO_USER r$DOMID function
run_on_appliance { ssh \ -i /root/dom0key \ -o UserKnownHostsFile=/dev/null \ -o StrictHostKeyChecking=no \ -o BatchMode=yes \ "$DOMZERO_USER@$OS_VM_MANAGEMENT_ADDRESS" "$@" } # Wait until we can log in to the appliance while ! run_on_appliance true; do sleep 1 done # Remove authorized_keys updater cronjob echo "" | run_on_appliance crontab - # Generate a passwordless ssh key for domzero user echo "ssh-keygen -f /home/$DOMZERO_USER/.ssh/id_rsa -C $DOMZERO_USER@appliance -N \"\" -q" | run_on_appliance # Authenticate that user to dom0 run_on_appliance cat /home/$DOMZERO_USER/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys set +x echo "################################################################################" echo "" echo "VM configuration done!" echo "################################################################################" xe vm-shutdown vm="$DEV_STACK_DOMU_NAME" wait_for_VM_to_halt "$DEV_STACK_DOMU_NAME" # # Mount the VDI # echo "check vdi mapping" STAGING_DIR=$($SCRIPT_DIR/manage-vdi open $DEV_STACK_DOMU_NAME 0 1 | grep -o "/tmp/tmp.[[:alnum:]]*") add_on_exit "$SCRIPT_DIR/manage-vdi close $DEV_STACK_DOMU_NAME 0 1" # Make sure we have a stage if [ ! -d $STAGING_DIR/etc ]; then echo "ERROR: Stage is not properly set up!" exit 1 fi if [ ! -d "$STAGING_DIR/opt/stack" ]; then echo "ERROR: /opt/stack is not properly set up!" exit 1 fi rm -f $STAGING_DIR/opt/stack/local.conf pif=$(xe pif-list management=true host-uuid=$host_uuid --minimal) XENSERVER_IP=$(xe pif-param-get param-name=IP uuid=$pif) # Create a systemd service for devstack cat >$STAGING_DIR/etc/systemd/system/devstack.service << EOF [Unit] Description=Install OpenStack by DevStack [Service] Type=oneshot RemainAfterExit=yes ExecStartPre=/bin/rm -f /opt/stack/runsh.succeeded ExecStart=/bin/su -c "/opt/stack/run.sh" stack StandardOutput=tty StandardError=tty [Install] WantedBy=multi-user.target EOF if [ $? -ne 0 ]; then echo "fatal error: installing the devstack service failed."
exit 1 fi # enable this service rm -f $STAGING_DIR/etc/systemd/system/multi-user.target.wants/devstack.service ln -s /etc/systemd/system/devstack.service $STAGING_DIR/etc/systemd/system/multi-user.target.wants/devstack.service # Gracefully cp only if source file/dir exists function cp_it { if [ -e $1 ] || [ -d $1 ]; then cp -pRL $1 $2 fi } # Copy over your ssh keys and env if desired cp_it ~/.ssh $STAGING_DIR/opt/stack/.ssh cp_it ~/.ssh/id_rsa.pub $STAGING_DIR/opt/stack/.ssh/authorized_keys cp_it ~/.gitconfig $STAGING_DIR/opt/stack/.gitconfig cp_it ~/.vimrc $STAGING_DIR/opt/stack/.vimrc cp_it ~/.bashrc $STAGING_DIR/opt/stack/.bashrc if [ -d $DEVSTACK_SRC ]; then # A local devstack repository exists; copy it to DomU cp_it $DEVSTACK_SRC $STAGING_DIR/opt/stack/ fi # Journald default is to not persist logs to disk if /var/log/journal is # not present. Update the configuration to set storage to persistent which # will create /var/log/journal if necessary and store logs on disk. This # avoids the situation where test runs can fill the journald ring buffer # deleting older logs that may be important to the job. JOURNALD_CFG=$STAGING_DIR/etc/systemd/journald.conf if [ -f $JOURNALD_CFG ] ; then sed -i -e 's/#Storage=auto/Storage=persistent/' $JOURNALD_CFG fi # Configure run.sh DOMU_STACK_DIR=/opt/stack DOMU_DEV_STACK_DIR=$DOMU_STACK_DIR/devstack cat <<EOF >$STAGING_DIR/opt/stack/run.sh #!/bin/bash set -eux ( flock -n 9 || exit 1 sudo chown -R stack $DOMU_STACK_DIR cd $DOMU_STACK_DIR [ -e /opt/stack/runsh.succeeded ] && rm /opt/stack/runsh.succeeded echo \$\$ >> /opt/stack/run_sh.pid if [ ! -d $DOMU_DEV_STACK_DIR ]; then echo "Cannot find the devstack source code; getting it from git."
git clone $DEVSTACK_SRC $DOMU_DEV_STACK_DIR fi cp $DOMU_STACK_DIR/local.conf $DOMU_DEV_STACK_DIR/ cd $DOMU_DEV_STACK_DIR ./unstack.sh || true ./stack.sh # Got to the end - success touch /opt/stack/runsh.succeeded # Update /etc/issue ( echo "OpenStack VM - Installed by DevStack" IPADDR=$(ip -4 address show eth0 | sed -n 's/.*inet \([0-9\.]\+\).*/\1/p') echo " Management IP: $IPADDR" echo -n " Devstack run: " if [ -e /opt/stack/runsh.succeeded ]; then echo "SUCCEEDED" else echo "FAILED" fi echo "" ) > /opt/stack/issue sudo cp /opt/stack/issue /etc/issue rm /opt/stack/run_sh.pid ) 9> /opt/stack/.runsh_lock EOF chmod 755 $STAGING_DIR/opt/stack/run.sh if [ ! -f $TOP_DIR/local.conf ]; then echo "ERROR: You should prepare a local.conf and put it under $TOP_DIR" exit 1 fi cp_it $TOP_DIR/local.conf $STAGING_DIR/opt/stack/local.conf cp_it $THIS_DIR/run.sh $STAGING_DIR/opt/stack/run.sh os-xenapi-0.3.1/tools/install/common/0000775000175000017500000000000013160424745020650 5ustar jenkinsjenkins00000000000000os-xenapi-0.3.1/tools/install/common/functions0000775000175000017500000002260113160424533022602 0ustar jenkinsjenkins00000000000000#!/bin/bash xe_min() { local cmd="$1" shift xe "$cmd" --minimal "$@" } function die_with_error { local err_msg err_msg="$1" echo "$err_msg" >&2 exit 1 } function get_local_sr { xe pool-list params=default-SR minimal=true } function find_ip_by_name { local guest_name="$1" local interface="$2" local period=10 local max_tries=10 local i=0 while true; do if [ $i -ge $max_tries ]; then echo "Timeout: ip address for interface $interface of $guest_name" exit 11 fi ipaddress=$(xe vm-list --minimal \ name-label=$guest_name \ params=networks | sed -ne "s,^.*${interface}/ip: \([0-9.]*\).*\$,\1,p") if [ -z "$ipaddress" ]; then sleep $period i=$((i+1)) else echo $ipaddress break fi done } function _vm_uuid { local vm_name_label vm_name_label="$1" xe vm-list name-label="$vm_name_label" --minimal } function _create_new_network { local name_label name_label=$1 
uuid=$(xe network-create name-label="$name_label") xe network-param-add uuid=$uuid param-name=other-config assume_network_is_shared='true' } function _multiple_networks_with_name { local name_label name_label=$1 # A comma indicates multiple matches xe network-list name-label="$name_label" --minimal | grep -q "," } function _network_exists { local name_label name_label=$1 ! [ -z "$(xe network-list name-label="$name_label" --minimal)" ] } function _bridge_exists { local bridge bridge=$1 ! [ -z "$(xe network-list bridge="$bridge" --minimal)" ] } function _network_uuid { local bridge_or_net_name bridge_or_net_name=$1 if _bridge_exists "$bridge_or_net_name"; then xe network-list bridge="$bridge_or_net_name" --minimal else xe network-list name-label="$bridge_or_net_name" --minimal fi } function add_interface { local vm_name_label local bridge_or_network_name vm_name_label="$1" bridge_or_network_name="$2" device_number="$3" local vm local net vm=$(_vm_uuid "$vm_name_label") net=$(_network_uuid "$bridge_or_network_name") xe vif-create network-uuid=$net vm-uuid=$vm device=$device_number } function setup_network { local bridge_or_net_name bridge_or_net_name=$1 if ! _bridge_exists "$bridge_or_net_name"; then if _network_exists "$bridge_or_net_name"; then if _multiple_networks_with_name "$bridge_or_net_name"; then cat >&2 << EOF ERROR: Multiple networks found matching name-label to "$bridge_or_net_name" please review your XenServer network configuration / localrc file. 
EOF exit 1 fi else _create_new_network "$bridge_or_net_name" fi fi } function bridge_for { local bridge_or_net_name bridge_or_net_name=$1 if _bridge_exists "$bridge_or_net_name"; then echo "$bridge_or_net_name" else xe network-list name-label="$bridge_or_net_name" params=bridge --minimal fi } function xenapi_ip_on { local bridge_or_net_name bridge_or_net_name=$1 ip -4 addr show $(bridge_for "$bridge_or_net_name") |\ awk '/inet/{split($2, ip, "/"); print ip[1];}' } function xenapi_is_listening_on { local bridge_or_net_name bridge_or_net_name=$1 ! [ -z $(xenapi_ip_on "$bridge_or_net_name") ] } function parameter_is_specified { local parameter_name parameter_name=$1 compgen -v | grep "$parameter_name" } function append_kernel_cmdline { local vm_name_label local kernel_args vm_name_label="$1" kernel_args="$2" local vm local pv_args vm=$(_vm_uuid "$vm_name_label") pv_args=$(xe vm-param-get param-name=PV-args uuid=$vm) xe vm-param-set PV-args="$pv_args $kernel_args" uuid=$vm } function destroy_all_vifs_of { local vm_name_label vm_name_label="$1" local vm vm=$(_vm_uuid "$vm_name_label") IFS=, for vif in $(xe vif-list vm-uuid=$vm --minimal); do xe vif-destroy uuid="$vif" done unset IFS } function have_multiple_hosts { xe host-list --minimal | grep -q "," } function get_current_host_uuid { source /etc/xensource-inventory; echo $INSTALLATION_UUID } function get_current_dom0_uuid { source /etc/xensource-inventory; echo $CONTROL_DOMAIN_UUID } function attach_network { local bridge_or_net_name bridge_or_net_name="$1" local net local host net=$(_network_uuid "$bridge_or_net_name") host=$(get_current_host_uuid) xe network-attach uuid=$net host-uuid=$host } function set_vm_memory { local vm_name_label local memory vm_name_label="$1" memory="$2" local vm vm=$(_vm_uuid "$vm_name_label") xe vm-memory-limits-set \ static-min=${memory}MiB \ static-max=${memory}MiB \ dynamic-min=${memory}MiB \ dynamic-max=${memory}MiB \ uuid=$vm } function set_vm_disk { local vm_name_label local 
vm_disk_size vm_name_label="$1" vm_disk_size="$2" local vm_uuid local vm_vbd local vm_vdi vm_uuid=$(xe vm-list name-label=$vm_name_label --minimal) vm_vbd=$(xe vbd-list vm-uuid=$vm_uuid device=xvda --minimal) vm_vdi=$(xe vdi-list vbd-uuids=$vm_vbd --minimal) xe vdi-resize uuid=$vm_vdi disk-size=$((vm_disk_size * 1024 * 1024 * 1024)) } function max_vcpus { local vm_name_label vm_name_label="$1" local vm local host local cpu_count host=$(get_current_host_uuid) vm=$(_vm_uuid "$vm_name_label") cpu_count=$(xe host-param-get \ param-name=cpu_info \ uuid=$host | sed -e 's/^.*cpu_count: \([0-9]*\);.*$/\1/g') if [ -z "$cpu_count" ]; then # get dom0's vcpu count cpu_count=$(cat /proc/cpuinfo | grep processor | wc -l) fi # Assert cpu_count is not empty [ -n "$cpu_count" ] # Assert it has a numeric nonzero value expr "$cpu_count" + 0 # 8 VCPUs should be enough for the devstack VM; avoid using too # many VCPUs: # 1. too many VCPUs may trigger a kernel bug which results in the VM # not being able to boot: # https://kernel.googlesource.com/pub/scm/linux/kernel/git/wsa/linux/+/e2e004acc7cbe3c531e752a270a74e95cde3ea48 # 2. The remaining CPUs can be used for other purposes, # e.g. booting test VMs.
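# The VCPU-capping logic described in the comments above can be sketched as a
# standalone snippet. This is a hypothetical, self-contained illustration: it
# reads the CPU count from /proc/cpuinfo rather than from XenAPI's cpu_info.

```shell
#!/bin/bash
# Hypothetical standalone sketch of the VCPU cap: read the host CPU
# count and limit the value given to the DevStack VM to 8 VCPUs.
set -o errexit

MAX_VCPUS=8
cpu_count=$(grep -c ^processor /proc/cpuinfo)
if [ "$cpu_count" -ge "$MAX_VCPUS" ]; then
    cpu_count=$MAX_VCPUS
fi
echo "$cpu_count"
```

In the real function the capped value is then applied with `xe vm-param-set VCPUs-max=...` and `VCPUs-at-startup=...`.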
MAX_VCPUS=8 if [ $cpu_count -ge $MAX_VCPUS ]; then cpu_count=$MAX_VCPUS fi xe vm-param-set uuid=$vm VCPUs-max=$cpu_count xe vm-param-set uuid=$vm VCPUs-at-startup=$cpu_count } function get_template { local tmp_name="$1" local host="$2" tmp=$(xe template-list name-label="$tmp_name" --minimal) if [[ $tmp == *","* ]]; then tmp_group=$tmp tmp=$(xe template-list name-label="$tmp_name" \ possible-hosts=$host --minimal) # Current host has no template if [[ -z $tmp ]]; then tmp=${tmp_group##*,} fi fi echo $tmp } function clean_template_other_conf { local tmp_name="$1" tmp=$(xe template-list name-label="$tmp_name" --minimal) if [ -n "$tmp" ]; then echo " $tmp: clearing other-config" IFS=',' for i in $tmp; do xe template-param-clear param-name=other-config uuid=$i > /dev/null done fi } function uninstall_template { local tmp_name="$1" tmp=$(xe template-list name-label="$tmp_name" --minimal) if [ -n "$tmp" ]; then echo " $tmp already exists, uninstalling" IFS=',' for i in $tmp; do xe template-uninstall template-uuid="$i" force=true > /dev/null done fi } function get_domid { local vm_name_label vm_name_label="$1" xe vm-list name-label="$vm_name_label" params=dom-id minimal=true } function install_conntrack_tools { local xs_host local xs_ver_major local centos_ver local conntrack_conf xs_host=$(get_current_host_uuid) xs_ver_major=$(xe host-param-get uuid=$xs_host param-name=software-version param-key=product_version_text_short | cut -d'.' -f 1) if [ $xs_ver_major -gt 6 ]; then # Only support conntrack-tools in Dom0 with XS7.0 and above if [ !
-f /usr/sbin/conntrackd ]; then sed -i s/#baseurl=/baseurl=/g /etc/yum.repos.d/CentOS-Base.repo centos_ver=$(yum version nogroups |grep Installed | cut -d' ' -f 2 | cut -d'/' -f 1 | cut -d'-' -f 1) yum install -y --enablerepo=base --releasever=$centos_ver conntrack-tools # Back up conntrackd.conf after installing conntrack-tools; use the one with statistics mode mv /etc/conntrackd/conntrackd.conf /etc/conntrackd/conntrackd.conf.back conntrack_conf=$(find /usr/share/doc -name conntrackd.conf |grep stats) cp $conntrack_conf /etc/conntrackd/conntrackd.conf fi service conntrackd restart fi } # # Wait for VM to halt # function wait_for_VM_to_halt { #mgmt_ip="$1" GUEST_VM_NAME="$1" set +x echo "Waiting for the VM to halt. Progress in-VM can be checked with XenCenter or xl console:" domid=$(get_domid "$GUEST_VM_NAME") echo "ssh root@host \"xl console $domid\"" while true; do state=$(xe_min vm-list name-label="$GUEST_VM_NAME" power-state=halted) if [ -n "$state" ]; then break else echo -n "." sleep 20 fi done set -x } os-xenapi-0.3.1/tools/install/scripts/0000775000175000017500000000000013160424745021047 5ustar jenkinsjenkins00000000000000os-xenapi-0.3.1/tools/install/scripts/prepare_guest_template.sh0000775000175000017500000000774313160424533026154 0ustar jenkinsjenkins00000000000000#!/bin/bash # This script is run by install_os_domU.sh # # Parameters: # - $GUEST_NAME - hostname for the DomU VM # # It modifies the ubuntu image created by install_os_domU.sh # # This script is responsible for customizing the fresh ubuntu # image so on boot it runs the prepare_guest.sh script # that modifies the VM so it is ready to run stack.sh. # It does this by mounting the disk image of the VM.
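# The mount-and-customize pattern described above boils down to writing
# boot-time hooks into a staging directory that maps onto the guest's root
# file system. A minimal, self-contained sketch of that pattern follows; it is
# a hypothetical illustration in which a temp directory stands in for the
# mounted VDI, and the hook paths are assumptions, not the exact originals.

```shell
#!/bin/bash
# Hypothetical sketch: write an rc.local-style boot hook into a staging
# directory, as prepare_guest_template.sh does for the mounted image.
set -o errexit

STAGING_DIR=$(mktemp -d)          # stands in for the mounted VDI
mkdir -p "$STAGING_DIR/etc"

# Heredoc writes the hook that the guest will execute on first boot
cat <<EOF >"$STAGING_DIR/etc/rc.local"
#!/bin/sh -e
bash /opt/stack/prepare_guest.sh > /opt/stack/prepare_guest.log 2>&1
EOF
chmod 0755 "$STAGING_DIR/etc/rc.local"

first_line=$(head -n 1 "$STAGING_DIR/etc/rc.local")
rm -rf "$STAGING_DIR"
```

In the real scripts the staging directory comes from `manage-vdi open`, and `add_on_exit` guarantees the VDI is closed again even on failure.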
# # The resultant image is started by install_os_domU.sh, # and once the VM has shutdown, build_xva.sh is run set -o errexit set -o nounset set -o xtrace # This directory THIS_DIR=$(cd $(dirname "$0") && pwd) TOP_DIR="$THIS_DIR/../" SCRIPT_DIR="$TOP_DIR/scripts" COMM_DIR="$TOP_DIR/common" CONF_DIR="$TOP_DIR/conf" # Include onexit commands . $SCRIPT_DIR/on_exit.sh # xapi functions . $COMM_DIR/functions # Source params source $CONF_DIR/xenrc # # Parameters # GUEST_NAME="$1" # Mount the VDI STAGING_DIR=$($TOP_DIR/scripts/manage-vdi open $GUEST_NAME 0 1 | grep -o "/tmp/tmp.[[:alnum:]]*") add_on_exit "$TOP_DIR/scripts/manage-vdi close $GUEST_NAME 0 1" # Make sure we have a stage if [ ! -d $STAGING_DIR/etc ]; then echo "Stage is not properly set up!" exit 1 fi # Copy prepare_guest.sh to VM mkdir -p $STAGING_DIR/opt/stack/ cp $SCRIPT_DIR/prepare_guest.sh $STAGING_DIR/opt/stack/prepare_guest.sh # backup rc.local cp $STAGING_DIR/etc/rc.local $STAGING_DIR/etc/rc.local.preparebackup echo "$STAGING_DIR/etc/rc.local" # run prepare_guest.sh on boot cat <$STAGING_DIR/etc/rc.local #!/bin/sh -e bash /opt/stack/prepare_guest.sh \\ "$GUEST_PASSWORD" "$STACK_USER" "$DOMZERO_USER" \\ > /opt/stack/prepare_guest.log 2>&1 EOF echo "$STAGING_DIR/etc/apt/sources.list" # Update ubuntu repositories cat > $STAGING_DIR/etc/apt/sources.list << EOF deb http://${UBUNTU_INST_HTTP_HOSTNAME}${UBUNTU_INST_HTTP_DIRECTORY} ${UBUNTU_INST_RELEASE} main restricted deb-src http://${UBUNTU_INST_HTTP_HOSTNAME}${UBUNTU_INST_HTTP_DIRECTORY} ${UBUNTU_INST_RELEASE} main restricted deb http://${UBUNTU_INST_HTTP_HOSTNAME}${UBUNTU_INST_HTTP_DIRECTORY} ${UBUNTU_INST_RELEASE}-updates main restricted deb-src http://${UBUNTU_INST_HTTP_HOSTNAME}${UBUNTU_INST_HTTP_DIRECTORY} ${UBUNTU_INST_RELEASE}-updates main restricted deb http://${UBUNTU_INST_HTTP_HOSTNAME}${UBUNTU_INST_HTTP_DIRECTORY} ${UBUNTU_INST_RELEASE} universe deb-src http://${UBUNTU_INST_HTTP_HOSTNAME}${UBUNTU_INST_HTTP_DIRECTORY} ${UBUNTU_INST_RELEASE} 
universe deb http://${UBUNTU_INST_HTTP_HOSTNAME}${UBUNTU_INST_HTTP_DIRECTORY} ${UBUNTU_INST_RELEASE}-updates universe deb-src http://${UBUNTU_INST_HTTP_HOSTNAME}${UBUNTU_INST_HTTP_DIRECTORY} ${UBUNTU_INST_RELEASE}-updates universe deb http://${UBUNTU_INST_HTTP_HOSTNAME}${UBUNTU_INST_HTTP_DIRECTORY} ${UBUNTU_INST_RELEASE} multiverse deb-src http://${UBUNTU_INST_HTTP_HOSTNAME}${UBUNTU_INST_HTTP_DIRECTORY} ${UBUNTU_INST_RELEASE} multiverse deb http://${UBUNTU_INST_HTTP_HOSTNAME}${UBUNTU_INST_HTTP_DIRECTORY} ${UBUNTU_INST_RELEASE}-updates multiverse deb-src http://${UBUNTU_INST_HTTP_HOSTNAME}${UBUNTU_INST_HTTP_DIRECTORY} ${UBUNTU_INST_RELEASE}-updates multiverse deb http://${UBUNTU_INST_HTTP_HOSTNAME}${UBUNTU_INST_HTTP_DIRECTORY} ${UBUNTU_INST_RELEASE}-backports main restricted universe multiverse deb-src http://${UBUNTU_INST_HTTP_HOSTNAME}${UBUNTU_INST_HTTP_DIRECTORY} ${UBUNTU_INST_RELEASE}-backports main restricted universe multiverse deb http://security.ubuntu.com/ubuntu ${UBUNTU_INST_RELEASE}-security main restricted deb-src http://security.ubuntu.com/ubuntu ${UBUNTU_INST_RELEASE}-security main restricted deb http://security.ubuntu.com/ubuntu ${UBUNTU_INST_RELEASE}-security universe deb-src http://security.ubuntu.com/ubuntu ${UBUNTU_INST_RELEASE}-security universe deb http://security.ubuntu.com/ubuntu ${UBUNTU_INST_RELEASE}-security multiverse deb-src http://security.ubuntu.com/ubuntu ${UBUNTU_INST_RELEASE}-security multiverse EOF rm -f $STAGING_DIR/etc/apt/apt.conf if [ -n "$UBUNTU_INST_HTTP_PROXY" ]; then cat > $STAGING_DIR/etc/apt/apt.conf << EOF Acquire::http::Proxy "$UBUNTU_INST_HTTP_PROXY"; EOF fi os-xenapi-0.3.1/tools/install/scripts/uninstall-os-vpx.sh0000775000175000017500000000464013160424533024650 0ustar jenkinsjenkins00000000000000#!/bin/bash # # Copyright (c) 2011 Citrix Systems, Inc. # Copyright 2011 OpenStack Foundation # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # set -ex THIS_DIR=$(cd $(dirname "$0") && pwd) COMM_DIR="$THIS_DIR/../common" # xapi functions . $COMM_DIR/functions # By default, don't remove the templates REMOVE_TEMPLATES=${REMOVE_TEMPLATES:-"false"} if [ "$1" = "--remove-templates" ]; then REMOVE_TEMPLATES=true fi xe_min() { local cmd="$1" shift xe "$cmd" --minimal "$@" } destroy_vdi() { local vbd_uuid="$1" local type type=$(xe_min vbd-list uuid=$vbd_uuid params=type) local dev dev=$(xe_min vbd-list uuid=$vbd_uuid params=userdevice) local vdi_uuid vdi_uuid=$(xe_min vbd-list uuid=$vbd_uuid params=vdi-uuid) if [ "$type" == 'Disk' ] && [ "$dev" != 'xvda' ] && [ "$dev" != '0' ]; then xe vdi-destroy uuid=$vdi_uuid fi } uninstall() { local vm_uuid="$1" local power_state power_state=$(xe_min vm-list uuid=$vm_uuid params=power-state) if [ "$power_state" != "halted" ]; then xe vm-shutdown vm=$vm_uuid force=true fi for v in $(xe_min vbd-list vm-uuid=$vm_uuid | sed -e 's/,/ /g'); do destroy_vdi "$v" done xe vm-uninstall vm=$vm_uuid force=true >/dev/null } uninstall_template() { local vm_uuid="$1" for v in $(xe_min vbd-list vm-uuid=$vm_uuid | sed -e 's/,/ /g'); do destroy_vdi "$v" done xe template-uninstall template-uuid=$vm_uuid force=true >/dev/null } host=$(get_current_host_uuid) # remove the VMs and their disks on this host for u in $(xe_min vm-list resident-on=$host other-config:os-vpx=true | sed -e 's/,/ /g'); do uninstall "$u" done # remove the templates if [ "$REMOVE_TEMPLATES" == 
"true" ]; then for u in $(xe_min template-list possible-hosts=$host other-config:os-vpx=true | sed -e 's/,/ /g'); do uninstall_template "$u" done fi os-xenapi-0.3.1/tools/install/scripts/ubuntu_latecommand.sh0000775000175000017500000000070013160424533025264 0ustar jenkinsjenkins00000000000000#!/bin/bash set -eux # Need to set barrier=0 to avoid a Xen bug # https://bugs.launchpad.net/ubuntu/+source/linux/+bug/824089 sed -i -e 's/errors=/barrier=0,errors=/' /etc/fstab # Allow root to login with a password sed -i -e 's/.*PermitRootLogin.*/PermitRootLogin yes/g' /etc/ssh/sshd_config # Install the XenServer tools so IP addresses are reported wget --no-proxy @XS_TOOLS_URL@ -O /root/tools.deb dpkg -i /root/tools.deb rm /root/tools.deb os-xenapi-0.3.1/tools/install/scripts/prepare_guest.sh0000775000175000017500000000605313160424533024252 0ustar jenkinsjenkins00000000000000#!/bin/bash # This script is run on an Ubuntu VM. # This script is inserted into the VM by prepare_guest_template.sh # and is run when that VM boots. 
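# The guest preparation below sets up users with a common pattern: a per-user
# passwordless-sudo drop-in created with strict permissions. A self-contained
# sketch of that pattern (hypothetical: it writes to a temp directory instead
# of /etc/sudoers.d, and the username is illustrative):

```shell
#!/bin/bash
# Hypothetical sketch of the passwordless-sudo drop-in that
# prepare_guest.sh creates for the stack and domzero users.
set -o errexit

username=stack
sudoers_dir=$(mktemp -d)          # stands in for /etc/sudoers.d
sudoers_file="$sudoers_dir/allow_$username"

cat > "$sudoers_file" << EOF
$username ALL = NOPASSWD: ALL
EOF
# sudo refuses drop-ins that are group/world writable; 0440 is required
chmod 0440 "$sudoers_file"

perms=$(stat -c '%a' "$sudoers_file")
rm -rf "$sudoers_dir"
```

The 0440 mode matters: sudo ignores (or errors on) files under /etc/sudoers.d with looser permissions.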
# It customizes a fresh Ubuntu install so it is ready # to run stack.sh, creating the user called "stack", # and shuts down the VM to signal that the script has completed set -o errexit set -o nounset set -o xtrace # Configurable nuggets GUEST_PASSWORD="$1" STACK_USER="$2" DOMZERO_USER="$3" function setup_domzero_user { local username username="$1" local key_updater_script local sudoers_file key_updater_script="/home/$username/update_authorized_keys.sh" sudoers_file="/etc/sudoers.d/allow_$username" # Create user adduser --disabled-password --quiet "$username" --gecos "$username" # Give passwordless sudo cat > $sudoers_file << EOF $username ALL = NOPASSWD: ALL EOF chmod 0440 $sudoers_file # A script to populate this user's authorized_keys from xenstore cat > $key_updater_script << EOF #!/bin/bash set -eux DOMID=\$(sudo xenstore-read domid) sudo xenstore-exists /local/domain/\$DOMID/authorized_keys/$username sudo xenstore-read /local/domain/\$DOMID/authorized_keys/$username > /home/$username/xenstore_value cat /home/$username/xenstore_value > /home/$username/.ssh/authorized_keys EOF # Give the key updater to the user chown $username:$username $key_updater_script chmod 0700 $key_updater_script # Setup the .ssh folder mkdir -p /home/$username/.ssh chown $username:$username /home/$username/.ssh chmod 0700 /home/$username/.ssh touch /home/$username/.ssh/authorized_keys chown $username:$username /home/$username/.ssh/authorized_keys chmod 0600 /home/$username/.ssh/authorized_keys # Setup the key updater as a cron job crontab -u $username - << EOF * * * * * $key_updater_script EOF } # Make a small cracklib dictionary, so that passwd still works, but we don't # have the big dictionary. mkdir -p /usr/share/cracklib echo a | cracklib-packer # Make /etc/shadow, and set the root password pwconv echo "root:$GUEST_PASSWORD" | chpasswd # Put the VPX into UTC.
rm -f /etc/localtime # Add stack user groupadd libvirtd useradd $STACK_USER -s /bin/bash -d /opt/stack -G libvirtd echo $STACK_USER:$GUEST_PASSWORD | chpasswd echo "$STACK_USER ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers setup_domzero_user "$DOMZERO_USER" # Add an udev rule, so that new block devices could be written by stack user cat > /etc/udev/rules.d/50-openstack-blockdev.rules << EOF KERNEL=="xvd[b-z]", GROUP="$STACK_USER", MODE="0660" EOF # Give ownership of /opt/stack to stack user chown -R $STACK_USER /opt/stack function setup_vimrc { if [ ! -e $1 ]; then # Simple but usable vimrc cat > $1 < /dev/null || true mapping=$(kpartx -av "/dev/$dev" | sed -ne 's,^add map \([a-z0-9\-]*\).*$,\1,p' | sed -ne "s,^\(.*${part}\)\$,\1,p") if [ -z "$mapping" ]; then echo "Failed to find mapping" exit -1 fi local device="/dev/mapper/${mapping}" for (( i = 0; i < 5; i++ )) ; do if [ -b $device ] ; then echo $device return fi sleep 1 done echo "ERROR: timed out waiting for dev-mapper" exit 1 else echo "/dev/$dev$part" fi } function clean_dev_mappings() { dev=$(xe_min vbd-list params=device uuid="$vbd_uuid") if [[ "$dev" =~ "sm/" || "$dev" =~ "blktap-2/" ]]; then kpartx -dv "/dev/$dev" fi } function open_vdi() { vbd_uuid=$(xe vbd-create vm-uuid="$dom0_uuid" vdi-uuid="$vdi_uuid" \ device=autodetect) mp=$(mktemp -d) xe vbd-plug uuid="$vbd_uuid" run_udev_settle mount_device=$(get_mount_device "$vbd_uuid") mount "$mount_device" "$mp" echo "Your vdi is mounted at $mp" } function close_vdi() { vbd_uuid=$(xe_min vbd-list vm-uuid="$dom0_uuid" vdi-uuid="$vdi_uuid") mount_device=$(get_mount_device "$vbd_uuid") run_udev_settle umount "$mount_device" clean_dev_mappings xe vbd-unplug uuid=$vbd_uuid xe vbd-destroy uuid=$vbd_uuid } if [ "$action" == "open" ]; then open_vdi elif [ "$action" == "close" ]; then close_vdi fi os-xenapi-0.3.1/tools/install/scripts/persist_domU_interfaces.sh0000775000175000017500000000465713160424533026275 0ustar jenkinsjenkins00000000000000#!/bin/bash # This script 
is run by config_devstack_domu_vm.sh # # It modifies the ubuntu image created by config_devstack_domu_vm.sh # and previously modified by prepare_guest_template.sh # # This script is responsible for: # - pushing in the DevStack code # It does this by mounting the disk image of the VM. # # The resultant image is then templated and started # by config_devstack_domu_vm.sh # Exit on errors set -o errexit # Echo commands set -o xtrace # This directory THIS_DIR=$(cd $(dirname "$0") && pwd) TOP_DIR="$THIS_DIR/../" SCRIPT_DIR="$TOP_DIR/scripts" COMM_DIR="$TOP_DIR/common" CONF_DIR="$TOP_DIR/conf" # Include onexit commands . $SCRIPT_DIR/on_exit.sh # xapi functions . $COMM_DIR/functions # Source params source $CONF_DIR/xenrc # # Parameters # GUEST_NAME="$1" function _print_interface_config { local device_nr local ip_address local netmask device_nr="$1" ip_address="$2" netmask="$3" local device device="eth${device_nr}" echo "auto $device" if [ "$ip_address" = "dhcp" ]; then echo "iface $device inet dhcp" else echo "iface $device inet static" echo " address $ip_address" echo " netmask $netmask" fi # Turn off tx checksumming for better performance echo " post-up ethtool -K $device tx off" } function print_interfaces_config { echo "auto lo" echo "iface lo inet loopback" _print_interface_config $PUB_DEV_NR $PUB_IP $PUB_NETMASK _print_interface_config $VM_DEV_NR $VM_IP $VM_NETMASK _print_interface_config $MGT_DEV_NR $MGT_IP $MGT_NETMASK } # # Mount the VDI # STAGING_DIR=$($TOP_DIR/scripts/manage-vdi open $GUEST_NAME 0 1 | grep -o "/tmp/tmp.[[:alnum:]]*") add_on_exit "$TOP_DIR/scripts/manage-vdi close $GUEST_NAME 0 1" # Make sure we have a stage if [ ! -d $STAGING_DIR/etc ]; then echo "Stage is not properly set up!"
exit 1 fi # Only DHCP is supported for now - handling resolv.conf across different versions of Ubuntu is not supported if [ "$MGT_IP" != "dhcp" ] && [ "$PUB_IP" != "dhcp" ]; then echo "Configuration without DHCP not supported" exit 1 fi # Configure the hostname echo $GUEST_NAME > $STAGING_DIR/etc/hostname # Hostname must resolve for rabbit HOSTS_FILE_IP=$PUB_IP if [ $MGT_IP != "dhcp" ]; then HOSTS_FILE_IP=$MGT_IP fi cat <<EOF >$STAGING_DIR/etc/hosts $HOSTS_FILE_IP $GUEST_NAME 127.0.0.1 localhost localhost.localdomain EOF # Configure the network print_interfaces_config > $STAGING_DIR/etc/network/interfaces os-xenapi-0.3.1/tools/install/scripts/on_exit.sh0000775000175000017500000000054313160424533023050 0ustar jenkinsjenkins00000000000000#!/bin/bash set -e set -o xtrace if [ -z "${on_exit_hooks:-}" ]; then on_exit_hooks=() fi on_exit() { for i in $(seq $((${#on_exit_hooks[*]} - 1)) -1 0); do eval "${on_exit_hooks[$i]}" done } add_on_exit() { local n=${#on_exit_hooks[*]} on_exit_hooks[$n]="$*" if [[ $n -eq 0 ]]; then trap on_exit EXIT fi } os-xenapi-0.3.1/tools/install/scripts/install_ubuntu_template.sh0000775000175000017500000000555013160424533026351 0ustar jenkinsjenkins00000000000000#!/bin/bash # # This creates an Ubuntu Server 32bit or 64bit template # on Xenserver 5.6.x, 6.0.x and 6.1.x # The template does a net install only # # Based on a script by: David Markey # set -o errexit set -o nounset set -o xtrace # This directory THIS_DIR=$(cd $(dirname "$0") && pwd) SCRIPT_DIR="$THIS_DIR/../scripts" COMM_DIR="$THIS_DIR/../common" CONF_DIR="$THIS_DIR/../conf" # xapi functions .
$COMM_DIR/functions # For default settings see xenrc source $CONF_DIR/xenrc # Get the params preseed_url=$1 # Delete template or skip template creation as required host=$(get_current_host_uuid) previous_template=$(get_template "$UBUNTU_INST_TEMPLATE_NAME" $host) if [ -n "$previous_template" ]; then if $CLEAN_TEMPLATES; then clean_template_other_conf $previous_template uninstall_template $previous_template else echo "Template $UBUNTU_INST_TEMPLATE_NAME already present" exit 0 fi fi # Get built-in template builtin_name="Debian Squeeze 6.0 (32-bit)" builtin_uuid=$(get_template "$builtin_name" $host) if [[ -z $builtin_uuid ]]; then echo "Can't find the Debian Squeeze 32bit template on your XenServer." exit 1 fi # Clone built-in template to create new template new_uuid=$(xe vm-clone uuid=$builtin_uuid \ new-name-label="$UBUNTU_INST_TEMPLATE_NAME") disk_size=$(($VM_VDI_GB * 1024 * 1024 * 1024)) # Some of these settings can be found in example preseed files # however these need to be answered before the netinstall # is ready to fetch the preseed file, and as such must be here # to get a fully automated install pvargs="quiet console=hvc0 partman/default_filesystem=ext3 \ console-setup/ask_detect=false locale=${UBUNTU_INST_LOCALE} \ keyboard-configuration/layoutcode=${UBUNTU_INST_KEYBOARD} \ netcfg/choose_interface=eth0 \ netcfg/get_hostname=os netcfg/get_domain=os auto \ url=${preseed_url}" if [ "$UBUNTU_INST_IP" != "dhcp" ]; then netcfgargs="netcfg/disable_autoconfig=true \ netcfg/get_nameservers=${UBUNTU_INST_NAMESERVERS} \ netcfg/get_ipaddress=${UBUNTU_INST_IP} \ netcfg/get_netmask=${UBUNTU_INST_NETMASK} \ netcfg/get_gateway=${UBUNTU_INST_GATEWAY} \ netcfg/confirm_static=true" pvargs="${pvargs} ${netcfgargs}" fi xe template-param-set uuid=$new_uuid \ other-config:install-methods=http \ other-config:install-repository="http://${UBUNTU_INST_HTTP_HOSTNAME}${UBUNTU_INST_HTTP_DIRECTORY}" \ PV-args="$pvargs" \ other-config:debian-release="$UBUNTU_INST_RELEASE" \
other-config:default_template=true \ other-config:disks='' \ other-config:install-arch="$UBUNTU_INST_ARCH" if ! [ -z "$UBUNTU_INST_HTTP_PROXY" ]; then xe template-param-set uuid=$new_uuid \ other-config:install-proxy="$UBUNTU_INST_HTTP_PROXY" fi echo "Ubuntu template installed uuid:$new_uuid" os-xenapi-0.3.1/tools/install/scripts/install-os-vpx.sh0000775000175000017500000000542613160424533024310 0ustar jenkinsjenkins00000000000000#!/bin/bash # # Copyright (c) 2011 Citrix Systems, Inc. # Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # set -eux BRIDGE= NAME_LABEL= TEMPLATE_NAME= usage() { cat << EOF Usage: $0 -t TEMPLATE_NW_INSTALL -l NAME_LABEL [-n BRIDGE] Install a VM from a template OPTIONS: -h Shows this message. -t template VM template to use -l name Specifies the name label for the VM. -n bridge The bridge/network to use for eth0. Defaults to xenbr0 EOF } get_params() { while getopts "hbn:r:l:t:" OPTION; do case $OPTION in h) usage exit 1 ;; n) BRIDGE=$OPTARG ;; l) NAME_LABEL=$OPTARG ;; t) TEMPLATE_NAME=$OPTARG ;; ?) 
usage exit ;; esac done if [[ -z $BRIDGE ]]; then BRIDGE=xenbr0 fi if [[ -z $TEMPLATE_NAME ]]; then echo "Please specify a template name" >&2 exit 1 fi if [[ -z $NAME_LABEL ]]; then echo "Please specify a name-label for the new VM" >&2 exit 1 fi } xe_min() { local cmd="$1" shift xe "$cmd" --minimal "$@" } find_network() { result=$(xe_min network-list bridge="$1") if [ "$result" = "" ]; then result=$(xe_min network-list name-label="$1") fi echo "$result" } create_vif() { local v="$1" echo "Installing VM interface on [$BRIDGE]" local out_network_uuid out_network_uuid=$(find_network "$BRIDGE") xe vif-create vm-uuid="$v" network-uuid="$out_network_uuid" device="0" } # Make the VM auto-start on server boot. set_auto_start() { local v="$1" xe vm-param-set uuid="$v" other-config:auto_poweron=true } destroy_vifs() { local v="$1" IFS=, for vif in $(xe_min vif-list vm-uuid="$v"); do xe vif-destroy uuid="$vif" done unset IFS } get_params "$@" vm_uuid=$(xe_min vm-install template="$TEMPLATE_NAME" new-name-label="$NAME_LABEL") destroy_vifs "$vm_uuid" set_auto_start "$vm_uuid" create_vif "$vm_uuid" xe vm-param-set actions-after-reboot=Destroy uuid="$vm_uuid" os-xenapi-0.3.1/tools/install_on_xen_host.sh0000775000175000017500000001601013160424533022313 0ustar jenkinsjenkins00000000000000#!/bin/bash set -o errexit set -o nounset set -o xtrace export LC_ALL=C # This directory THIS_DIR=$(cd $(dirname "$0") && pwd) INSTALL_DIR="$THIS_DIR/install" COMM_DIR="$INSTALL_DIR/common" CONF_DIR="$INSTALL_DIR/conf" DEV_STACK_DIR="$INSTALL_DIR/devstack" DISABLE_JOURNALING="false" . $COMM_DIR/functions # Source params source $CONF_DIR/xenrc function print_usage_and_die { cat >&2 << EOF usage: $0 A simple script to use devstack to setup an OpenStack. This script should be executed on a xenserver host. optional arguments: -d DEVSTACK_SRC An URL pointing to a tar.gz snapshot of devstack. This defaults to the official devstack repository. Can also be a local file location. 
-l LOG_FILE_DIRECTORY
            The directory in which to store the devstack logs on failure.
 -w WAIT_TILL_LAUNCH
            Set to 1 to block until the installation finishes.
 -r DISABLE_JOURNALING
            Disable journaling if this flag is set. It reduces disk IO, but
            may leave the file system in an unstable state after long use.

flags:
 -f         Force SR replacement. If your XenServer has an LVM type SR,
            it will be destroyed and replaced with an ext SR.
            WARNING: This will destroy your actual default SR !

An example run:

  # Install devstack
  $0 mypassword

$@
EOF
exit 1
}

# Defaults for optional arguments
DEVSTACK_SRC=${DEVSTACK_SRC:-"https://github.com/openstack-dev/devstack"}
LOGDIR="/opt/stack/devstack_logs"
WAIT_TILL_LAUNCH=1
FORCE_SR_REPLACEMENT="false"

# Number of options passed to this script
REMAINING_OPTIONS="$#"

# Get optional parameters
set +e
while getopts ":d:frl:w:" flag; do
    REMAINING_OPTIONS=$(expr "$REMAINING_OPTIONS" - 1)
    case "$flag" in
        d)
            DEVSTACK_SRC="$OPTARG"
            REMAINING_OPTIONS=$(expr "$REMAINING_OPTIONS" - 1)
            ;;
        f)
            FORCE_SR_REPLACEMENT="true"
            ;;
        l)
            LOGDIR="$OPTARG"
            REMAINING_OPTIONS=$(expr "$REMAINING_OPTIONS" - 1)
            ;;
        w)
            WAIT_TILL_LAUNCH="$OPTARG"
            REMAINING_OPTIONS=$(expr "$REMAINING_OPTIONS" - 1)
            ;;
        r)
            DISABLE_JOURNALING="true"
            ;;
        \?)
            print_usage_and_die "Invalid option -$OPTARG"
            exit 1
            ;;
    esac
done
set -e

# Make sure that all options were processed
if [ "0" != "$REMAINING_OPTIONS" ]; then
    print_usage_and_die "ERROR: some arguments were not recognised!"
fi

##
# begin install devstack process
##

# Verify the host is suitable for devstack
echo -n "Verify XenServer has an ext type default SR..."
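The getopts loop above keeps a running count in REMAINING_OPTIONS: it subtracts one for every flag seen and one more for every flag that consumes an OPTARG, so a non-zero remainder after parsing means some argument was never recognised. A minimal self-contained sketch of the same accounting (the flag names and messages are illustrative, not this script's real option set; arithmetic expansion stands in for `expr`):

```shell
#!/bin/bash
# Sketch of the REMAINING_OPTIONS accounting pattern (illustrative flags).
parse_check() {
    local OPTIND=1          # reset so the function can be called repeatedly
    local remaining="$#"
    local force="false" logdir=""
    while getopts ":fl:" flag; do
        remaining=$((remaining - 1))        # one for the flag itself
        case "$flag" in
            f)
                force="true"
                ;;
            l)
                logdir="$OPTARG"
                remaining=$((remaining - 1))  # one more for its argument
                ;;
            \?)
                echo "invalid option"
                return 1
                ;;
        esac
    done
    if [ "$remaining" = "0" ]; then
        echo "ok force=$force logdir=$logdir"
    else
        echo "unrecognised arguments: $remaining"
    fi
}

parse_check -f -l /tmp/logs         # -> ok force=true logdir=/tmp/logs
parse_check -f -l /tmp/logs stray   # -> unrecognised arguments: 1
```

Note the real script wraps the loop in `set +e` / `set -e` because `expr` exits non-zero when its result is 0; the arithmetic form above avoids that pitfall.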
defaultSR=$(xe pool-list params=default-SR minimal=true) currentSrType=$(xe sr-param-get uuid=$defaultSR param-name=type) if [ "$currentSrType" != "ext" -a "$currentSrType" != "nfs" -a "$currentSrType" != "ffs" -a "$currentSrType" != "file" ]; then if [ "true" == "$FORCE_SR_REPLACEMENT" ]; then echo "" echo "" echo "Trying to replace the default SR with an EXT SR" pbd_uuid=`xe pbd-list sr-uuid=$defaultSR minimal=true` host_uuid=`xe pbd-param-get uuid=$pbd_uuid param-name=host-uuid` use_device=`xe pbd-param-get uuid=$pbd_uuid param-name=device-config param-key=device` # Destroy the existing SR xe pbd-unplug uuid=$pbd_uuid xe sr-destroy uuid=$defaultSR sr_uuid=`xe sr-create content-type=user host-uuid=$host_uuid type=ext device-config:device=$use_device shared=false name-label="Local storage"` pool_uuid=`xe pool-list minimal=true` xe pool-param-set default-SR=$sr_uuid uuid=$pool_uuid xe pool-param-set suspend-image-SR=$sr_uuid uuid=$pool_uuid xe sr-param-add uuid=$sr_uuid param-name=other-config i18n-key=local-storage exit 0 fi echo "" echo "" echo "ERROR: The xenserver host must have an EXT3/NFS/FFS/File SR as the default SR" echo "Use the -f flag to destroy the current default SR and create a new" echo "ext type default SR." echo "" echo "WARNING: This will destroy your actual default SR !" 
echo "" exit 1 fi # create template if needed $INSTALL_DIR/create_ubuntu_template.sh if [ -n "${EXIT_AFTER_JEOS_INSTALLATION:-}" ]; then echo "User requested to quit after JEOS installation" exit 0 fi # install DevStack on the VM OPTARGS="" if [ $DISABLE_JOURNALING = 'true' ]; then OPTARGS="$OPTARGS -r" fi $DEV_STACK_DIR/install_devstack.sh -d $DEVSTACK_SRC -l $LOGDIR $OPTARGS #start openstack domU VM xe vm-start vm="$DEV_STACK_DOMU_NAME" on=$(get_current_host_uuid) # If we have copied our ssh credentials, use ssh to monitor while the installation runs function ssh_no_check { ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no "$@" } # Get hold of the Management IP of OpenStack VM OS_VM_MANAGEMENT_ADDRESS=$MGT_IP if [ $OS_VM_MANAGEMENT_ADDRESS == "dhcp" ]; then OS_VM_MANAGEMENT_ADDRESS=$(find_ip_by_name $DEV_STACK_DOMU_NAME $MGT_DEV_NR) fi if [ "$WAIT_TILL_LAUNCH" = "1" ] && [ -e ~/.ssh/id_rsa.pub ]; then set +x echo "VM Launched - Waiting for run.sh" while ! ssh_no_check -q stack@$OS_VM_MANAGEMENT_ADDRESS "test -e /opt/stack/run_sh.pid"; do echo "VM Launched - Waiting for run.sh" sleep 10 done echo -n "devstack service is running, waiting for stack.sh to start logging..." pid=`ssh_no_check -q stack@$OS_VM_MANAGEMENT_ADDRESS "cat /opt/stack/run_sh.pid"` if [ -n "$LOGDIR" ]; then while ! ssh_no_check -q stack@$OS_VM_MANAGEMENT_ADDRESS "test -e ${LOGDIR}/stack.log"; do echo -n "..." sleep 10 done ssh_no_check -q stack@$OS_VM_MANAGEMENT_ADDRESS "tail --pid $pid -n +1 -f ${LOGDIR}/stack.log" else echo -n "LOGDIR not set; just waiting for process $pid to finish" ssh_no_check -q stack@$OS_VM_MANAGEMENT_ADDRESS "wait $pid" fi # Fail if devstack did not succeed ssh_no_check -q stack@$OS_VM_MANAGEMENT_ADDRESS 'test -e /opt/stack/runsh.succeeded' echo "################################################################################" echo "" echo "All Finished!" 
echo "You can visit the OpenStack Dashboard" echo "at http://$OS_VM_MANAGEMENT_ADDRESS, and contact other services at the usual ports." else set +x echo "################################################################################" echo "" echo "All Finished!" echo "Now, you can monitor the progress of the stack.sh installation by " echo "looking at the console of your domU / checking the log files." echo "" echo "ssh into your domU now: 'ssh stack@$OS_VM_MANAGEMENT_ADDRESS' using your password" echo "and then do: 'sudo systemctl status devstack' to check if devstack is still running." echo "Check that /opt/stack/runsh.succeeded exists" echo "" echo "When devstack completes, you can visit the OpenStack Dashboard" echo "at http://$OS_VM_MANAGEMENT_ADDRESS, and contact other services at the usual ports." fi os-xenapi-0.3.1/tools/tox_install.sh0000775000175000017500000000203613160424533020605 0ustar jenkinsjenkins00000000000000#!/usr/bin/env bash # Client constraint file contains this client version pin that is in conflict # with installing the client from source. We should remove the version pin in # the constraints file before applying it for from-source installation. CONSTRAINTS_FILE="$1" shift 1 set -e # NOTE(tonyb): Place this in the tox enviroment's log dir so it will get # published to logs.openstack.org for easy debugging. localfile="$VIRTUAL_ENV/log/upper-constraints.txt" if [[ "$CONSTRAINTS_FILE" != http* ]]; then CONSTRAINTS_FILE="file://$CONSTRAINTS_FILE" fi # NOTE(tonyb): need to add curl to bindep.txt if the project supports bindep curl "$CONSTRAINTS_FILE" --insecure --progress-bar --output "$localfile" pip install -c"$localfile" openstack-requirements # This is the main purpose of the script: Allow local installation of # the current repo. It is listed in constraints file and thus any # install will be constrained and we need to unconstrain it. edit-constraints "$localfile" -- "$CLIENT_NAME" pip install -c"$localfile" -U "$@" exit $? 
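The key step in tox_install.sh above is removing the project's own version pin from the downloaded upper-constraints file, so that a subsequent `pip install -c` can be satisfied by the local source tree instead of the pinned release. A sketch of that unconstraining step, with `sed` as a simplified stand-in for the real `edit-constraints` tool and made-up pin contents:

```shell
#!/bin/bash
# Sketch of the "unconstrain the package under test" step.
# `sed` stands in for `edit-constraints`; the pins below are made up.
CLIENT_NAME="os-xenapi"
localfile=$(mktemp)
cat > "$localfile" << EOF
os-xenapi===0.3.0
requests===2.18.4
six===1.10.0
EOF

# Drop only the pin for the package being installed from source.
sed -i "/^${CLIENT_NAME}===/d" "$localfile"

cat "$localfile"
# -> requests===2.18.4
#    six===1.10.0
```

Every other pin stays in force, which is the point: the environment remains constrained except for the one package whose checked-out source is under test.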
os-xenapi-0.3.1/tools/install-devstack-xen.sh0000775000175000017500000004620213160424533022310 0ustar jenkinsjenkins00000000000000#!/bin/bash set -eu function print_usage_and_die { cat >&2 << EOF usage: $0 XENSERVER XENSERVER_PASS PRIVKEY A simple script to use devstack to setup an OpenStack, and optionally run tests on it. This script should be executed on an operator machine, and it will execute commands through ssh on the remote XenServer specified. You can use this script to install all-in-one or multihost OpenStack env. positional arguments: XENSERVER The address of the XenServer XENSERVER_PASS The root password for the XenServer PRIVKEY A passwordless private key to be used for installation. This key will be copied over to the xenserver host, and will be used for migration/resize tasks if multiple XenServers used. If '-' is passed, assume the key is provided by an agent optional arguments: -t TEST_TYPE Type of the tests to run. One of [none, exercise, smoke, full] defaults to none -d DEVSTACK_SRC It can be a local directory containing a local repository or an URL pointing to a remote repository. This defaults to the official devstack repository. -l LOG_FILE_DIRECTORY The directory in which to store the devstack logs on failure. -j JEOS_URL An URL for an xva containing an exported minimal OS template with the name jeos_template_for_ubuntu, to be used as a starting point. -e JEOS_FILENAME Save a JeOS xva to the given filename and quit. The exported file could be re-used later by putting it to a webserver, and specifying JEOS_URL. -s SUPP_PACK_URL URL to a supplemental pack that will be installed on the host before running any tests. The host will not be rebooted after installing the supplemental pack, so new kernels will not be picked up. -o OS_XENAPI_SRC It can be a local directory containing a local repository or an URL pointing to a remote repository. This defaults to the official os-xenapi repository. 
-w WAIT_TILL_LAUNCH
            Set to 1 to block until the installation finishes.
 -a NODE_TYPE
            OpenStack node type [all, compute]
 -m NODE_NAME
            DomU name for installing OpenStack
 -i CONTROLLER_IP
            IP address of the controller node; must be set when installing
            a compute node.

flags:
 -f         Force SR replacement. If your XenServer has an LVM type SR,
            it will be destroyed and replaced with an ext SR.
            WARNING: This will destroy your actual default SR !
 -n         No devstack, just create the JEOS template that could be
            exported to an xva using the -e option.
 -r         Disable journaling if this flag is set. It reduces disk IO, but
            may leave the file system in an unstable state after long use.

An example run:

  # Create a passwordless ssh key
  ssh-keygen -t rsa -N "" -f devstack_key.priv

  # Install devstack all-in-one (controller and compute node together)
  $0 XENSERVER mypassword devstack_key.priv
  or
  $0 XENSERVER mypassword devstack_key.priv -a all -m

  # Install devstack compute node
  $0 XENSERVER mypassword devstack_key.priv -a compute -m -i

$@
EOF
exit 1
}

# Defaults for optional arguments
DEVSTACK_SRC=${DEVSTACK_SRC:-"https://github.com/openstack-dev/devstack"}
OS_XENAPI_SRC=${OS_XENAPI_SRC:-"https://github.com/openstack/os-xenapi"}
TEST_TYPE="none"
FORCE_SR_REPLACEMENT="false"
EXIT_AFTER_JEOS_INSTALLATION=""
LOG_FILE_DIRECTORY=""
JEOS_URL=""
JEOS_FILENAME=""
SUPP_PACK_URL=""
LOGDIR="/opt/stack/devstack_logs"
WAIT_TILL_LAUNCH=1
JEOS_TEMP_NAME="jeos_template_for_ubuntu"
NODE_TYPE="all"
NODE_NAME=""
CONTROLLER_IP=""
DISABLE_JOURNALING="false"
DEFAULT_INSTALL_SRC="$(mktemp -d --suffix=install)"

# Get positional arguments
set +u
XENSERVER="$1"
shift || print_usage_and_die "ERROR: XENSERVER not specified!"
XENSERVER_PASS="$1"
shift || print_usage_and_die "ERROR: XENSERVER_PASS not specified!"
PRIVKEY="$1"
shift || print_usage_and_die "ERROR: PRIVKEY not specified!"
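The mandatory positional arguments above are consumed with a `VAR="$1"; shift || print_usage_and_die ...` chain under `set +u`: `shift` fails only when there is nothing left to shift, which is exactly the missing-argument case, so each failure maps to one specific missing parameter. A standalone sketch of the pattern (function and message names are illustrative):

```shell
#!/bin/bash
# Sketch of the mandatory positional-argument pattern (illustrative names).
demo_die() {
    echo "$1" >&2
    return 1
}

get_positional() {
    HOST="${1:-}"
    shift || { demo_die "ERROR: HOST not specified!"; return 1; }
    PASS="${1:-}"
    shift || { demo_die "ERROR: PASS not specified!"; return 1; }
    KEY="${1:-}"
    shift || { demo_die "ERROR: KEY not specified!"; return 1; }
    echo "$HOST $PASS $KEY"
}

get_positional xenserver1 secret devstack_key.priv
# -> xenserver1 secret devstack_key.priv
get_positional xenserver1 secret 2>/dev/null || echo "usage error"
# -> usage error
```

Note the failing `shift` is the one *after* reading the empty `"$1"`, which is why the error message names the parameter that was just assigned.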
set -u # Number of options passed to this script REMAINING_OPTIONS="$#" # Get optional parameters set +e while getopts ":t:d:fnrl:j:e:o:s:w:a:i:m:" flag; do REMAINING_OPTIONS=$(expr "$REMAINING_OPTIONS" - 1) case "$flag" in t) TEST_TYPE="$OPTARG" REMAINING_OPTIONS=$(expr "$REMAINING_OPTIONS" - 1) if ! [ "$TEST_TYPE" = "none" -o "$TEST_TYPE" = "smoke" -o "$TEST_TYPE" = "full" -o "$TEST_TYPE" = "exercise" ]; then print_usage_and_die "$TEST_TYPE - Invalid value for TEST_TYPE" fi ;; d) DEVSTACK_SRC="$OPTARG" REMAINING_OPTIONS=$(expr "$REMAINING_OPTIONS" - 1) ;; f) FORCE_SR_REPLACEMENT="true" ;; n) EXIT_AFTER_JEOS_INSTALLATION="true" ;; l) LOG_FILE_DIRECTORY="$OPTARG" REMAINING_OPTIONS=$(expr "$REMAINING_OPTIONS" - 1) ;; j) JEOS_URL="$OPTARG" REMAINING_OPTIONS=$(expr "$REMAINING_OPTIONS" - 1) ;; e) JEOS_FILENAME="$OPTARG" REMAINING_OPTIONS=$(expr "$REMAINING_OPTIONS" - 1) ;; s) SUPP_PACK_URL="$OPTARG" REMAINING_OPTIONS=$(expr "$REMAINING_OPTIONS" - 1) ;; o) OS_XENAPI_SRC="$OPTARG" REMAINING_OPTIONS=$(expr "$REMAINING_OPTIONS" - 1) ;; w) WAIT_TILL_LAUNCH="$OPTARG" REMAINING_OPTIONS=$(expr "$REMAINING_OPTIONS" - 1) ;; a) NODE_TYPE="$OPTARG" REMAINING_OPTIONS=$(expr "$REMAINING_OPTIONS" - 1) if [ $NODE_TYPE != "all" ] && [ $NODE_TYPE != "compute" ]; then print_usage_and_die "$NODE_TYPE - Invalid value for NODE_TYPE" fi ;; i) CONTROLLER_IP="$OPTARG" REMAINING_OPTIONS=$(expr "$REMAINING_OPTIONS" - 1) ;; m) NODE_NAME="$OPTARG" REMAINING_OPTIONS=$(expr "$REMAINING_OPTIONS" - 1) ;; r) DISABLE_JOURNALING="true" ;; \?) 
print_usage_and_die "Invalid option -$OPTARG"
            ;;
    esac
done
set -e

if [ "$TEST_TYPE" != "none" ] && [ $WAIT_TILL_LAUNCH -ne 1 ]; then
    echo "WARNING: Tests cannot run before the installation is done; forcing WAIT_TILL_LAUNCH to 1"
    WAIT_TILL_LAUNCH=1
fi

if [ "$TEST_TYPE" != "none" ] && [ "$EXIT_AFTER_JEOS_INSTALLATION" = "true" ]; then
    print_usage_and_die "ERROR: You can't perform a test without a devstack environment, exit"
fi

# Make sure that all options were processed
if [ "0" != "$REMAINING_OPTIONS" ]; then
    print_usage_and_die "ERROR: some arguments were not recognised!"
fi

# Give DomU a default name when installing all-in-one
if [[ "$NODE_TYPE" = "all" && "$NODE_NAME" = "" ]]; then
    NODE_NAME="DevStackOSDomU"
fi

# Check CONTROLLER_IP is set when installing a compute node
if [ "$NODE_TYPE" = "compute" ]; then
    if [[ "$CONTROLLER_IP" = "" || "$NODE_NAME" = "" ]]; then
        print_usage_and_die "ERROR: CONTROLLER_IP or NODE_NAME not specified when installing compute node!"
    fi
    if [ "$TEST_TYPE" != "none" ]; then
        print_usage_and_die "ERROR: Cannot do test on compute node!"
    fi
fi

# Set up internal variables
_SSH_OPTIONS="\
 -o BatchMode=yes \
 -o StrictHostKeyChecking=no \
 -o UserKnownHostsFile=/dev/null"
if [ "$PRIVKEY" != "-" ]; then
    _SSH_OPTIONS="$_SSH_OPTIONS -i $PRIVKEY"
fi

# Print out summary
cat << EOF
XENSERVER:            $XENSERVER
XENSERVER_PASS:       $XENSERVER_PASS
PRIVKEY:              $PRIVKEY
TEST_TYPE:            $TEST_TYPE
NODE_TYPE:            $NODE_TYPE
NODE_NAME:            $NODE_NAME
CONTROLLER_IP:        $CONTROLLER_IP
DEVSTACK_SRC:         $DEVSTACK_SRC
OS_XENAPI_SRC:        $OS_XENAPI_SRC
FORCE_SR_REPLACEMENT: $FORCE_SR_REPLACEMENT
JEOS_URL:             ${JEOS_URL:-template will not be imported}
JEOS_FILENAME:        ${JEOS_FILENAME:-not exporting JeOS}
SUPP_PACK_URL:        ${SUPP_PACK_URL:-no supplemental pack}
EOF

# Helper function
function on_xenserver() {
    ssh $_SSH_OPTIONS "root@$XENSERVER" bash -s --
}

function assert_tool_exists() {
    local tool_name
    tool_name="$1"
    if !
which "$tool_name" >/dev/null; then echo "ERROR: $tool_name is required for this script, please install it on your system! " >&2 exit 1 fi } if [ "$PRIVKEY" != "-" ]; then echo "Setup ssh keys on XenServer..." tmp_dir="$(mktemp -d --suffix=OpenStack)" echo "Use $tmp_dir for public/private keys..." cp $PRIVKEY "$tmp_dir/devstack" ssh-keygen -y -f $PRIVKEY > "$tmp_dir/devstack.pub" assert_tool_exists sshpass echo "Setup public key to XenServer..." DEVSTACK_PUB=$(cat $tmp_dir/devstack.pub) sshpass -p "$XENSERVER_PASS" \ ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \ root@$XENSERVER "echo $DEVSTACK_PUB >> ~/.ssh/authorized_keys" scp $_SSH_OPTIONS $PRIVKEY "root@$XENSERVER:.ssh/id_rsa" scp $_SSH_OPTIONS $tmp_dir/devstack.pub "root@$XENSERVER:.ssh/id_rsa.pub" rm -rf "$tmp_dir" unset tmp_dir echo "OK" fi DEFAULT_SR_ID=$(on_xenserver < \ /root/artifacts/domU.tgz < /dev/null || true fi tar --ignore-failed-read -czf /root/artifacts/dom0.tgz /var/log/messages* /var/log/xensource* /var/log/SM* || true END_OF_XENSERVER_COMMANDS mkdir -p $LOG_FILE_DIRECTORY scp $_SSH_OPTIONS $XENSERVER:artifacts/* $LOG_FILE_DIRECTORY tar -xzf $LOG_FILE_DIRECTORY/domU.tgz opt/stack/tempest/tempest-full.xml -O \ > $LOG_FILE_DIRECTORY/tempest-full.xml || true fi } echo -n "Generate id_rsa.pub..." echo "ssh-keygen -y -f .ssh/id_rsa > .ssh/id_rsa.pub" | on_xenserver echo "OK" echo -n "Verify that XenServer can log in to itself..." if echo "ssh -o StrictHostKeyChecking=no $XENSERVER true" | on_xenserver; then echo "OK" else echo "" echo "" echo "ERROR: XenServer couldn't authenticate to itself. This might" echo "be caused by having a key originally installed on XenServer" echo "consider using the -w parameter to wipe all your ssh settings" echo "on XenServer." 
exit 1 fi echo "OK" if [ -n "$SUPP_PACK_URL" ]; then echo -n "Applying supplemental pack" on_xenserver < /dev/null done fi rm -f $TMP_TEMPLATE_DIR/jeos-for-devstack.xva echo " downloading $JEOS_URL to $TMP_TEMPLATE_DIR/jeos-for-devstack.xva" wget -qO $TMP_TEMPLATE_DIR/jeos-for-devstack.xva "$JEOS_URL" echo " importing $TMP_TEMPLATE_DIR/jeos-for-devstack.xva" xe vm-import filename=$TMP_TEMPLATE_DIR/jeos-for-devstack.xva rm -rf $TMP_TEMPLATE_DIR echo " verify template imported" JEOS_TEMPLATE="\$(. "$COMM_DIR/functions" && get_template $JEOS_TEMP_NAME $(get_current_host_uuid)) if [ -z "\$JEOS_TEMPLATE" ]; then echo "FATAL: template $JEOS_TEMP_NAME does not exist after import." exit 1 fi END_OF_JEOS_IMPORT echo "OK" fi # Got install repositories. # If input repository is an URL, for os-xenapi, it is only needed on xenserver, # so we will download it and move it to xenserver when needed; for devstack, it # is needed on DomU, so we configure a service on DomU and download it after # DomU first bootup. if [ -d $DEVSTACK_SRC ]; then # Local repository for devstack exist, copy it to default directory for # unified treatment cp -rf $DEVSTACK_SRC $DEFAULT_INSTALL_SRC DEVSTACK_SRC=$DEFAULT_INSTALL_SRC/devstack fi if [ ! -d $OS_XENAPI_SRC ]; then # Local repository for os-xenapi does not exist, OS_XENAPI_SRC must be a git # URL. 
Download it to default directory git clone $OS_XENAPI_SRC $DEFAULT_INSTALL_SRC/os-xenapi else # Local repository for os-xenapi exists, copy it to default directory # unified treatment cp -rf $OS_XENAPI_SRC $DEFAULT_INSTALL_SRC fi TMPDIR=$(echo "mktemp -d" | on_xenserver) set +u DOM0_OPT_DIR=$TMPDIR/domU ssh $_SSH_OPTIONS root@$XENSERVER "[ -d $DOM0_OPT_DIR ] && echo ok || mkdir -p $DOM0_OPT_DIR" tar -zcvf local_res.tar.gz $DEFAULT_INSTALL_SRC scp $_SSH_OPTIONS local_res.tar.gz root@$XENSERVER:$DOM0_OPT_DIR rm -f local_res.tar.gz DOM0_OS_API_DIR=$DOM0_OPT_DIR/os-xenapi if [ -d $DEVSTACK_SRC ]; then DEVSTACK_SRC=$DOM0_OPT_DIR/devstack fi copy_logs_on_failure on_xenserver << END_OF_XENSERVER_COMMANDS cd $DOM0_OPT_DIR tar -zxvf local_res.tar.gz # remove root flag DEFAULT_INSTALL_SRC=${DEFAULT_INSTALL_SRC#*/} mv \$DEFAULT_INSTALL_SRC/* ./ DOM0_TOOL_DIR="$DOM0_OS_API_DIR/tools" DOM0_INSTALL_DIR="\$DOM0_TOOL_DIR/install" cd \$DOM0_INSTALL_DIR # override items in xenrc sed -i "s/DevStackOSDomU/$NODE_NAME/g" \$DOM0_INSTALL_DIR/conf/xenrc # prepare local.conf cat << LOCALCONF_CONTENT_ENDS_HERE > local.conf # ``local.conf`` is a user-maintained settings file that is sourced from ``stackrc``. # This gives it the ability to override any variables set in ``stackrc``. # The ``localrc`` section replaces the old ``localrc`` configuration file. # Note that if ``localrc`` is present it will be used in favor of this section. 
# -------------------------------- [[local|localrc]] enable_plugin os-xenapi https://github.com/openstack/os-xenapi.git # workaround for bug/1709594 CELLSV2_SETUP=singleconductor # Passwords MYSQL_PASSWORD=citrix SERVICE_TOKEN=citrix ADMIN_PASSWORD=citrix SERVICE_PASSWORD=citrix RABBIT_PASSWORD=citrix GUEST_PASSWORD=citrix XENAPI_PASSWORD="$XENSERVER_PASS" SWIFT_HASH="66a3d6b56c1f479c8b4e70ab5c2000f5" # Nice short names, so we could export an XVA VM_BRIDGE_OR_NET_NAME="osvmnet" PUB_BRIDGE_OR_NET_NAME="ospubnet" # Do not use secure delete CINDER_SECURE_DELETE=False # Compute settings VIRT_DRIVER=xenserver # Tempest settings TERMINATE_TIMEOUT=90 BUILD_TIMEOUT=600 # DevStack settings LOGDIR=${LOGDIR} LOGFILE=${LOGDIR}/stack.log # Turn on verbosity (password input does not work otherwise) VERBOSE=True # XenAPI specific XENAPI_CONNECTION_URL="http://$XENSERVER" VNCSERVER_PROXYCLIENT_ADDRESS="$XENSERVER" # Neutron specific part Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan,flat Q_ML2_TENANT_NETWORK_TYPE=vxlan VLAN_INTERFACE=eth1 PUBLIC_INTERFACE=eth2 LOCALCONF_CONTENT_ENDS_HERE if [ "$NODE_TYPE" = "all" ]; then cat << LOCALCONF_CONTENT_ENDS_HERE >> local.conf ENABLED_SERVICES+=,neutron,q-domua LOCALCONF_CONTENT_ENDS_HERE else cat << LOCALCONF_CONTENT_ENDS_HERE >> local.conf ENABLED_SERVICES=neutron,q-agt,q-domua,n-cpu,placement-client,dstat SERVICE_HOST=$CONTROLLER_IP MYSQL_HOST=$CONTROLLER_IP GLANCE_HOST=$CONTROLLER_IP RABBIT_HOST=$CONTROLLER_IP KEYSTONE_AUTH_HOST=$CONTROLLER_IP LOCALCONF_CONTENT_ENDS_HERE fi cat << LOCALCONF_CONTENT_ENDS_HERE >> local.conf # Nova user specific configuration # -------------------------------- [[post-config|\\\$NOVA_CONF]] [DEFAULT] disk_allocation_ratio = 2.0 LOCALCONF_CONTENT_ENDS_HERE # begin installation process cd \$DOM0_TOOL_DIR OPTARGS="" if [ $FORCE_SR_REPLACEMENT = 'true' ]; then OPTARGS="\$OPTARGS -f" fi if [ $DISABLE_JOURNALING = 'true' ]; then OPTARGS="\$OPTARGS -r" fi ./install_on_xen_host.sh -d 
$DEVSTACK_SRC -l $LOGDIR -w $WAIT_TILL_LAUNCH \$OPTARGS END_OF_XENSERVER_COMMANDS on_xenserver << END_OF_RM_TMPDIR #delete install dir rm $TMPDIR -rf END_OF_RM_TMPDIR # Sync compute node info in controller node if [ "$NODE_TYPE" = "compute" ]; then set +x echo "################################################################################" echo "" echo "Sync compute node info in controller node!" ssh $_SSH_OPTIONS stack@$CONTROLLER_IP bash -s -- << END_OF_SYNC_COMPUTE_COMMANDS set -exu cd /opt/stack/devstack/tools/ . discover_hosts.sh END_OF_SYNC_COMPUTE_COMMANDS fi if [ "$TEST_TYPE" == "none" ]; then exit 0 fi # Run tests DOM0_FUNCTION_DIR="$DOM0_OS_API_DIR/install/common" copy_logs_on_failure on_xenserver << END_OF_XENSERVER_COMMANDS set -exu GUEST_IP=\$(. $DOM0_FUNCTION_DIR/functions && find_ip_by_name $NODE_NAME 0) ssh -q \ -o Batchmode=yes \ -o StrictHostKeyChecking=no \ -o UserKnownHostsFile=/dev/null \ "stack@\$GUEST_IP" bash -s -- << END_OF_DEVSTACK_COMMANDS set -exu cd /opt/stack/tempest if [ "$TEST_TYPE" == "exercise" ]; then tox -eall tempest.scenario.test_server_basic_ops elif [ "$TEST_TYPE" == "smoke" ]; then #./run_tests.sh -s -N tox -esmoke elif [ "$TEST_TYPE" == "full" ]; then #nosetests -sv --with-xunit --xunit-file=tempest-full.xml tempest/api tempest/scenario tempest/thirdparty tempest/cli tox -efull fi END_OF_DEVSTACK_COMMANDS END_OF_XENSERVER_COMMANDS rm -rf $DEFAULT_INSTALL_SRC copy_logs os-xenapi-0.3.1/ChangeLog0000664000175000017500000000751213160424744016330 0ustar jenkinsjenkins00000000000000CHANGES ======= 0.3.1 ----- * os-xenapi: Fix configure driver creating issue * Set host=${dom0\_hostname} in related conf * Updated from global requirements * os-xenapi: fix tempest test error from glance * Updated from global requirements * os-xenapi: xe cmd failed after set disable journaling to true * os-xenapi: FS journaling flag failed to pass 0.3.0 ----- * os-xenapi: Support deploying devstack in xapi-pool * Support VDI streaming * os-xenpai: 
add option to disable FS journaling * Updated from global requirements * Enable dstat service on all nodes * Use singleconductor mode * Avoid using sudo in non-interactive execution * Persist journald log storage * XenAPI: fix the ephemeral disk failure on XS7.x * os-xenapi: Add XAPI pools support for openstack on xenserver * There is no documentation for os-xenapi * Drop MANIFEST.in - it's not needed by pbr * Removed the older version of python and added 3.5 * os-xenapi: fix CI to fit the change that glance-api use uwsgi * os-xenapi: Grammatical errors about swap host function * Revert "Replace basestring with six.string\_types" * Fix an error in VM migration with volumes * Replace basestring with six.string\_types * os-xenapi: Exception Error logs shown in Citrix XenServer CI * os-xenapi: Exception Error logs shown in Citrix XenServer CI * Support installing and testing multi-host OS * os-xenapi: fix ssh failure and modify jeos template name * os-xenapi: Add readme guild to xenserver devstack install script * os-xenapi: remove install dependence with devstack 2: * Updated from global requirements * Move install-devstack-xen.sh script from QA repo * Remove bittorrent related functions in dom0 plugin * Install conntrack and create image/kernel dir in Dom0 * Fix coverage test errors in os-xenapi * os-xenapi: remove install dependence with devstack 1: * Make plugin installation supporting both master and stable branches * Updated from global requirements * Updated from global requirements 0.2.0 ----- * XenAPI: add unit test for the plugin - glance: the last part * XenAPI: add unit test for Dom0 plugin xenhost.py: other * XenAPI: add unit test for Dom0 plugin xenhost.py: Network * XenAPI: add unit test for the plugin - glance: first part * XenAPI: add unit test for Dom0 plugin xenhost.py: conf opts * Devstack plugin add support of install ceilometer * Update copyrights for new added files in this repo * XenAPI: add unit test for Dom0 plugin xenhost.py: host opts * 
XenAPI: add unit test for Dom0 plugin xenhost.py: VM operations * XenAPI: add unit test for Dom0 plugin xenhost.py: run\_cmd tests * os-xenapi v2: Expose python interfaces for some Dom0 plugins * os-xenapi: add wrapper for complicated plugins * os-xenapi: add unit tests for agent.py * Revert "os-xenapi: add unit tests for agent.py" * os-xenapi: add unit tests for agent.py * os-xenapi: modify timeout setting to avoid long time test * os-xenapi: fix TypeError in agent.py when throws an exception * os-xenapi: add a maximum retry count for vbd unplug * Fix unit tests to be executed inside a chroot * Use os-xenapi for neutron dom0 plugin * Create ovs port with other params together * Move scripts of building rpm to os-xenapi * Updated from global requirements * Use default br-int for ovs-agent in Dom0 * Enable neutron VxLAN * Set default value for host * Move image configuration from devstack to os-xenapi * Set defaults for Tempest * Install XenAPI for neutron * Add devstack-plugins in os-xenapi * Add Constraints support 0.1.1 ----- * Remove uesless check * Minor fix for letfovers 0.1.0 ----- * Fix metadata for first release * Fix coverage test configuration * Fix stderr.write error in XenAPI.py * Add dom0 plugins * Add XenAPI.py into os-xenapi repo * Updated from global requirements * Updated from global requirements * Add session support for os-xenapi * Updated from global requirements * Initial cookiecutter commit * Added .gitreview os-xenapi-0.3.1/CONTRIBUTING.rst0000664000175000017500000000121513160424533017205 0ustar jenkinsjenkins00000000000000If you would like to contribute to the development of OpenStack, you must follow the steps in this page: http://docs.openstack.org/infra/manual/developers.html If you already have a good understanding of how the system works and your OpenStack accounts are set up, you can skip to the development workflow section of this documentation to learn how changes to OpenStack should be submitted for review via the Gerrit tool: 
http://docs.openstack.org/infra/manual/developers.html#development-workflow Pull requests submitted through GitHub will be ignored. Bugs should be filed on Launchpad, not GitHub: https://bugs.launchpad.net/os-xenapi os-xenapi-0.3.1/releasenotes/0000775000175000017500000000000013160424745017243 5ustar jenkinsjenkins00000000000000os-xenapi-0.3.1/releasenotes/notes/0000775000175000017500000000000013160424745020373 5ustar jenkinsjenkins00000000000000os-xenapi-0.3.1/releasenotes/notes/.placeholder0000664000175000017500000000000013160424533022637 0ustar jenkinsjenkins00000000000000os-xenapi-0.3.1/releasenotes/source/0000775000175000017500000000000013160424745020543 5ustar jenkinsjenkins00000000000000os-xenapi-0.3.1/releasenotes/source/_templates/0000775000175000017500000000000013160424745022700 5ustar jenkinsjenkins00000000000000os-xenapi-0.3.1/releasenotes/source/_templates/.placeholder0000664000175000017500000000000013160424533025144 0ustar jenkinsjenkins00000000000000os-xenapi-0.3.1/releasenotes/source/conf.py0000664000175000017500000002152013160424533022035 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. # Glance Release Notes documentation build configuration file, created by # sphinx-quickstart on Tue Nov 3 17:40:50 2015. # # This file is execfile()d with the current directory set to its # containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. 
# # All configuration values have a default; values that are commented out # serve to show the default. # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. # sys.path.insert(0, os.path.abspath('.')) # -- General configuration ------------------------------------------------ # If your documentation needs a minimal Sphinx version, state it here. # needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. extensions = [ 'oslosphinx', 'reno.sphinxext', ] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. # source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = u'os_xenapi Release Notes' copyright = u'2016, Citrix Systems' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. # The full version, including alpha/beta/rc tags. release = '' # The short X.Y version. version = '' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. # language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: # today = '' # Else, today_fmt is used as the format for a strftime call. # today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. 
exclude_patterns = [] # The reST default role (used for this markup: `text`) to use for all # documents. # default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. # add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). # add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. # show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. # modindex_common_prefix = [] # If true, keep warnings as "system message" paragraphs in the built documents. # keep_warnings = False # -- Options for HTML output ---------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'default' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. # html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. # html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". # html_title = None # A shorter title for the navigation bar. Default is the same as html_title. # html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. # html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. # html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. 
They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # Add any extra paths that contain custom files (such as robots.txt or # .htaccess) here, relative to this directory. These files are copied # directly to the root of the documentation. # html_extra_path = [] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. # html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. # html_use_smartypants = True # Custom sidebar templates, maps document names to template names. # html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. # html_additional_pages = {} # If false, no module index is generated. # html_domain_indices = True # If false, no index is generated. # html_use_index = True # If true, the index is split into individual pages for each letter. # html_split_index = False # If true, links to the reST sources are added to the pages. # html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. # html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. # html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. # html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). # html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'GlanceReleaseNotesdoc' # -- Options for LaTeX output --------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). 
# 'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). # 'pointsize': '10pt', # Additional stuff for the LaTeX preamble. # 'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [ ('index', 'GlanceReleaseNotes.tex', u'Glance Release Notes Documentation', u'Glance Developers', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. # latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. # latex_use_parts = False # If true, show page references after internal links. # latex_show_pagerefs = False # If true, show URL addresses after external links. # latex_show_urls = False # Documents to append as an appendix to all manuals. # latex_appendices = [] # If false, no module index is generated. # latex_domain_indices = True # -- Options for manual page output --------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'glancereleasenotes', u'Glance Release Notes Documentation', [u'Glance Developers'], 1) ] # If true, show URL addresses after external links. # man_show_urls = False # -- Options for Texinfo output ------------------------------------------- # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'GlanceReleaseNotes', u'Glance Release Notes Documentation', u'Glance Developers', 'GlanceReleaseNotes', 'One line description of project.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. # texinfo_appendices = [] # If false, no module index is generated. 
# texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. # texinfo_show_urls = 'footnote' # If true, do not generate a @detailmenu in the "Top" node's menu. # texinfo_no_detailmenu = False # -- Options for Internationalization output ------------------------------ locale_dirs = ['locale/'] os-xenapi-0.3.1/releasenotes/source/_static/0000775000175000017500000000000013160424745022171 5ustar jenkinsjenkins00000000000000os-xenapi-0.3.1/releasenotes/source/_static/.placeholder0000664000175000017500000000000013160424533024435 0ustar jenkinsjenkins00000000000000os-xenapi-0.3.1/releasenotes/source/unreleased.rst0000664000175000017500000000016013160424533023414 0ustar jenkinsjenkins00000000000000============================== Current Series Release Notes ============================== .. release-notes:: os-xenapi-0.3.1/releasenotes/source/index.rst0000664000175000017500000000024013160424533022373 0ustar jenkinsjenkins00000000000000============================================ os_xenapi Release Notes ============================================ .. 
toctree:: :maxdepth: 1 unreleased os-xenapi-0.3.1/Makefile0000664000175000017500000000204313160424533016204 0ustar jenkinsjenkins00000000000000 THIS_DIR=$(shell pwd) RPMBUILD_DIR=${THIS_DIR}/os_xenapi/dom0/rpmbuild PACKAGE=xenapi-plugins VERSION_FILE=${THIS_DIR}/os_xenapi/dom0/etc/xapi.d/plugins/dom0_plugin_version.py VERSION=$(shell awk '/PLUGIN_VERSION = / {gsub(/"/, ""); print $$3}' ${VERSION_FILE}) RPM_NAME=${PACKAGE}-${VERSION}-1.noarch.rpm rpm: ${THIS_DIR}/output/${RPM_NAME} ${THIS_DIR}/output/${RPM_NAME}: mkdir -p ${THIS_DIR}/output mkdir -p ${RPMBUILD_DIR} @for dir in BUILD BUILDROOT SRPMS RPMS SPECS SOURCES; do \ rm -rf ${RPMBUILD_DIR}/$$dir; \ mkdir -p ${RPMBUILD_DIR}/$$dir; \ done cp ${THIS_DIR}/os_xenapi/dom0/${PACKAGE}.spec ${RPMBUILD_DIR}/SPECS rm -rf /tmp/${PACKAGE} mkdir /tmp/${PACKAGE} cp -r ${THIS_DIR}/os_xenapi/dom0/etc/xapi.d /tmp/${PACKAGE} tar czf ${RPMBUILD_DIR}/SOURCES/${PACKAGE}-${VERSION}.tar.gz -C /tmp ${PACKAGE} rpmbuild -ba --nodeps --define "_topdir ${RPMBUILD_DIR}" --define "version ${VERSION}" ${RPMBUILD_DIR}/SPECS/${PACKAGE}.spec mv ${RPMBUILD_DIR}/RPMS/noarch/* ${THIS_DIR}/output .PHONY: clean clean: rm -rf ${RPMBUILD_DIR}os-xenapi-0.3.1/PKG-INFO0000664000175000017500000002746313160424745015663 0ustar jenkinsjenkins00000000000000Metadata-Version: 1.1 Name: os-xenapi Version: 0.3.1 Summary: XenAPI library for OpenStack projects Home-page: http://www.citrix.com Author: Citrix Author-email: openstack@citrix.com License: UNKNOWN Description-Content-Type: UNKNOWN Description: ========= os-xenapi ========= XenAPI library for OpenStack projects This library provides the support functions needed to connect to and manage a XenAPI-based hypervisor, such as Citrix's XenServer. 
* Free software: Apache license * Source: http://git.openstack.org/cgit/openstack/os-xenapi * Bugs: http://bugs.launchpad.net/os-xenapi Features -------- * TODO ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Install Devstack on XenServer ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Getting Started With XenServer and Devstack ___________________________________________ The purpose of the code in the install directory is to help developers bootstrap a XenServer(7.0 and above) + OpenStack development environment. This guide gives some pointers on how to get started. Xenserver is a Type 1 hypervisor, so it is best installed on bare metal. The OpenStack services are configured to run within a virtual machine on the XenServer host. The VM uses the XAPI toolstack to communicate with the host over a network connection (see `MGT_BRIDGE_OR_NET_NAME`). The provided local.conf helps to build a basic devstack environment. Introduction ............ Requirements ************ - A management network with access to the internet - A DHCP server to provide addresses on this management network - XenServer 7.0 or above installed with a local EXT SR (labelled "Optimised for XenDesktop" in the installer) or a remote NFS SR This network will be used as the OpenStack management network. The VM (Tenant) Network and the Public Network will not be connected to any physical interfaces, only new virtual networks which will be created by the `install_on_xen_host.sh` script. Steps to follow *************** You should install the XenServer host first, then launch the devstack installation in one of two ways, - From a remote linux client (Recommended) - Download install-devstack-xen.sh to the linux client - Configure the local.conf contents in install-devstack-xen.sh. 
- Generate passwordless ssh key using "ssh-keygen -t rsa -N "" -f devstack_key.priv" - Launch script using "install-devstack-xen.sh XENSERVER mypassword devstack_key.priv" with some optional arguments - On the XenServer host - Download os-xenapi to XenServer - Create and customise a `local.conf` - Start `install_on_xen_host.sh` script Brief explanation ***************** The `install-devstack-xen.sh` script will: - Verify some pre-requisites to installation - Download os-xenapi folder to XenServer host - Generate a standard local.conf file - Call install_on_xen_host.sh to do devstack installation - Run tempest test if required The 'install_on_xen_host.sh' script will: - Verify the host configuration - Create template for devstack DomU VM if needed. Including: - Creating the named networks, if they don't exist - Preseed-Netinstall an Ubuntu Virtual Machine , with 1 network interface: - `eth0` - Connected to `UBUNTU_INST_BRIDGE_OR_NET_NAME` (which defaults to `MGT_BRIDGE_OR_NET_NAME`) - After the Ubuntu install process has finished, the network configuration is modified to: - `eth0` - Management interface, connected to `MGT_BRIDGE_OR_NET_NAME`. Note that XAPI must be accessible through this network. - `eth1` - VM interface, connected to `VM_BRIDGE_OR_NET_NAME` - `eth2` - Public interface, connected to `PUB_BRIDGE_OR_NET_NAME` - Create a template of the VM and destroy the current VM - Create DomU VM according to the template and ssh to the VM - Create a linux service to enable devstack service after VM reboot. The service will: - Download devstack source code if needed - Call unstack.sh and stack.sh to install devstack - Reboot DomU VM Step 1: Install Xenserver ......................... Install XenServer on a clean box. You can download the latest XenServer for free from: http://www.xenserver.org/ The XenServer IP configuration depends on your local network setup. If you are using dhcp, make a reservation for XenServer, so its IP address won't change over time. 
Make a note of the XenServer's IP address, as it has to be specified in `local.conf`. The other option is to manually specify the IP setup for the XenServer box. Please make sure, that a gateway and a nameserver is configured, as `install-devstack-xen.sh` will connect to github.com to get source-code snapshots. OpenStack currently only supports file-based (thin provisioned) SR types EXT and NFS. As such the default SR should either be a local EXT SR or a remote NFS SR. To create a local EXT SR use the "Optimised for XenDesktop" option in the XenServer host installer. Step 2: Download install-devstack-xen.sh ........................................ On your remote linux client, get the install script from https://raw.githubusercontent.com/openstack/os-xenapi/master/tools/install-devstack-xen.sh Step 3: local.conf overview ........................... Devstack uses a local.conf for user-specific configuration. install-devstack-xen provides a configuration file which is suitable for many simple use cases. In more advanced use cases, you may need to configure the local.conf file after installation - or use the second approach outlined above to bypass the install-devstack-xen script. 
local.conf sample:: [[local|localrc]] enable_plugin os-xenapi https://github.com/openstack/os-xenapi.git # Passwords MYSQL_PASSWORD=citrix SERVICE_TOKEN=citrix ADMIN_PASSWORD=citrix SERVICE_PASSWORD=citrix RABBIT_PASSWORD=citrix GUEST_PASSWORD=citrix XENAPI_PASSWORD="$XENSERVER_PASS" SWIFT_HASH="66a3d6b56c1f479c8b4e70ab5c2000f5" # Do not use secure delete CINDER_SECURE_DELETE=False # Compute settings VIRT_DRIVER=xenserver # Tempest settings TERMINATE_TIMEOUT=90 BUILD_TIMEOUT=600 # DevStack settings LOGDIR=${LOGDIR} LOGFILE=${LOGDIR}/stack.log # Turn on verbosity (password input does not work otherwise) VERBOSE=True # XenAPI specific XENAPI_CONNECTION_URL="http://$XENSERVER_IP" VNCSERVER_PROXYCLIENT_ADDRESS="$XENSERVER_IP" # Neutron specific part ENABLED_SERVICES+=neutron,q-domua Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan,flat Q_ML2_TENANT_NETWORK_TYPE=vxlan VLAN_INTERFACE=eth1 PUBLIC_INTERFACE=eth2 Step 4: Run `./install-devstack-xen.sh` on your remote linux client ................................................................... An example:: # Create a passwordless ssh key ssh-keygen -t rsa -N "" -f devstack_key.priv # Install devstack ./install-devstack-xen.sh XENSERVER mypassword devstack_key.priv If you don't select wait till launch (using "-w 0" option), once this script finishes executing, login the VM (DevstackOSDomU) that it installed and tail the /opt/stack/devstack_logs/stack.log file. You will need to wait until it stack.log has finished executing. Appendix ________ This section contains useful information for using specific ubuntu network mirrors, which may be required for specific environments to resolve specific access or performance issues. As these are advanced options, the "install-devstack-xen" approach does not support them. 
If you wish to use these options, please follow the approach outlined above which involves manually downloading os-xenapi and configuring local.conf (or xenrc in the below cases) Using a specific Ubuntu mirror for installation ............................................... To speed up the Ubuntu installation, you can use a specific mirror. To specify a mirror explicitly, include the following settings in your `xenrc` file: sample code:: UBUNTU_INST_HTTP_HOSTNAME="archive.ubuntu.com" UBUNTU_INST_HTTP_DIRECTORY="/ubuntu" These variables set the `mirror/http/hostname` and `mirror/http/directory` settings in the ubuntu preseed file. The minimal ubuntu VM will use the specified parameters. Use an http proxy to speed up Ubuntu installation ................................................. To further speed up the Ubuntu VM and package installation, an internal http proxy could be used. `squid-deb-proxy` has proven to be stable. To use an http proxy, specify the following in your `xenrc` file: sample code:: UBUNTU_INST_HTTP_PROXY="http://ubuntu-proxy.somedomain.com:8000" Exporting the Ubuntu VM to an XVA ********************************* Assuming you have an nfs export, `TEMPLATE_NFS_DIR`, the following sample code will export the jeos (just enough OS) template to an XVA that can be re-imported at a later date. 
sample code:: TEMPLATE_FILENAME=devstack-jeos.xva TEMPLATE_NAME=jeos_template_for_ubuntu mountdir=$(mktemp -d) mount -t nfs "$TEMPLATE_NFS_DIR" "$mountdir" VM="$(xe template-list name-label="$TEMPLATE_NAME" --minimal)" xe template-export template-uuid=$VM filename="$mountdir/$TEMPLATE_FILENAME" umount "$mountdir" rm -rf "$mountdir" Import the Ubuntu VM ******************** Given you have an nfs export `TEMPLATE_NFS_DIR` where you exported the Ubuntu VM as `TEMPLATE_FILENAME`: sample code:: mountdir=$(mktemp -d) mount -t nfs "$TEMPLATE_NFS_DIR" "$mountdir" xe vm-import filename="$mountdir/$TEMPLATE_FILENAME" umount "$mountdir" rm -rf "$mountdir" Platform: UNKNOWN Classifier: Environment :: OpenStack Classifier: Intended Audience :: Information Technology Classifier: Intended Audience :: System Administrators Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.3 Classifier: Programming Language :: Python :: 3.5 os-xenapi-0.3.1/AUTHORS0000664000175000017500000000062513160424744015624 0ustar jenkinsjenkins00000000000000Arundhati Surpur Bob Ball Huan Xie Ihar Hrachyshka Javier Pena Jianghua Wang John Hua Luong Anh Tuan Tony Breeds jianghua wang naichuans os-xenapi-0.3.1/README.rst0000664000175000017500000002204513160424533016237 0ustar jenkinsjenkins00000000000000========= os-xenapi ========= XenAPI library for OpenStack projects This library provides the support functions needed to connect to and manage a XenAPI-based hypervisor, such as Citrix's XenServer. 
* Free software: Apache license * Source: http://git.openstack.org/cgit/openstack/os-xenapi * Bugs: http://bugs.launchpad.net/os-xenapi Features -------- * TODO ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Install Devstack on XenServer ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Getting Started With XenServer and Devstack ___________________________________________ The purpose of the code in the install directory is to help developers bootstrap a XenServer (7.0 and above) + OpenStack development environment. This guide gives some pointers on how to get started. XenServer is a Type 1 hypervisor, so it is best installed on bare metal. The OpenStack services are configured to run within a virtual machine on the XenServer host. The VM uses the XAPI toolstack to communicate with the host over a network connection (see `MGT_BRIDGE_OR_NET_NAME`). The provided local.conf helps to build a basic devstack environment. Introduction ............ Requirements ************ - A management network with access to the internet - A DHCP server to provide addresses on this management network - XenServer 7.0 or above installed with a local EXT SR (labelled "Optimised for XenDesktop" in the installer) or a remote NFS SR This network will be used as the OpenStack management network. The VM (Tenant) Network and the Public Network will not be connected to any physical interfaces, only to new virtual networks which will be created by the `install_on_xen_host.sh` script. Steps to follow *************** You should install the XenServer host first, then launch the devstack installation in one of two ways: - From a remote Linux client (Recommended) - Download install-devstack-xen.sh to the Linux client - Configure the local.conf contents in install-devstack-xen.sh.
- Generate a passwordless ssh key using "ssh-keygen -t rsa -N "" -f devstack_key.priv" - Launch the script using "install-devstack-xen.sh XENSERVER mypassword devstack_key.priv" with some optional arguments - On the XenServer host - Download os-xenapi to XenServer - Create and customise a `local.conf` - Start the `install_on_xen_host.sh` script Brief explanation ***************** The `install-devstack-xen.sh` script will: - Verify some pre-requisites to installation - Download the os-xenapi folder to the XenServer host - Generate a standard local.conf file - Call install_on_xen_host.sh to do the devstack installation - Run tempest tests if required The `install_on_xen_host.sh` script will: - Verify the host configuration - Create a template for the devstack DomU VM if needed, including: - Creating the named networks, if they don't exist - Preseed-Netinstall an Ubuntu Virtual Machine, with 1 network interface: - `eth0` - Connected to `UBUNTU_INST_BRIDGE_OR_NET_NAME` (which defaults to `MGT_BRIDGE_OR_NET_NAME`) - After the Ubuntu install process has finished, the network configuration is modified to: - `eth0` - Management interface, connected to `MGT_BRIDGE_OR_NET_NAME`. Note that XAPI must be accessible through this network. - `eth1` - VM interface, connected to `VM_BRIDGE_OR_NET_NAME` - `eth2` - Public interface, connected to `PUB_BRIDGE_OR_NET_NAME` - Create a template of the VM and destroy the current VM - Create the DomU VM according to the template and ssh to the VM - Create a Linux service to enable the devstack service after VM reboot. The service will: - Download the devstack source code if needed - Call unstack.sh and stack.sh to install devstack - Reboot the DomU VM Step 1: Install XenServer ......................... Install XenServer on a clean box. You can download the latest XenServer for free from: http://www.xenserver.org/ The XenServer IP configuration depends on your local network setup. If you are using DHCP, make a reservation for XenServer, so its IP address won't change over time.
Make a note of the XenServer's IP address, as it has to be specified in `local.conf`. The other option is to manually specify the IP setup for the XenServer box. Please make sure that a gateway and a nameserver are configured, as `install-devstack-xen.sh` will connect to github.com to get source-code snapshots. OpenStack currently only supports file-based (thin provisioned) SR types EXT and NFS. As such the default SR should either be a local EXT SR or a remote NFS SR. To create a local EXT SR use the "Optimised for XenDesktop" option in the XenServer host installer. Step 2: Download install-devstack-xen.sh ........................................ On your remote Linux client, get the install script from https://raw.githubusercontent.com/openstack/os-xenapi/master/tools/install-devstack-xen.sh Step 3: local.conf overview ........................... Devstack uses a local.conf file for user-specific configuration. install-devstack-xen provides a configuration file which is suitable for many simple use cases. In more advanced use cases, you may need to configure the local.conf file after installation - or use the second approach outlined above to bypass the install-devstack-xen script.
local.conf sample:: [[local|localrc]] enable_plugin os-xenapi https://github.com/openstack/os-xenapi.git # Passwords MYSQL_PASSWORD=citrix SERVICE_TOKEN=citrix ADMIN_PASSWORD=citrix SERVICE_PASSWORD=citrix RABBIT_PASSWORD=citrix GUEST_PASSWORD=citrix XENAPI_PASSWORD="$XENSERVER_PASS" SWIFT_HASH="66a3d6b56c1f479c8b4e70ab5c2000f5" # Do not use secure delete CINDER_SECURE_DELETE=False # Compute settings VIRT_DRIVER=xenserver # Tempest settings TERMINATE_TIMEOUT=90 BUILD_TIMEOUT=600 # DevStack settings LOGDIR=${LOGDIR} LOGFILE=${LOGDIR}/stack.log # Turn on verbosity (password input does not work otherwise) VERBOSE=True # XenAPI specific XENAPI_CONNECTION_URL="http://$XENSERVER_IP" VNCSERVER_PROXYCLIENT_ADDRESS="$XENSERVER_IP" # Neutron specific part ENABLED_SERVICES+=neutron,q-domua Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan,flat Q_ML2_TENANT_NETWORK_TYPE=vxlan VLAN_INTERFACE=eth1 PUBLIC_INTERFACE=eth2 Step 4: Run `./install-devstack-xen.sh` on your remote Linux client ................................................................... An example:: # Create a passwordless ssh key ssh-keygen -t rsa -N "" -f devstack_key.priv # Install devstack ./install-devstack-xen.sh XENSERVER mypassword devstack_key.priv If you don't choose to wait for the launch (using the "-w 0" option), once this script finishes executing, log in to the VM (DevstackOSDomU) that it installed and tail the /opt/stack/devstack_logs/stack.log file. You will need to wait until stack.sh has finished executing. Appendix ________ This section contains useful information about using specific Ubuntu network mirrors, which may be required in some environments to resolve access or performance issues. As these are advanced options, the "install-devstack-xen" approach does not support them.
If you wish to use these options, please follow the approach outlined above, which involves manually downloading os-xenapi and configuring local.conf (or xenrc in the cases below). Using a specific Ubuntu mirror for installation ............................................... To speed up the Ubuntu installation, you can use a specific mirror. To specify a mirror explicitly, include the following settings in your `xenrc` file: sample code:: UBUNTU_INST_HTTP_HOSTNAME="archive.ubuntu.com" UBUNTU_INST_HTTP_DIRECTORY="/ubuntu" These variables set the `mirror/http/hostname` and `mirror/http/directory` settings in the Ubuntu preseed file. The minimal Ubuntu VM will use the specified parameters. Use an HTTP proxy to speed up Ubuntu installation ................................................. To further speed up the Ubuntu VM and package installation, an internal HTTP proxy can be used. `squid-deb-proxy` has proven to be stable. To use an HTTP proxy, specify the following in your `xenrc` file: sample code:: UBUNTU_INST_HTTP_PROXY="http://ubuntu-proxy.somedomain.com:8000" Exporting the Ubuntu VM to an XVA ********************************* Assuming you have an NFS export, `TEMPLATE_NFS_DIR`, the following sample code will export the jeos (just enough OS) template to an XVA that can be re-imported at a later date.
sample code:: TEMPLATE_FILENAME=devstack-jeos.xva TEMPLATE_NAME=jeos_template_for_ubuntu mountdir=$(mktemp -d) mount -t nfs "$TEMPLATE_NFS_DIR" "$mountdir" VM="$(xe template-list name-label="$TEMPLATE_NAME" --minimal)" xe template-export template-uuid=$VM filename="$mountdir/$TEMPLATE_FILENAME" umount "$mountdir" rm -rf "$mountdir" Import the Ubuntu VM ******************** Given you have an nfs export `TEMPLATE_NFS_DIR` where you exported the Ubuntu VM as `TEMPLATE_FILENAME`: sample code:: mountdir=$(mktemp -d) mount -t nfs "$TEMPLATE_NFS_DIR" "$mountdir" xe vm-import filename="$mountdir/$TEMPLATE_FILENAME" umount "$mountdir" rm -rf "$mountdir" os-xenapi-0.3.1/.testr.conf0000664000175000017500000000047713160424533016643 0ustar jenkinsjenkins00000000000000[DEFAULT] test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \ OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \ OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \ ${PYTHON:-python} -m subunit.run discover -t ./ . $LISTOPT $IDOPTION test_id_option=--load-list $IDFILE test_list_option=--list
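The `test_command` in `.testr.conf` above leans on shell parameter-expansion defaults: each `${VAR:-default}` falls back to the literal default when the variable is unset, so a caller can override capture behaviour, timeout, or interpreter purely through the environment. A minimal sketch (not part of the project) of that fallback behaviour:

```shell
#!/bin/sh
# ${VAR:-default} expands to $VAR when it is set and non-empty,
# otherwise to the literal default -- exactly how .testr.conf picks
# its capture, timeout, and interpreter settings.

unset OS_TEST_TIMEOUT
echo "timeout=${OS_TEST_TIMEOUT:-60}"    # unset -> default: timeout=60

OS_TEST_TIMEOUT=120
echo "timeout=${OS_TEST_TIMEOUT:-60}"    # set -> env value: timeout=120

unset PYTHON
echo "interpreter=${PYTHON:-python}"     # unset -> interpreter=python
```

Running the test suite with, e.g., `OS_TEST_TIMEOUT=120 testr run` therefore overrides the 60-second default without editing the config file.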