sahara-12.0.0/0000775000175000017500000000000013656752227013072 5ustar zuulzuul00000000000000sahara-12.0.0/doc/0000775000175000017500000000000013656752227013637 5ustar zuulzuul00000000000000sahara-12.0.0/doc/requirements.txt0000664000175000017500000000064013656752032017115 0ustar zuulzuul00000000000000# The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. openstackdocstheme>=1.31.2 # Apache-2.0 os-api-ref>=1.6.0 # Apache-2.0 reno>=2.5.0 # Apache-2.0 sphinx!=1.6.6,!=1.6.7,!=2.1.0,>=1.6.2 # BSD sphinxcontrib-httpdomain>=1.3.0 # BSD whereto>=0.3.0 # Apache-2.0 sahara-12.0.0/doc/source/0000775000175000017500000000000013656752227015137 5ustar zuulzuul00000000000000sahara-12.0.0/doc/source/config-generator.conf0000664000175000017500000000061613656752032021234 0ustar zuulzuul00000000000000[DEFAULT] wrap_width = 79 namespace = sahara.config namespace = keystonemiddleware.auth_token namespace = oslo.concurrency namespace = oslo.db namespace = oslo.log namespace = oslo.messaging namespace = oslo.middleware.cors namespace = oslo.middleware.http_proxy_to_wsgi namespace = oslo.policy namespace = oslo.service.periodic_task namespace = oslo.service.sslutils namespace = oslo.service.wsgi sahara-12.0.0/doc/source/_extra/0000775000175000017500000000000013656752227016421 5ustar zuulzuul00000000000000sahara-12.0.0/doc/source/_extra/.htaccess0000664000175000017500000000152613656752032020215 0ustar zuulzuul00000000000000# renamed after the switch to Storyboard redirectmatch 301 ^/sahara/([^/]+)/contributor/launchpad.html$ /sahara/$1/contributor/project.html # renamed after some documentation reshuffling redirectmatch 301 ^/sahara/(?!ocata|pike|queens)([^/]+)/user/vanilla-imagebuilder.html$ /sahara/$1/user/vanilla-plugin.html redirectmatch 301 ^/sahara/(?!ocata|pike|queens)([^/]+)/user/cdh-imagebuilder.html$ /sahara/$1/user/cdh-plugin.html redirectmatch 301 ^/sahara/(?!ocata|pike|queens)([^/]+)/user/guest-requirements.html$ /sahara/$1/user/building-guest-images.html redirectmatch 301 ^/sahara/([^/]+)/user/([^-]+)-plugin.html$ /sahara-plugin-$2/$1/ redirectmatch 301 ^/sahara/([^/]+)/contributor/how-to-participate.html$ /sahara/$1/contributor/contributing.html redirectmatch 301 ^/sahara/([^/]+)/contributor/project.html$ /sahara/$1/contributor/contributing.html sahara-12.0.0/doc/source/cli/0000775000175000017500000000000013656752227015706 5ustar zuulzuul00000000000000sahara-12.0.0/doc/source/cli/index.rst0000664000175000017500000000031413656752032017537 0ustar zuulzuul00000000000000======================== Sahara CLI Documentation ======================== In this section you will find information on Sahara’s command line interface. .. toctree:: :maxdepth: 1 sahara-status sahara-12.0.0/doc/source/cli/sahara-status.rst0000664000175000017500000000363713656752032021223 0ustar zuulzuul00000000000000============= sahara-status ============= ---------------------------------------- CLI interface for Sahara status commands ---------------------------------------- Synopsis ======== :: sahara-status [] Description =========== :program:`sahara-status` is a tool that provides routines for checking the status of a Sahara deployment. 
Options ======= The standard pattern for executing a :program:`sahara-status` command is:: sahara-status [] Run without arguments to see a list of available command categories:: sahara-status Categories are: * ``upgrade`` Detailed descriptions are below: You can also run with a category argument such as ``upgrade`` to see a list of all commands in that category:: sahara-status upgrade These sections describe the available categories and arguments for :program:`sahara-status`. Upgrade ~~~~~~~ .. _sahara-status-checks: ``sahara-status upgrade check`` Performs a release-specific readiness check before restarting services with new code. For example, missing or changed configuration options, incompatible object states, or other conditions that could lead to failures while upgrading. **Return Codes** .. list-table:: :widths: 20 80 :header-rows: 1 * - Return code - Description * - 0 - All upgrade readiness checks passed successfully and there is nothing to do. * - 1 - At least one check encountered an issue and requires further investigation. This is considered a warning but the upgrade may be OK. * - 2 - There was an upgrade status check failure that needs to be investigated. This should be considered something that stops an upgrade. * - 255 - An unexpected error occurred. **History of Checks** **10.0.0 (Stein)** * Sample check to be filled in with checks as they are added in Stein. sahara-12.0.0/doc/source/reference/0000775000175000017500000000000013656752227017075 5ustar zuulzuul00000000000000sahara-12.0.0/doc/source/reference/restapi.rst0000664000175000017500000000757413656752032021305 0ustar zuulzuul00000000000000Sahara REST API v1.1 ******************** 1 General API information ========================= This section contains base info about the sahara REST API design. 1.1 Authentication and Authorization ------------------------------------ The sahara API uses the OpenStack Identity service as the default authentication service. When the Identity service is enabled, users who submit requests to the sahara service must provide an authentication token in the ``X-Auth-Token`` request header. A user can obtain the token by authenticating to the Identity service endpoint. For more information about the Identity service, please see the :keystone-doc:`keystone project developer documentation <>`. With each request, a user must specify the keystone project in the url path, for example: '/v1.1/{project_id}/clusters'. Sahara will perform the requested operation in the specified project using the provided credentials. Therefore, clusters may be created and managed only within projects to which the user has access. 1.2 Request / Response Types ---------------------------- The sahara API supports the JSON data serialization format. This means that for requests that contain a body, the ``Content-Type`` header must be set to the MIME type value ``application/json``. Also, clients should accept JSON serialized responses by specifying the ``Accept`` header with the MIME type value ``application/json`` or adding the ``.json`` extension to the resource name. The default response format is ``application/json`` if the client does not specify an ``Accept`` header or append the ``.json`` extension in the URL path. Example: .. sourcecode:: text GET /v1.1/{project_id}/clusters.json or .. sourcecode:: text GET /v1.1/{project_id}/clusters Accept: application/json 1.3 Navigation by response -------------------------- Sahara API supports delivering response data by pages. 
Users can pass two parameters in API GET requests that return an array of
objects. The parameters are:

``limit`` - the maximum number of objects returned in the response. This
parameter must be a positive integer.

``marker`` - the ID of the last element of the previous page; this element
itself is not included in the response.

Example: get the 15 clusters that follow the cluster with
id=d62ad147-5c10-418c-a21a-3a6597044f29:

.. sourcecode:: text

    GET /v1.1/{project_id}/clusters?limit=15&marker=d62ad147-5c10-418c-a21a-3a6597044f29

For convenience, the response contains markers of the previous and following
pages in the 'prev' and 'next' fields.

There is also a ``sort_by`` parameter for sorting objects. The Sahara API
supports both ascending and descending sorting.

Examples:

Sort clusters by name:

.. sourcecode:: text

    GET /v1.1/{project_id}/clusters?sort_by=name

Sort clusters by date of creation in descending order:

.. sourcecode:: text

    GET /v1.1/{project_id}/clusters?sort_by=-created_at

1.4 Faults
----------

The sahara API returns an error response if a failure occurs while processing
a request. Sahara uses only standard HTTP error codes. 4xx errors indicate
problems in the particular request being sent from the client, and 5xx errors
indicate server-side problems.

The response body will contain richer information about the cause of the
error. An error response follows the format illustrated by the following
example:

.. sourcecode:: http

    HTTP/1.1 400 BAD REQUEST
    Content-type: application/json
    Content-length: 126

    {
        "error_name": "CLUSTER_NAME_ALREADY_EXISTS",
        "error_message": "Cluster with name 'test-cluster' already exists",
        "error_code": 400
    }

The ``error_code`` attribute is an HTTP response code. The ``error_name``
attribute indicates the generic error type without any concrete IDs or names.
The last attribute, ``error_message``, contains a human-readable error
description.

2 API
=====

- `Sahara REST API Reference (OpenStack API Complete Reference - DataProcessing) `_

sahara-12.0.0/doc/source/reference/index.rst

=====================
Programming Reference
=====================

Plugins and EDP
===============

.. toctree::
   :maxdepth: 2

   plugins
   plugin-spi
   edp-spi

REST API
========

.. toctree::
   :maxdepth: 2

   restapi

sahara-12.0.0/doc/source/reference/edp-spi.rst

Elastic Data Processing (EDP) SPI
=================================

The EDP job engine objects provide methods for creating, monitoring, and
terminating jobs on Sahara clusters. Provisioning plugins that support EDP
must return an EDP job engine object from the :ref:`get_edp_engine` method
described in :doc:`plugin-spi`.

Sahara provides subclasses of the base job engine interface that support EDP
on clusters running Oozie, Spark, and/or Storm. These are described below.

.. _edp_spi_job_types:

Job Types
---------

Some of the methods below test the job type. Sahara supports the following
string values for job types:

* Hive
* Java
* Pig
* MapReduce
* MapReduce.Streaming
* Spark
* Shell
* Storm

.. note::
    Constants for job types are defined in *sahara.utils.edp*.

Job Status Values
-----------------

Several of the methods below return a job status value.
A job status value is a dictionary of the form: {'status': *job_status_value*} where *job_status_value* is one of the following string values: * DONEWITHERROR * FAILED * TOBEKILLED * KILLED * PENDING * RUNNING * SUCCEEDED Note, constants for job status are defined in *sahara.utils.edp* EDP Job Engine Interface ------------------------ The sahara.service.edp.base_engine.JobEngine class is an abstract class with the following interface: cancel_job(job_execution) ~~~~~~~~~~~~~~~~~~~~~~~~~ Stops the running job whose id is stored in the job_execution object. *Returns*: None if the operation was unsuccessful or an updated job status value. get_job_status(job_execution) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Returns the current status of the job whose id is stored in the job_execution object. *Returns*: a job status value. run_job(job_execution) ~~~~~~~~~~~~~~~~~~~~~~ Starts the job described by the job_execution object *Returns*: a tuple of the form (job_id, job_status_value, job_extra_info). * *job_id* is required and must be a string that allows the EDP engine to uniquely identify the job. * *job_status_value* may be None or a job status value * *job_extra_info* may be None or optionally a dictionary that the EDP engine uses to store extra information on the job_execution_object. validate_job_execution(cluster, job, data) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Checks whether or not the job can run on the cluster with the specified data. Data contains values passed to the */jobs//execute* REST API method during job launch. If the job cannot run for any reason, including job configuration, cluster configuration, or invalid data, this method should raise an exception. *Returns*: None get_possible_job_config(job_type) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Returns hints used by the Sahara UI to prompt users for values when configuring and launching a job. Note that no hints are required. See :doc:`../user/edp` for more information on how configuration values, parameters, and arguments are used by different job types. *Returns*: a dictionary of the following form, containing hints for configs, parameters, and arguments for the job type: {'job_config': {'configs': [], 'params': {}, 'args': []}} * *args* is a list of strings * *params* contains simple key/value pairs * each item in *configs* is a dictionary with entries for 'name' (required), 'value', and 'description' get_supported_job_types() ~~~~~~~~~~~~~~~~~~~~~~~~~ This method returns the job types that the engine supports. Not all engines will support all job types. *Returns*: a list of job types supported by the engine. Oozie Job Engine Interface -------------------------- The sahara.service.edp.oozie.engine.OozieJobEngine class is derived from JobEngine. It provides implementations for all of the methods in the base interface but adds a few more abstract methods. Note that the *validate_job_execution(cluster, job, data)* method does basic checks on the job configuration but probably should be overloaded to include additional checks on the cluster configuration. For example, the job engines for plugins that support Oozie add checks to make sure that the Oozie service is up and running. get_hdfs_user() ~~~~~~~~~~~~~~~ Oozie uses HDFS to distribute job files. This method gives the name of the account that is used on the data nodes to access HDFS (such as 'hadoop' or 'hdfs'). The Oozie job engine expects that HDFS contains a directory for this user under */user/*. *Returns*: a string giving the username for the account used to access HDFS on the cluster. 
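As a rough illustration, a plugin-specific Oozie engine usually only needs to
override a handful of these methods. The sketch below is hypothetical (the
class name and the hard-coded ``'hadoop'`` account are assumptions, not taken
from any real plugin) and covers ``get_hdfs_user()`` together with the
``create_hdfs_dir()`` method described next:

.. sourcecode:: python

    from sahara.service.edp.oozie import engine as oozie_engine


    class MyPluginOozieJobEngine(oozie_engine.OozieJobEngine):
        """EDP engine sketch for a hypothetical plugin."""

        def get_hdfs_user(self):
            # Oozie job files will be staged under /user/hadoop in HDFS.
            return 'hadoop'

        def create_hdfs_dir(self, remote, dir_name):
            # Shell out to the HDFS CLI on the remote node as the HDFS user;
            # real plugins may need version-specific logic here.
            remote.execute_command(
                'sudo su - -c "hdfs dfs -mkdir -p %s" %s'
                % (dir_name, self.get_hdfs_user()))

A real engine would also implement the URI getters described below
(``get_oozie_server_uri()`` and related methods), typically using addresses
that the plugin stores when the cluster is started.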
create_hdfs_dir(remote, dir_name) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The remote object *remote* references a node in the cluster. This method creates the HDFS directory *dir_name* under the user specified by *get_hdfs_user()* in the HDFS accessible from the specified node. For example, if the HDFS user is 'hadoop' and the dir_name is 'test' this method would create '/user/hadoop/test'. The reason that this method is broken out in the interface as an abstract method is that different versions of Hadoop treat path creation differently. *Returns*: None get_oozie_server_uri(cluster) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Returns the full URI for the Oozie server, for example *http://my_oozie_host:11000/oozie*. This URI is used by an Oozie client to send commands and queries to the Oozie server. *Returns*: a string giving the Oozie server URI. get_oozie_server(self, cluster) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Returns the node instance for the host in the cluster running the Oozie server. *Returns*: a node instance. get_name_node_uri(self, cluster) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Returns the full URI for the Hadoop NameNode, for example *http://master_node:8020*. *Returns*: a string giving the NameNode URI. get_resource_manager_uri(self, cluster) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Returns the full URI for the Hadoop JobTracker for Hadoop version 1 or the Hadoop ResourceManager for Hadoop version 2. *Returns*: a string giving the JobTracker or ResourceManager URI. Spark Job Engine ---------------- The sahara.service.edp.spark.engine.SparkJobEngine class provides a full EDP implementation for Spark standalone clusters. .. note:: The *validate_job_execution(cluster, job, data)* method does basic checks on the job configuration but probably should be overloaded to include additional checks on the cluster configuration. For example, the job engine returned by the Spark plugin checks that the Spark version is >= 1.0.0 to ensure that *spark-submit* is available. get_driver_classpath(self) ~~~~~~~~~~~~~~~~~~~~~~~~~~ Returns driver class path. *Returns*: a string of the following format ' --driver-class-path *class_path_value*'. sahara-12.0.0/doc/source/reference/plugins.rst0000664000175000017500000000227313656752032021306 0ustar zuulzuul00000000000000Pluggable Provisioning Mechanism ================================ Sahara can be integrated with 3rd party management tools like Apache Ambari and Cloudera Management Console. The integration is achieved using the plugin mechanism. In short, responsibilities are divided between the Sahara core and a plugin as follows. Sahara interacts with the user and uses Heat to provision OpenStack resources (VMs, baremetal servers, security groups, etc.) The plugin installs and configures a Hadoop cluster on the provisioned instances. Optionally, a plugin can deploy management and monitoring tools for the cluster. Sahara provides plugins with utility methods to work with provisioned instances. A plugin must extend the `sahara.plugins.provisioning:ProvisioningPluginBase` class and implement all the required methods. Read :doc:`plugin-spi` for details. The `instance` objects provided by Sahara have a `remote` property which can be used to interact with instances. The `remote` is a context manager so you can use it in `with instance.remote:` statements. The list of available commands can be found in `sahara.utils.remote.InstanceInteropHelper`. See the source code of the Vanilla plugin for usage examples. 
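For orientation, here is a minimal, hypothetical sketch of that pattern. The
configuration path, file contents, and helper function names are invented for
this example; ``write_file_to`` and ``execute_command`` are real
``InstanceInteropHelper`` methods, the helper is obtained by calling
``instance.remote()``, and ``get_instances`` comes from
``sahara.plugins.utils``:

.. sourcecode:: python

    from sahara.plugins import utils as plugin_utils


    def _configure_instance(instance, config_text):
        # 'remote()' gives an InstanceInteropHelper bound to this instance.
        with instance.remote() as r:
            # Upload a generated configuration file to the node ...
            r.write_file_to('/tmp/my-framework.conf', config_text)
            # ... and move it into place with elevated privileges.
            r.execute_command(
                'sudo mv /tmp/my-framework.conf /etc/my-framework.conf')


    def _push_config_to_cluster(cluster, config_text):
        # Apply the same configuration to every instance in the cluster.
        for instance in plugin_utils.get_instances(cluster):
            _configure_instance(instance, config_text)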
sahara-12.0.0/doc/source/reference/plugin-spi.rst0000664000175000017500000004435213656752032021720 0ustar zuulzuul00000000000000Plugin SPI ========== Plugin interface ---------------- get_versions() ~~~~~~~~~~~~~~ Returns all available versions of the plugin. Depending on the plugin, this version may map directly to the HDFS version, or it may not; check your plugin's documentation. It is responsibility of the plugin to make sure that all required images for each hadoop version are available, as well as configs and whatever else that plugin needs to create the Hadoop cluster. *Returns*: list of strings representing plugin versions *Example return value*: ["1.2.1", "2.3.0", "2.4.1"] get_configs( hadoop_version ) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Lists all configs supported by the plugin with descriptions, defaults, and targets for which this config is applicable. *Returns*: list of configs *Example return value*: (("JobTracker heap size", "JobTracker heap size, in MB", "int", "512", `"mapreduce"`, "node", True, 1)) get_node_processes( hadoop_version ) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Returns all supported services and node processes for a given Hadoop version. Each node process belongs to a single service and that relationship is reflected in the returned dict object. See example for details. *Returns*: dictionary having entries (service -> list of processes) *Example return value*: {"mapreduce": ["tasktracker", "jobtracker"], "hdfs": ["datanode", "namenode"]} get_required_image_tags( hadoop_version ) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Lists tags that should be added to OpenStack Image via Image Registry. Tags are used to filter Images by plugin and hadoop version. *Returns*: list of tags *Example return value*: ["tag1", "some_other_tag", ...] validate( cluster ) ~~~~~~~~~~~~~~~~~~~ Validates a given cluster object. Raises a *SaharaException* with a meaningful message in the case of validation failure. *Returns*: None *Example exception*: validate_scaling( cluster, existing, additional ) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ To be improved. Validates a given cluster before scaling operation. *Returns*: list of validation_errors update_infra( cluster ) ~~~~~~~~~~~~~~~~~~~~~~~ This method is no longer used now that Sahara utilizes Heat for OpenStack resource provisioning, and is not currently utilized by any plugin. *Returns*: None configure_cluster( cluster ) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Configures cluster on the VMs provisioned by sahara. In this function the plugin should perform all actions like adjusting OS, installing required packages (including Hadoop, if needed), configuring Hadoop, etc. *Returns*: None start_cluster( cluster ) ~~~~~~~~~~~~~~~~~~~~~~~~ Start already configured cluster. This method is guaranteed to be called only on a cluster which was already prepared with configure_cluster(...) call. *Returns*: None scale_cluster( cluster, instances ) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Scale an existing cluster with additional instances. The instances argument is a list of ready-to-configure instances. Plugin should do all configuration operations in this method and start all services on those instances. *Returns*: None .. _get_edp_engine: get_edp_engine( cluster, job_type ) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Returns an EDP job engine object that supports the specified job_type on the given cluster, or None if there is no support. The EDP job engine object returned must implement the interface described in :doc:`edp-spi`. 
The job_type is a String matching one of the job types listed in :ref:`edp_spi_job_types`. *Returns*: an EDP job engine object or None decommission_nodes( cluster, instances ) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Scale cluster down by removing a list of instances. The plugin should stop services on the provided list of instances. The plugin also may need to update some configurations on other instances when nodes are removed; if so, this method must perform that reconfiguration. *Returns*: None on_terminate_cluster( cluster ) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ When user terminates cluster, sahara simply shuts down all the cluster VMs. This method is guaranteed to be invoked before that, allowing the plugin to do some clean-up. *Returns*: None get_open_ports( node_group ) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ When user requests sahara to automatically create a security group for the node group (``auto_security_group`` property set to True), sahara will call this plugin method to get a list of ports that need to be opened. *Returns*: list of ports to be open in auto security group for the given node group get_edp_job_types( versions ) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Optional method, which provides the ability to see all supported job types for specified plugin versions. *Returns*: dict with supported job types for specified versions of plugin recommend_configs( self, cluster, scaling=False ) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Optional method, which provides recommendations for cluster configuration before creating/scaling operation. get_image_arguments( self, hadoop_version ): ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Optional method, which gets the argument set taken by the plugin's image generator, or NotImplemented if the plugin does not provide image generation support. See :doc:`../contributor/image-gen`. *Returns*: A sequence with items of type sahara.plugins.images.ImageArgument. pack_image( self, hadoop_version, remote, test_only=False, ... ): ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Optional method which packs an image for registration in Glance and use by Sahara. This method is called from the image generation CLI rather than from the Sahara api or engine service. See :doc:`../contributor/image-gen`. *Returns*: None (modifies the image pointed to by the remote in-place.) validate_images( self, cluster, test_only=False, image_arguments=None ): ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Validates the image to be used to create a cluster, to ensure that it meets the specifications of the plugin. See :doc:`../contributor/image-gen`. *Returns*: None; may raise a sahara.plugins.exceptions.ImageValidationError Object Model ============ Here is a description of all the objects involved in the API. Notes: - clusters and node_groups have 'extra' fields allowing the plugin to persist any supplementary info about the cluster. - node_process is just a process that runs on some node in cluster. Example list of node processes: 1. jobtracker 2. namenode 3. tasktracker 4. datanode - Each plugin may have different names for the same processes. Config ------ An object, describing one configuration entry +-------------------+--------+------------------------------------------------+ | Property | Type | Description | +===================+========+================================================+ | name | string | Config name. 
| +-------------------+--------+------------------------------------------------+ | description | string | A hint for user, what this config is used for. | +-------------------+--------+------------------------------------------------+ | config_type | enum | possible values are: 'string', 'integer', | | | | 'boolean', 'enum'. | +-------------------+--------+------------------------------------------------+ | config_values | list | List of possible values, if config_type is | | | | enum. | +-------------------+--------+------------------------------------------------+ | default_value | string | Default value for config. | +-------------------+--------+------------------------------------------------+ | applicable_target | string | The target could be either a service returned | | | | by get_node_processes(...) call | | | | in form of 'service:', or | | | | 'general'. | +-------------------+--------+------------------------------------------------+ | scope | enum | Could be either 'node' or 'cluster'. | +-------------------+--------+------------------------------------------------+ | is_optional | bool | If is_optional is False and no default_value | | | | is specified, user must provide a value. | +-------------------+--------+------------------------------------------------+ | priority | int | 1 or 2. A Hint for UI. Configs with priority | | | | *1* are always displayed. | | | | Priority *2* means user should click a button | | | | to see the config. | +-------------------+--------+------------------------------------------------+ User Input ---------- Value provided by user for a specific config. +----------+--------+--------------------------------------------------------+ | Property | Type | Description | +==========+========+========================================================+ | config | config | A config object for which this user_input is provided. | +----------+--------+--------------------------------------------------------+ | value | ... | Value for the config. Type depends on Config type. | +----------+--------+--------------------------------------------------------+ Instance -------- An instance created for cluster. +---------------+---------+---------------------------------------------------+ | Property | Type | Description | +===============+=========+===================================================+ | instance_id | string | Unique instance identifier. | +---------------+---------+---------------------------------------------------+ | instance_name | string | OpenStack instance name. | +---------------+---------+---------------------------------------------------+ | internal_ip | string | IP to communicate with other instances. | +---------------+---------+---------------------------------------------------+ | management_ip | string | IP of instance, accessible outside of internal | | | | network. | +---------------+---------+---------------------------------------------------+ | volumes | list | List of volumes attached to instance. Empty if | | | | ephemeral drive is used. | +---------------+---------+---------------------------------------------------+ | nova_info | object | Nova instance object. | +---------------+---------+---------------------------------------------------+ | username | string | Username, that sahara uses for establishing | | | | remote connections to instance. | +---------------+---------+---------------------------------------------------+ | hostname | string | Same as instance_name. 
| +---------------+---------+---------------------------------------------------+ | fqdn | string | Fully qualified domain name for this instance. | +---------------+---------+---------------------------------------------------+ | remote | helpers | Object with helpers for performing remote | | | | operations. | +---------------+---------+---------------------------------------------------+ Node Group ---------- Group of instances. +----------------------+--------+---------------------------------------------+ | Property | Type | Description | +======================+========+=============================================+ | name | string | Name of this Node Group in Cluster. | +----------------------+--------+---------------------------------------------+ | flavor_id | string | OpenStack Flavor used to boot instances. | +----------------------+--------+---------------------------------------------+ | image_id | string | Image id used to boot instances. | +----------------------+--------+---------------------------------------------+ | node_processes | list | List of processes running on each instance. | +----------------------+--------+---------------------------------------------+ | node_configs | dict | Configs dictionary, applied to instances. | +----------------------+--------+---------------------------------------------+ | volumes_per_node | int | Number of volumes mounted to each instance. | | | | 0 means use ephemeral drive. | +----------------------+--------+---------------------------------------------+ | volumes_size | int | Size of each volume (GB). | +----------------------+--------+---------------------------------------------+ | volumes_mount_prefix | string | Prefix added to mount path of each volume. | +----------------------+--------+---------------------------------------------+ | floating_ip_pool | string | Floating IP Pool name. All instances in the | | | | Node Group will have Floating IPs assigned | | | | from this pool. | +----------------------+--------+---------------------------------------------+ | count | int | Number of instances in this Node Group. | +----------------------+--------+---------------------------------------------+ | username | string | Username used by sahara to establish remote | | | | connections to instances. | +----------------------+--------+---------------------------------------------+ | configuration | dict | Merged dictionary of node configurations | | | | and cluster configurations. | +----------------------+--------+---------------------------------------------+ | storage_paths | list | List of directories where storage should be | | | | placed. | +----------------------+--------+---------------------------------------------+ Cluster ------- Contains all relevant info about cluster. This object is is provided to the plugin for both cluster creation and scaling. The "Cluster Lifecycle" section below further specifies which fields are filled at which moment. +----------------------------+--------+---------------------------------------+ | Property | Type | Description | +============================+========+=======================================+ | name | string | Cluster name. | +----------------------------+--------+---------------------------------------+ | project_id | string | OpenStack Project id where this | | | | Cluster is available. | +----------------------------+--------+---------------------------------------+ | plugin_name | string | Plugin name. 
| +----------------------------+--------+---------------------------------------+ | hadoop_version | string | Hadoop version running on instances. | +----------------------------+--------+---------------------------------------+ | default_image_id | string | OpenStack image used to boot | | | | instances. | +----------------------------+--------+---------------------------------------+ | node_groups | list | List of Node Groups. | +----------------------------+--------+---------------------------------------+ | cluster_configs | dict | Dictionary of Cluster scoped | | | | configurations. | +----------------------------+--------+---------------------------------------+ | cluster_template_id | string | Cluster Template used for Node Groups | | | | and Configurations. | +----------------------------+--------+---------------------------------------+ | user_keypair_id | string | OpenStack keypair added to instances | | | | to make them accessible for user. | +----------------------------+--------+---------------------------------------+ | neutron_management_network | string | Neutron network ID. Instances will | | | | get fixed IPs in this network. | +----------------------------+--------+---------------------------------------+ | anti_affinity | list | List of processes that will be run on | | | | different hosts. | +----------------------------+--------+---------------------------------------+ | description | string | Cluster Description. | +----------------------------+--------+---------------------------------------+ | info | dict | Dictionary for additional information.| +----------------------------+--------+---------------------------------------+ Validation Error ---------------- Describes what is wrong with one of the values provided by user. +---------------+--------+-----------------------------------------------+ | Property | Type | Description | +===============+========+===============================================+ | config | config | A config object that is not valid. | +---------------+--------+-----------------------------------------------+ | error_message | string | Message that describes what exactly is wrong. | +---------------+--------+-----------------------------------------------+ sahara-12.0.0/doc/source/_theme_rtd/0000775000175000017500000000000013656752227017251 5ustar zuulzuul00000000000000sahara-12.0.0/doc/source/_theme_rtd/theme.conf0000664000175000017500000000010713656752032021212 0ustar zuulzuul00000000000000[theme] inherit = nature stylesheet = nature.css pygments_style = tangosahara-12.0.0/doc/source/_theme_rtd/layout.html0000664000175000017500000000020513656752032021443 0ustar zuulzuul00000000000000{% extends "basic/layout.html" %} {% set css_files = css_files + ['_static/tweaks.css'] %} {% block relbar1 %}{% endblock relbar1 %}sahara-12.0.0/doc/source/conf.py0000664000175000017500000002126413656752032016435 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
import os import sys # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. #sys.path.insert(0, os.path.abspath('.')) sys.path.insert(0, os.path.abspath('../../sahara')) sys.path.append(os.path.abspath('..')) sys.path.append(os.path.abspath('../bin')) # -- General configuration ----------------------------------------------------- on_rtd = os.environ.get('READTHEDOCS', None) == 'True' # If your documentation needs a minimal Sphinx version, state it here. #needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = ['sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.todo', 'sphinx.ext.coverage', 'sphinx.ext.viewcode', 'sphinxcontrib.httpdomain', 'oslo_config.sphinxconfiggen', 'oslo_config.sphinxext', 'openstackdocstheme'] # openstackdocstheme options repository_name = 'openstack/sahara' use_storyboard = True config_generator_config_file = 'config-generator.conf' config_sample_basename = 'sahara' openstack_projects = [ 'barbican', 'castellan', 'designate', 'devstack', 'ironic', 'keystone', 'keystoneauth', 'kolla-ansible', 'neutron', 'nova', 'oslo.messaging', 'oslo.middleware', 'sahara-plugin-ambari', 'sahara-plugin-cdh', 'sahara-plugin-mapr', 'sahara-plugin-spark', 'sahara-plugin-storm', 'sahara-plugin-vanilla', 'tooz' ] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # Add any paths that contain "extra" files, such as .htaccess or # robots.txt. html_extra_path = ['_extra'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. copyright = u'2014, OpenStack Foundation' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = [] # The reST default role (used for this markup: `text`) to use for all documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. if on_rtd: html_theme_path = ['.'] html_theme = '_theme_rtd' html_theme = 'openstackdocs' # Theme options are theme-specific and customize the look and feel of a theme # further. 
For a list of options available for each theme, see the # documentation. html_theme_options = {"show_other_versions": "True",} # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". html_title = 'Sahara' # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". # html_static_path = ['_static'] # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. html_sidebars = { 'index': ['sidebarlinks.html', 'localtoc.html', 'searchbox.html', 'sourcelink.html'], '**': ['localtoc.html', 'relations.html', 'searchbox.html', 'sourcelink.html'] } # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_domain_indices = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. #html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. #html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'SaharaDoc' # -- Options for LaTeX output -------------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). #'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). #'pointsize': '10pt', # Additional stuff for the LaTeX preamble. #'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass [howto/manual]). latex_documents = [ ('index', 'saharadoc.tex', u'Sahara', u'OpenStack Foundation', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # If true, show page references after internal links. #latex_show_pagerefs = False # If true, show URL addresses after external links. #latex_show_urls = False # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. 
#latex_domain_indices = True # -- Options for manual page output -------------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'sahara', u'Sahara', [u'OpenStack Foundation'], 1) ] # If true, show URL addresses after external links. #man_show_urls = False # -- Options for Texinfo output ------------------------------------------------ # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'Sahara', u'Sahara', u'OpenStack Foundation', 'Sahara', 'Sahara', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. #texinfo_appendices = [] # If false, no module index is generated. #texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. #texinfo_show_urls = 'footnote' sahara-12.0.0/doc/source/intro/0000775000175000017500000000000013656752227016272 5ustar zuulzuul00000000000000sahara-12.0.0/doc/source/intro/overview.rst0000664000175000017500000002037413656752032020672 0ustar zuulzuul00000000000000Rationale ========= Introduction ------------ Apache Hadoop is an industry standard and widely adopted MapReduce implementation, it is one among a growing number of data processing frameworks. The aim of this project is to enable users to easily provision and manage clusters with Hadoop and other data processing frameworks on OpenStack. It is worth mentioning that Amazon has provided Hadoop for several years as Amazon Elastic MapReduce (EMR) service. Sahara aims to provide users with a simple means to provision Hadoop, Spark, and Storm clusters by specifying several parameters such as the framework version, cluster topology, hardware node details and more. After a user fills in all the parameters, sahara deploys the cluster in a few minutes. Also sahara provides means to scale an already provisioned cluster by adding or removing worker nodes on demand. The solution will address the following use cases: * fast provisioning of data processing clusters on OpenStack for development and quality assurance(QA). * utilization of unused compute power from a general purpose OpenStack IaaS cloud. * "Analytics as a Service" for ad-hoc or bursty analytic workloads (similar to AWS EMR). Key features are: * designed as an OpenStack component. * managed through a REST API with a user interface(UI) available as part of OpenStack Dashboard. * support for a variety of data processing frameworks: * multiple Hadoop vendor distributions. * Apache Spark and Storm. * pluggable system of Hadoop installation engines. * integration with vendor specific management tools, such as Apache Ambari and Cloudera Management Console. * predefined configuration templates with the ability to modify parameters. Details ------- The sahara product communicates with the following OpenStack services: * Dashboard (horizon) - provides a GUI with ability to use all of sahara's features. * Identity (keystone) - authenticates users and provides security tokens that are used to work with OpenStack, limiting a user's abilities in sahara to their OpenStack privileges. * Compute (nova) - used to provision VMs for data processing clusters. * Bare metal (ironic) - used to provision Baremetal nodes for data processing clusters. * Orchestration (heat) - used to provision and orchestrate the deployment of data processing clusters. 
* Image (glance) - stores VM images, each image containing an operating system and a pre-installed data processing distribution or framework. * Object Storage (swift) - can be used as storage for job binaries and data that will be processed or created by framework jobs. * Block Storage (cinder) - can be used to provision block storage for VM instances. * Networking (neutron) - provides networking services to data processing clusters. * DNS service (designate) - provides ability to communicate with cluster instances and Hadoop services by their hostnames. * Telemetry (ceilometer) - used to collect measures of cluster usage for metering and monitoring purposes. * Shared file systems (manila) - can be used for storage of framework job binaries and data that will be processed or created by jobs. * Key manager (barbican & castellan) - persists the authentication data like passwords and private keys in a secure storage. .. image:: ../images/openstack-interop.png :width: 960 :height: 720 :scale: 83 % :align: left General Workflow ---------------- Sahara will provide two levels of abstraction for the API and UI based on the addressed use cases: cluster provisioning and analytics as a service. For fast cluster provisioning a generic workflow will be as following: * select a Hadoop (or framework) version. * select a base image with or without pre-installed data processing framework: * for base images without a pre-installed framework, sahara will support pluggable deployment engines that integrate with vendor tooling. * define cluster configuration, including cluster size, topology, and framework parameters (for example, heap size): * to ease the configuration of such parameters, configurable templates are provided. * provision the cluster; sahara will provision nodes (VMs or baremetal), install and configure the data processing framework. * perform operations on the cluster; add or remove nodes. * terminate the cluster when it is no longer needed. For analytics as a service, a generic workflow will be as following: * select one of the predefined data processing framework versions. * configure a job: * choose the type of job: pig, hive, jar-file, etc. * provide the job script source or jar location. * select input and output data location. * set the limit for the cluster size. * execute the job: * all cluster provisioning and job execution will happen transparently to the user. * if using a transient cluster, it will be removed automatically after job completion. * get the results of computations (for example, from swift). User's Perspective ------------------ While provisioning clusters through sahara, the user operates on three types of entities: Node Group Templates, Cluster Templates and Clusters. A Node Group Template describes a group of nodes within cluster. It contains a list of processes that will be launched on each instance in a group. Also a Node Group Template may provide node scoped configurations for those processes. This kind of template encapsulates hardware parameters (flavor) for the node instance and configuration for data processing framework processes running on the node. A Cluster Template is designed to bring Node Group Templates together to form a Cluster. A Cluster Template defines what Node Groups will be included and how many instances will be created for each. Some data processing framework configurations can not be applied to a single node, but to a whole Cluster. A user can specify these kinds of configurations in a Cluster Template. 
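As a purely illustrative example (all names, versions, and IDs below are made
up; the authoritative field list is in the REST API reference), a node group
template might look roughly like this in JSON form:

.. sourcecode:: json

    {
        "name": "worker",
        "plugin_name": "vanilla",
        "hadoop_version": "2.7.1",
        "flavor_id": "2",
        "node_processes": ["datanode", "nodemanager"]
    }

and a cluster template then composes such node group templates and sets their
instance counts:

.. sourcecode:: json

    {
        "name": "small-cluster",
        "plugin_name": "vanilla",
        "hadoop_version": "2.7.1",
        "node_groups": [
            {"name": "master", "node_group_template_id": "...", "count": 1},
            {"name": "worker", "node_group_template_id": "...", "count": 3}
        ]
    }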
Sahara enables users to specify which processes should be added to an anti-affinity group within a Cluster Template. If a process is included into an anti-affinity group, it means that instances where this process is going to be launched should be scheduled to different hardware hosts. The Cluster entity represents a collection of instances that all have the same data processing framework installed. It is mainly characterized by an image with a pre-installed framework which will be used for cluster deployment. Users may choose one of the pre-configured Cluster Templates to start a Cluster. To get access to instances after a Cluster has started, the user should specify a keypair. Sahara provides several constraints on cluster framework topology. You can see all constraints in the documentation for the appropriate plugin. Each Cluster belongs to an Identity service project determined by the user. Users have access only to objects located in projects they have access to. Users can edit and delete only objects they have created or exist in their projects. Naturally, admin users have full access to every object. In this manner, sahara complies with general OpenStack access policy. Integration with Object Storage ------------------------------- The swift project provides the standard Object Storage service for OpenStack environments; it is an analog of the Amazon S3 service. As a rule it is deployed on bare metal machines. It is natural to expect data processing on OpenStack to access data stored there. Sahara provides this option with a file system implementation for swift `HADOOP-8545 `_ and `Change I6b1ba25b `_ which implements the ability to list endpoints for an object, account or container. This makes it possible to integrate swift with software that relies on data locality information to avoid network overhead. To get more information on how to enable swift support see :doc:`../user/hadoop-swift`. Pluggable Deployment and Monitoring ----------------------------------- In addition to the monitoring capabilities provided by vendor-specific Hadoop management tooling, sahara provides pluggable integration with external monitoring systems such as Nagios or Zabbix. Both deployment and monitoring tools can be installed on standalone VMs, thus allowing a single instance to manage and monitor several clusters at once. sahara-12.0.0/doc/source/intro/index.rst0000664000175000017500000000030413656752032020122 0ustar zuulzuul00000000000000=============== Sahara Overview =============== General overview of Sahara. .. toctree:: :maxdepth: 2 overview architecture Roadmap sahara-12.0.0/doc/source/intro/architecture.rst0000664000175000017500000000250713656752032021504 0ustar zuulzuul00000000000000Architecture ============ .. image:: ../images/sahara-architecture.svg :width: 960 :height: 635 :scale: 83 % :align: left The Sahara architecture consists of several components: * Auth component - responsible for client authentication & authorization, communicates with the OpenStack Identity service (keystone). * DAL - Data Access Layer, persists internal models in DB. * Secure Storage Access Layer - persists the authentication data like passwords and private keys in a secure storage. * Provisioning Engine - component responsible for communication with the OpenStack Compute (nova), Orchestration (heat), Block Storage (cinder), Image (glance), and DNS (designate) services. * Vendor Plugins - pluggable mechanism responsible for configuring and launching data processing frameworks on provisioned VMs. 
Existing management solutions like Apache Ambari and Cloudera Management Console could be utilized for that purpose as well. * EDP - :doc:`../user/edp` responsible for scheduling and managing data processing jobs on clusters provisioned by sahara. * REST API - exposes sahara functionality via REST HTTP interface. * Python Sahara Client - like other OpenStack components, sahara has its own python client. * Sahara pages - a GUI for the sahara is located in the OpenStack Dashboard (horizon). sahara-12.0.0/doc/source/install/0000775000175000017500000000000013656752227016605 5ustar zuulzuul00000000000000sahara-12.0.0/doc/source/install/installation-guide.rst0000664000175000017500000002047013656752032023130 0ustar zuulzuul00000000000000Sahara Installation Guide ========================= We recommend installing sahara in a way that will keep your system in a consistent state. We suggest the following options: * Install via `Fuel `_ * Install via :kolla-ansible-doc:`Kolla <>` * Install via `RDO `_ * Install into a virtual environment To install with Fuel -------------------- 1. Start by following the `MOS Quickstart `_ to install and setup OpenStack. 2. Enable the sahara service during installation. To install with Kolla --------------------- 1. Start by following the :kolla-ansible-doc:`Kolla Quickstart ` to install and setup OpenStack. 2. Enable the sahara service during installation. To install with RDO ------------------- 1. Start by following the `RDO Quickstart `_ to install and setup OpenStack. 2. Install sahara: .. sourcecode:: console # yum install openstack-sahara .. 3. Configure sahara as needed. The configuration file is located in ``/etc/sahara/sahara.conf``. For details see :doc:`Sahara Configuration Guide <../admin/configuration-guide>` 4. Create the database schema: .. sourcecode:: console # sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head .. 5. Go through :ref:`common_installation_steps` and make any necessary changes. 6. Start the sahara-api and sahara-engine services: .. sourcecode:: console # systemctl start openstack-sahara-api # systemctl start openstack-sahara-engine .. 7. *(Optional)* Enable sahara services to start on boot .. sourcecode:: console # systemctl enable openstack-sahara-api # systemctl enable openstack-sahara-engine .. To install into a virtual environment ------------------------------------- 1. First you need to install a number of packages with your OS package manager. The list of packages depends on the OS you use. For Ubuntu run: .. sourcecode:: console $ sudo apt-get install python-setuptools python-virtualenv python-dev .. For Fedora: .. sourcecode:: console $ sudo yum install gcc python-setuptools python-virtualenv python-devel .. For CentOS: .. sourcecode:: console $ sudo yum install gcc python-setuptools python-devel $ sudo easy_install pip $ sudo pip install virtualenv 2. Setup a virtual environment for sahara: .. sourcecode:: console $ virtualenv sahara-venv .. This will install a python virtual environment into ``sahara-venv`` directory in your current working directory. This command does not require super user privileges and can be executed in any directory where the current user has write permissions. 3. You can get a sahara archive from ``_ and install it using pip: .. sourcecode:: console $ sahara-venv/bin/pip install 'http://tarballs.openstack.org/sahara/sahara-master.tar.gz' .. Note that ``sahara-master.tar.gz`` contains the latest changes and might not be stable at the moment. 
We recommend browsing ``_ and selecting the latest stable release. For installation just execute (where replace the 'release' word with release name, e.g. 'mitaka'): .. sourcecode:: console $ sahara-venv/bin/pip install 'http://tarballs.openstack.org/sahara/sahara-stable-release.tar.gz' .. For example, you can get Sahara Mitaka release by executing: .. sourcecode:: console $ sahara-venv/bin/pip install 'http://tarballs.openstack.org/sahara/sahara-stable-mitaka.tar.gz' .. 4. After installation you should create a configuration file; as seen below it is possible to generate a sample one: .. sourcecode:: console $ SAHARA_SOURCE_DIR="/path/to/sahara/source" $ pushd $SAHARA_SOURCE_DIR $ tox -e genconfig $ popd $ cp $SAHARA_SOURCE_DIR/etc/sahara/sahara.conf.sample sahara-venv/etc/sahara.conf .. Make any necessary changes to ``sahara-venv/etc/sahara.conf``. For details see :doc:`Sahara Configuration Guide <../admin/configuration-guide>` .. _common_installation_steps: Common installation steps ------------------------- The steps below are common to both the RDO and virtual environment installations of sahara. 1. If you use sahara with a MySQL database, then for storing big job binaries in the sahara internal database you must configure the size of the maximum allowed packet. Edit the ``my.cnf`` file and change the ``max_allowed_packet`` parameter as follows: .. sourcecode:: ini ... [mysqld] ... max_allowed_packet = 256M .. Then restart the mysql server to ensure these changes are active. 2. Create the database schema: .. sourcecode:: console $ sahara-venv/bin/sahara-db-manage --config-file sahara-venv/etc/sahara.conf upgrade head .. 3. Start sahara services from different terminals: .. sourcecode:: console # first terminal $ sahara-venv/bin/sahara-api --config-file sahara-venv/etc/sahara.conf # second terminal $ sahara-venv/bin/sahara-engine --config-file sahara-venv/etc/sahara.conf .. .. _register-sahara-label: 4. For sahara to be accessible in the OpenStack Dashboard and for python-saharaclient to work properly you must register sahara in the Identity service catalog. For example: .. sourcecode:: console $ openstack service create --name sahara --description \ "Sahara Data Processing" data-processing $ openstack endpoint create --region RegionOne \ data-processing public http://10.0.0.2:8386/v1.1/%\(project_id\)s $ openstack endpoint create --region RegionOne \ data-processing internal http://10.0.0.2:8386/v1.1/%\(project_id\)s $ openstack endpoint create --region RegionOne \ data-processing admin http://10.0.0.2:8386/v1.1/%\(project_id\)s .. note:: You have to install the openstack-client package in order to execute ``openstack`` command. .. 5. For more information on configuring sahara with the OpenStack Dashboard please see :doc:`dashboard-guide`. Optional installation of default templates ------------------------------------------ Sahara bundles default templates that define simple clusters for the supported plugins. These templates may optionally be added to the sahara database using a simple CLI included with sahara. The default template CLI is described in detail in a *README* file included with the sahara sources at ``/db/templates/README.rst`` but it is summarized here. Flavor id values must be specified for the default templates included with sahara. The recommended configuration values below correspond to the *m1.medium* and *m1.large* flavors in a default OpenStack installation (if these flavors have been edited, their corresponding values will be different). 
Values for flavor_id should be added to ``/etc/sahara/sahara.conf`` or another configuration file in the sections shown here: .. sourcecode:: ini [DEFAULT] # Use m1.medium for {flavor_id} unless specified in another section flavor_id = 2 [cdh-5-default-namenode] # Use m1.large for {flavor_id} in the cdh-5-default-namenode template flavor_id = 4 [cdh-530-default-namenode] # Use m1.large for {flavor_id} in the cdh-530-default-namenode template flavor_id = 4 The above configuration values are included in a sample configuration file at ``/plugins/default_templates/template.conf`` The command to install all of the default templates is as follows, where ``$PROJECT_ID`` should be a valid project id and the above configuration values have been set in ``myconfig``: .. sourcecode:: console $ sahara-templates --config-file /etc/sahara/sahara.conf --config-file myconfig update -t $PROJECT_ID Help is available from the ``sahara-templates`` command: .. sourcecode:: console $ sahara-templates --help $ sahara-templates update --help Notes: ------ Ensure that your operating system is not blocking the sahara port (default: 8386). You may need to configure iptables in CentOS and other Linux distributions to allow this access. To get the list of all possible options run: .. sourcecode:: console $ sahara-venv/bin/python sahara-venv/bin/sahara-api --help $ sahara-venv/bin/python sahara-venv/bin/sahara-engine --help .. Further, consider reading :doc:`../intro/overview` for general sahara concepts and :doc:`../user/plugins` for specific plugin features/requirements. sahara-12.0.0/doc/source/install/dashboard-guide.rst0000664000175000017500000000531413656752032022356 0ustar zuulzuul00000000000000Sahara Dashboard Configuration Guide ==================================== After installing the Sahara dashboard, there are a few extra configurations that can be made. Dashboard configurations are applied through Horizon's local_settings.py file. The sample configuration file is available `from the Horizon repository. `_ 1. Networking ------------- Depending on the Networking backend (Neutron) used in the cloud, Sahara panels will determine automatically which input fields should be displayed. If you wish to disable floating IP options during node group template creation, add the following parameter: Example: .. sourcecode:: python SAHARA_FLOATING_IP_DISABLED = True .. 2. Different endpoint --------------------- Sahara UI panels normally use ``data-processing`` endpoint from Keystone to talk to Sahara service. In some cases it may be useful to switch to another endpoint, for example use locally installed Sahara instead of the one on the OpenStack controller. To switch the UI to another endpoint the endpoint should be registered in the first place. Local endpoint example: .. sourcecode:: console $ openstack service create --name sahara_local --description \ "Sahara Data Processing (local installation)" \ data_processing_local $ openstack endpoint create --region RegionOne \ data_processing_local public http://127.0.0.1:8386/v1.1/%\(project_id\)s $ openstack endpoint create --region RegionOne \ data_processing_local internal http://127.0.0.1:8386/v1.1/%\(project_id\)s $ openstack endpoint create --region RegionOne \ data_processing_local admin http://127.0.0.1:8386/v1.1/%\(project_id\)s .. Then the endpoint name should be changed in ``sahara.py`` under the module of `sahara-dashboard/sahara_dashboard/api/sahara.py `__. .. 
sourcecode:: python # "type" of Sahara service registered in keystone SAHARA_SERVICE = 'data_processing_local' 3. Hiding health check info --------------------------- Sahara UI panels normally contain some information about cluster health. If the relevant functionality has been disabled in the Sahara service, then operators may prefer to not have any references to health at all in the UI, since there would not be any usable health information in that case. The visibility of health check info can be toggled via the ``SAHARA_VERIFICATION_DISABLED`` parameter, whose default value is False, meaning that the health check info will be visible. Example: .. sourcecode:: python SAHARA_VERIFICATION_DISABLED = True .. sahara-12.0.0/doc/source/install/index.rst0000664000175000017500000000020113656752032020431 0ustar zuulzuul00000000000000================== Installation Guide ================== .. toctree:: :maxdepth: 2 installation-guide dashboard-guide sahara-12.0.0/doc/source/images/0000775000175000017500000000000013656752227016404 5ustar zuulzuul00000000000000sahara-12.0.0/doc/source/images/openstack-interop.png0000664000175000017500000011060313656752032022552 0ustar zuulzuul00000000000000PNG  IHDR_wIDATxtOrC4jQtd8RER~J g9 L(Ͷ8`5+EVlQ-Ez̔NJK 3߽o>/{s{>yHrsīJV}&٫dly  mXQ:к/Y1Y[/YGdݙ.{[ZdT폴6#Y3G!s=8:4ЃD"8ka]re^̸P~?hL>̏7}O8n6p Ldz6?}gZ4n$uWO6N.DCYD (66~D;RﰆKk9qL^n jcޡͿw|ឞA{o^'+eW4FCݳsncտU l3s@|lؓ3u6.GVA@Ke'kQBohzrͱ DE7>9) |ք.9HM'/mlasm='+**6@7ު OZu^Tߵ䠍M$o44daӽu(;0{[;KQm;S766Q\Pa/'%@~ILòGWqRbiZq!Yռ-,sBicKV=BpjinkcX]uu[wqR+Edm5[&*2ƍWț]VV>gͼ5 @jkt!xOxocˏR*-t {^B0#k  J=Tw A.RCpyw:I[fM]%Tmݳi$k"o6NWz}ؓ$x0|ߎ`J;\ X@mҕ Fa âG6@~k{+5m,iOL1c°YUHkJ,BeZo_k5o% S5槴@G|qRؾ{9@Xj s mƌ*m,5؅IѣwM ݌67pPtFϢXm,Zj(Vo)b/ PY(7_-Yykm옪1LKekQqPh<>P5Lj9iT6ꍫ>ЀU=z%m,ڹA a7nWyw0pAHT|`PYX\ Tv5u4o-o1<>Hoʁr&k4hclSBW?Ȼ %Ӈ(PٮG<0sPmlw/m,Z aZU5 lb}~p@Ae4\9Pjƌ9봱pI$F)6WHTևo`.q[yˡQF6Q6'WOT}p{yHPVv$C1YzoςcOr'w~ܧ>ז:=i@W3[@AW|-l׶-x .nGhl\=x{O0ΰi~cwKM\ʖs3jT}h"_?Ϛpve =3xi9ݓQ ^_^wwJE)ml6KF<ֿgW8ۘ5\O@6YmεL/]'O?H#H Q-\fՆ>Sk/~z(6X<6YmbV0χcNնٿ\hzW)L =.S%g^6mlqe=^F\âҬ߶W=åU֜N2)Rs )[7~B…gwA8?h-ͽg>~ ..3?뎯44vݦ?tO`=~Bϑ鱹Ǧ]s:~-|')om{_׆ߏd]fΪ;ZZ2N8k؅_}л~s]NAURW__r=AQ <27^ e[ S.Z\s$OTLg:Su@v=g. ~H;s-׋W^Z3o?W}RT{wˠjF{n5W'su}k](Ka *i_Hm>,oł.M]2wTQ}W}_c/tgb9_ vD{Ct U9=>=vf=h].ڛ>faV5 آ-oj]@tPqm2S@wR`.j,螓j@L[ujxVlu˥#SzTmt (p͈ ׸1(d=h:0²Kfv~-}iuRN[ڵsMST/v+Hi_(! PEP4޼h~zftyI-KJ۸R{L@ݶ_= ͧ5-Fwj"YjՎ+,s;CN=a/`R*9bPn?|NDƛ wChR2v+wuR| &gtop5e=K߱ND?BGW܂a:yN &3LЇg6H*7LΟK.x:hpAڟgbūP3wٸթzdTuydu:`V)ml[00݊:W+8jZ qr}K'l:;[Y?sM._e}ű\(uáRS W\v/l催44 \!u?5.'aJe-n jxn+~T٠#2 R cuWKB n*8kgxz`{Ǧtjj@]` t@RvsYJYdx:8s[BIzXlN=nw" tQmlۂ΅~vS{jĎkg jk7'8uyck P/rSdy|NJY.+wp=-«[ŕ ۸`? ϙ Ջ`w)YxۡTU|ocY:iܖu~D;Q6գm[-rҾMeu$4?Iu[S#^agҁ7նvtHz6tm-vq__:Gz>5 +6$Jm&jMX_jku\'jwb*^ f}h{Z JUBbGiɪ-@zUeeeOP0f778nhr %ko9Z[^^Rlc)}"xԇgjNysrքg2ʄJٶ}][ LLѸ4g,`*{: n(6Rlc-]TZ NM9ȠT_vQ{pƖ6ji(amd+#[hP*mz>OmR٩gW¯Vo-Ѕ*pP_9Q8n RY-"^[{& _eڣK.cnvwn10Eɚ[6}&-^ߏߟ5&c{*jk=SJ=NfI^@ث.-^_N.Pxzz**++uWŒ8)O6$9@KeW %@SULz`Y]ࠧdhc];v&6-8H%Ol_ܑ*^@~-͝{nM$^TLvsq_w̢߯W=lX*<φYym.wElC;{m[vs:1LJƞyF1ܮp٥W+Ր{5 Q8@]"M#P?jh7 (6ðMicG z/hh_t|$V5e{j e]{Q OЉywJRVVAɚKړgZBGC0N$wiy[;At=|=Ey · S1sfz M.6v׎ֽ.`ۃk7#ӽB/IcisU7~B6 ]+a}VU=AW=F{@俫jسmO&5tmLw/LÜ^'&&kg=`;)W}-\Ǟ ֬lypQ>4tYywm3bTyy'lROM=zhcmoh^ 6N&ڰ6+rR `L/hA-t2k=uˬ0ƕ{*Z]W>mchO dw~/NO M=K٩ĉDH -//~7ݫr(dO#&6 5 OV=pR@m(Y_KVWWw'qncB tv0O U.S@J{pvO [ 4G|. 
[binary payload of openstack-interop.png omitted]
sahara-12.0.0/doc/source/images/hadoop-cluster-example.jpg0000664000175000017500000011235013656752032023464 0ustar zuulzuul00000000000000
[binary payload of hadoop-cluster-example.jpg omitted]
t)T6I#.T4S[M'XVՋFT3HX줜:JOnA#i}̗m%6>qn?}OfukdSl}8J|}̗kwVK6MnO?}OfukdSl}8J|}̗kwVK6MnO emmէ?1 `)i T[嚫Y٣PŜ8 %v]JRJRJRJRJRJRJR\Rm\AfڛViAfڛViANtRWubg ^[Qcr q+~1&9]S߷}]0Ι!/+2?fʬ1܊l2c:7C>i?UVz毟wmV9ogYұ&fis57r"KS^**r$ $ u]-b2f0u CA9×lz|UkO}ݶZwmNI*ӏŊ-YBk~'ts5&AfKfŒQgHhI,HG~ߋRPiswϻZu<= {m]"`=%- ZD/( '} X4Mqwɱ0Hĭt2RBIu%;HZzUkO}ݶZwmG{Btmӷ4#i!Il2 iHjQJ S`*YfY$F[چԩ.y-uEn,jQ$iU?nv|UkO}ݶ&cD?ۨV1KZudqq~OI8ܭۿ}^]glEJ-XB'Uƿ9xtIyQGˋ%j^*>ڃg8;} " qfN~q؋iĀXB_w~:pr1h^Oҭ.`w;ۜJP߰ty 4I؎]%НmHvcΣe HJRJRJRJRJRJRJRJRJRJRJRJRUj2u3 mq\hͮ/s.OnUPL<>͕ޚT e 'J )J@-*᭙XmtK =GU!R`wA-n_y \mv+}COdT{=!A;cZOc:%01|mwඅ4VkTy) 읆ۨnE\cK-eiOH Oe؎z?_naqM?`^Q]&+j'ˮ*cI`Hw]b_kbNX}_S$HRP?GX߈k'tצY-ra!$ ;nT> S_2AYq/6B]bYn)B]@ G6(P{Ip}M\F璒HN)q"R2;Z.Cce,ctH#'W3M2AvJAJH>Ѿ*hEWuijb޶Jcⶴ -@ $P~DAnetyL:96.%,{+WzkalU tiVq-zcӐIPĥA$ݪ=6%̵ܥ2geY8h,' 㞌7m̧"tc]Ӝ!juJÐշ$ AkO7H-;Uk o Y:`;")޴Ž~|`Kћ:91+;YӢxHmHm3<6ĆS>\{CŅza}:l`ԋdݽ./6C6<FA=:݅i.ds3qg伆% *O}#sKWe]KioVVaa`7hwKw:&EyR}RPC?YRO~ZV{ݮvՌN`]L3#!.Q0 JP%Dd~R_қHe!id>dW,Ȑ2kOJ"Pi* "|BO*8;vCl`ӭפ>t6J=Ǖn0L{E_nR|6m/-QPp#w撢ʿ8eZ?)eYW bN޻:]w_r lfÐn%n?A.-¡=m]pd9%/Vi[z1ڟcZv;n#CIJY{Q|YƴVL*ڟ5:󌺘$2$ĠnTv!x[*,;V,6`,7lh;8PֆgOQeZm+͒>* Ȭq?tW. ߾fN_ e(fݕ?d,8Cy儠{T\hO[udaCli~4XB[~c$R!+ 95?xxǦdd fՀߣI&\ ̈́%7*y16c[cܤFcbK HI/ȗm1L*cȿ,Bk@eJGFJVečŗ!YY(pqK( ^}('V;ncJTsH'Ān:`mvheZaa@Z vĮ(RۿQ ^Fc˩p]D<=)7MpL lNJ@>K`a6ѲGֲ?(-?>j/|ҽHR|aSF;M:q[0,JmL斥᭔6$)@*_ ^sfk5RDӳI|àRRRRRRWKޏ)aְF}1 ,+-8̝nn9zɒaIlw<ɲGb7ԓ4x<">5H[e-bIh4;l CT4wY,+aoZ3qN[=N3YmW4x^yΨNi90ۺ_"p6oHjqٶ/0ԔԦ6`}n5o(g}Ջ'7RuIVsh'ӑЄ␒W7X5 4f*TJu!eH`8+ͰL!^{{U+ZunSUa#ǑL_45IxGM0oOk_Y8\Epjei[K HǽF2 xpҌYZ~?lyVR&2D Je%ّW' i(?T~#z,}O]Ws<>efʰe\kU-K,Y JpR;mvԐI}T U.~Q)ԌR~/gEɦYN8x>kaokjěaxSRbԅsJwA2Vvv\A6#_x_u^_1E{^,@ˀG.ܼ愂6^Rkqe+[':JVlhڙ׼}QfN=|BetҶlF㫳OmǽXN|گ595^0f ʀhNt:pomXnoT]__MeʳF7L坃smPB>?mXu}Cj\]̷N܉='5?RRRRRRRBAW?%rlY?od=)-ϒ %#(;!OpGl_MS+Ğf庑_*i=5iS&@`C'q(EvJ.W-ɲ=𫋶 )#bT'J$;S>‘+?_9?a7=.jR`Ynwqep ?sR;n~,>"2<2i}FUlۮqyp\Jy< i}#lq-x*T!u +ݰO"NޢWMJg(bQ:֩ 2oɄ"% w wdw7vޙH͎ǢFM Z$m)J 琽'd߶ OM, 1p.afb[@n:^WarNͣ]Kj~?8ŶKU1ڙimP)SRRQ}MC0"z;MHeN8D6!API#`E}31- #ȲDkt ⓳P ܑ&Vksm5 }%]G+u)!<(}zP\?,u$iCxH"&T.I$l#cGX?"zm/R w+.v__j * kϳ^d0.>C2Pv(R]-y,|V11ȱ۵;2c['?omM:'s *ڃfEz#i[ Ȇt#a-q_,Q\"ِhOFe'\qT^Z 4*+8NW-8{&-A-@^oSqj'Rg:)F? `߶7Abҫg6ռ^ݢy=طYBBLT{rQBK#snU֐0iAAK!{;Xtc;sH2L2[.6R 뺍#W<>qeL$qHT|  CRA%>>Tp"f$DLܭkt4x76 /H =4z>^'t?U(бI~"LSR-D9,wzf9Ep&Cȷxw rqqh;\A;@4PݎhK!ͶIlu ``FƂå@@1cEe)[$Cl#m}eY_`7FWiDH9u-x]wRw 1Di&C.!IKr" T: ee"g?=y{+RQk,$:YU~ByJc|H?Fˑr&@P驷TܒCVp<.cܟ4#ė)S \HGAՀFz|A;Wyeq #Ѽe:t W :i kޤ[2 (̲\D딘N4ꂀV{6åWXޥgj墙=ڇBorTU(% mn7DVp'E2t}("2azLz:^;ۆ,ZUsԬњ۴O'(U4InJ([wdnwwzjFm_Xt['ʢ]&m4H()y/p=Gz f,x-ļBJ+ Cݗ߸ IԜb-hPn?d/S߱PX0&>5Yu髚B60!MGs!gήUY̓.kOY “ ̎[ג穹d۶Ӎ=ߏ^ٹC\2!0Jɥݶ4)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)J)Jsahara-12.0.0/doc/source/images/sahara-architecture.svg0000664000175000017500000115000413656752032023037 0ustar zuulzuul00000000000000 image/svg+xml sahara-12.0.0/doc/source/user/0000775000175000017500000000000013656752227016115 5ustar zuulzuul00000000000000sahara-12.0.0/doc/source/user/overview.rst0000664000175000017500000000631013656752032020507 0ustar zuulzuul00000000000000Getting Started =============== Clusters -------- A cluster deployed by sahara consists of node groups. Node groups vary by their role, parameters and number of machines. The picture below illustrates an example of a Hadoop cluster consisting of 3 node groups each having a different role (set of processes). .. 
image:: ../images/hadoop-cluster-example.jpg Node group parameters include Hadoop parameters like ``io.sort.mb`` or ``mapred.child.java.opts``, and several infrastructure parameters like the flavor for instances or storage location (ephemeral drive or cinder volume). A cluster is characterized by its node groups and its parameters. Like a node group, a cluster has data processing framework and infrastructure parameters. An example of a cluster-wide Hadoop parameter is ``dfs.replication``. For infrastructure, an example could be image which will be used to launch cluster instances. Templates --------- In order to simplify cluster provisioning sahara employs the concept of templates. There are two kinds of templates: node group templates and cluster templates. The former is used to create node groups, the latter - clusters. Essentially templates have the very same parameters as corresponding entities. Their aim is to remove the burden of specifying all of the required parameters each time a user wants to launch a cluster. In the REST interface, templates have extended functionality. First you can specify node-scoped parameters, they will work as defaults for node groups. Also with the REST interface, during cluster creation a user can override template parameters for both cluster and node groups. Templates are portable - they can be exported to JSON files and imported either on the same deployment or on another one. To import an exported template, replace the placeholder values with appropriate ones. This can be accomplished easily through the CLI or UI, or manually editing the template file. Provisioning Plugins -------------------- A provisioning plugin is a component responsible for provisioning a data processing cluster. Generally each plugin is capable of provisioning a specific data processing framework or Hadoop distribution. Also the plugin can install management and/or monitoring tools for a cluster. Since framework configuration parameters vary depending on the distribution and the version, templates are always plugin and version specific. A template cannot be used if the plugin, or framework, versions are different than the ones they were created for. You may find the list of available plugins on that page: :doc:`plugins` Image Registry -------------- OpenStack starts VMs based on a pre-built image with an installed OS. The image requirements for sahara depend on the plugin and data processing framework version. Some plugins require just a basic cloud image and will install the framework on the instance from scratch. Some plugins might require images with pre-installed frameworks or Hadoop distributions. The Sahara Image Registry is a feature which helps filter out images during cluster creation. See :doc:`registering-image` for details on how to work with Image Registry. Features -------- Sahara has several interesting features. The full list could be found here: :doc:`features` sahara-12.0.0/doc/source/user/edp-s3.rst0000664000175000017500000000772013656752032017742 0ustar zuulzuul00000000000000============================== EDP with S3-like Object Stores ============================== Overview and rationale of S3 integration ======================================== Since the Rocky release, Sahara clusters have full support for interaction with S3-like object stores, for example Ceph Rados Gateway. Through the abstractions offered by EDP, a Sahara job execution may consume input data and job binaries stored in S3, as well as write back its output data to S3. 
The copying of job binaries from S3 to a cluster is performed by the botocore library. A job's input and output to and from S3 is handled by the Hadoop-S3A driver. It's also worth noting that the Hadoop-S3A driver may be more mature and performant than the Hadoop-SwiftFS driver (either as hosted by Apache or in the sahara-extra respository). Sahara clusters are also provisioned such that data in S3-like storage can also be accessed when manually interacting with the cluster; in other words: the needed libraries are properly situated. Considerations for deployers ============================ The S3 integration features can function without any specific deployment requirement. This is because the EDP S3 abstractions can point to an arbitrary S3 endpoint. Deployers may want to consider using Sahara's optional integration with secret storage to protect the S3 access and secret keys that users will provide. Also, if using Rados Gateway for S3, deployers may want to use Keystone for RGW auth so that users can simply request Keystone EC2 credentials to access RGW's S3. S3 user experience ================== Below, details about how to use the S3 integration features are discussed. EDP job binaries in S3 ---------------------- The ``url`` must be in the format ``s3://bucket/path/to/object``, similar to the format used for binaries in Swift. The ``extra`` structure must contain ``accesskey``, ``secretkey``, and ``endpoint``, which is the URL of the S3 service, including the protocol ``http`` or ``https``. As mentioned above, the binary will be copied to the cluster before execution, by use of the botocore library. This also means that the set of credentials used to access this binary may be entirely different than those for accessing a data source. EDP data sources in S3 ---------------------- The ``url`` should be in the format ``s3://bucket/path/to/object``, although upon execution the protocol will be automatically changed to ``s3a``. The ``credentials`` does not have any required values, although the following may be set: * ``accesskey`` and ``secretkey`` * ``endpoint``, which is the URL of the S3 service, without the protocl * ``ssl``, which must be a boolean * ``bucket_in_path``, to indicate whether the S3 service uses virtual-hosted-style or path-style URLs, and must be a boolean The values above are optional, as they may be set in the cluster's ``core-site.xml`` or as configuration values of the job execution, as follows, as dictated by the options understood by the Hadoop-S3A driver: * ``fs.s3a.access.key``, corresponding to ``accesskey`` * ``fs.s3a.secret.key``, corresponding to ``secretkey`` * ``fs.s3a.endpoint``, corresponding to ``endpoint`` * ``fs.s3a.connection.ssl.enabled``, corresponding to ``ssl`` * ``fs.s3a.path.style.access``, corresponding to ``bucket_in_path`` In the case of ``fs.s3a.path.style.access``, a default value is determined by the Hadoop-S3A driver if none is set: virtual-hosted-style URLs are assumed unless told otherwise, or if the endpoint is a raw IP address. Additional configuration values are supported by the Hadoop-S3A driver, and are discussed in its official documentation. It is recommended that the EDP data source abstraction is used, rather than handling bare arguments and configuration values. 
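For illustration, an S3 data source expressed through the EDP abstraction
might look like the JSON sketch below. The bucket, object path, endpoint and
key values are purely illustrative placeholders, and the exact request
envelope may differ slightly depending on the API version and client used:

.. sourcecode:: json

    {
        "name": "s3-input-example",
        "type": "s3",
        "url": "s3://mybucket/input/data.csv",
        "credentials": {
            "accesskey": "<access key>",
            "secretkey": "<secret key>",
            "endpoint": "rgw.example.org:8080",
            "ssl": false,
            "bucket_in_path": false
        }
    }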
If any S3 configuration values are to be set at execution time, including such situations in which those values are contained by the EDP data source abstraction, then ``edp.spark.adapt_for_swift`` or ``edp.java.adapt_for_oozie`` must be set to ``true`` as appropriate. sahara-12.0.0/doc/source/user/hadoop-swift.rst0000664000175000017500000001241513656752032021250 0ustar zuulzuul00000000000000.. _swift-integration-label: Swift Integration ================= Hadoop and Swift integration are the essential continuation of the Hadoop/OpenStack marriage. The key component to making this marriage work is the Hadoop Swift filesystem implementation. Although this implementation has been merged into the upstream Hadoop project, Sahara maintains a version with the most current features enabled. * The original Hadoop patch can be found at https://issues.apache.org/jira/browse/HADOOP-8545 * The most current Sahara maintained version of this patch can be found in the `Sahara Extra repository `_ * The latest compiled version of the jar for this component can be downloaded from https://tarballs.openstack.org/sahara-extra/dist/hadoop-openstack/master/ Now the latest version of this jar (which uses Keystone API v3) is used in the plugins' images automatically during build of these images. But for Ambari plugin we need to explicitly put this jar into /opt directory of the base image **before** cluster launching. Hadoop patching --------------- You may build the jar file yourself by choosing the latest patch from the Sahara Extra repository and using Maven to build with the pom.xml file provided. Or you may get the latest jar pre-built at https://tarballs.openstack.org/sahara-extra/dist/hadoop-openstack/master/ You will need to put this file into the hadoop libraries (e.g. /usr/lib/share/hadoop/lib, it depends on the plugin which you use) on each ResourceManager and NodeManager node (for Hadoop 2.x) in the cluster. Hadoop configurations --------------------- In general, when Sahara runs a job on a cluster it will handle configuring the Hadoop installation. In cases where a user might require more in-depth configuration all the data is set in the ``core-site.xml`` file on the cluster instances using this template: .. sourcecode:: xml ${name} + ${config} ${value} ${not mandatory description} There are two types of configs here: 1. General. The ``${name}`` in this case equals to ``fs.swift``. Here is the list of ``${config}``: * ``.impl`` - Swift FileSystem implementation. The ${value} is ``org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem`` * ``.connect.timeout`` - timeout for all connections by default: 15000 * ``.socket.timeout`` - how long the connection waits for responses from servers. by default: 60000 * ``.connect.retry.count`` - connection retry count for all connections. by default: 3 * ``.connect.throttle.delay`` - delay in millis between bulk (delete, rename, copy operations). by default: 0 * ``.blocksize`` - blocksize for filesystem. By default: 32Mb * ``.partsize`` - the partition size for uploads. By default: 4608*1024Kb * ``.requestsize`` - request size for reads in KB. By default: 64Kb 2. Provider-specific. The patch for Hadoop supports different cloud providers. The ``${name}`` in this case equals to ``fs.swift.service.${provider}``. Here is the list of ``${config}``: * ``.auth.url`` - authorization URL * ``.auth.endpoint.prefix`` - prefix for the service url, e.g. 
``/AUTH_`` * ``.tenant`` - project name * ``.username`` * ``.password`` * ``.domain.name`` - Domains can be used to specify users who are not in the project specified. * ``.domain.id`` - You can also specify domain using id. * ``.trust.id`` - Trusts are optionally used to scope the authentication tokens of the supplied user. * ``.http.port`` * ``.https.port`` * ``.region`` - Swift region is used when cloud has more than one Swift installation. If region param is not set first region from Keystone endpoint list will be chosen. If region param not found exception will be thrown. * ``.location-aware`` - turn On location awareness. Is false by default * ``.apikey`` * ``.public`` Example ------- For this example it is assumed that you have setup a Hadoop instance with a valid configuration and the Swift filesystem component. Furthermore there is assumed to be a Swift container named ``integration`` holding an object named ``temp``, as well as a Keystone user named ``admin`` with a password of ``swordfish``. The following example illustrates how to copy an object to a new location in the same container. We will use Hadoop's ``distcp`` command (http://hadoop.apache.org/docs/stable/hadoop-distcp/DistCp.html) to accomplish the copy. Note that the service provider for our Swift access is ``sahara``, and that we will not need to specify the project of our Swift container as it will be provided in the Hadoop configuration. Swift paths are expressed in Hadoop according to the following template: ``swift://${container}.${provider}/${object}``. For our example source this will appear as ``swift://integration.sahara/temp``. Let's run the job: .. sourcecode:: console $ hadoop distcp -D fs.swift.service.sahara.username=admin \ -D fs.swift.service.sahara.password=swordfish \ swift://integration.sahara/temp swift://integration.sahara/temp1 After that just confirm that ``temp1`` has been created in our ``integration`` container. Limitations ----------- **Note:** Please note that container names should be a valid URI. sahara-12.0.0/doc/source/user/statuses.rst0000664000175000017500000001222513656752032020516 0ustar zuulzuul00000000000000Sahara Cluster Statuses Overview ================================ All Sahara Cluster operations are performed in multiple steps. A Cluster object has a ``Status`` attribute which changes when Sahara finishes one step of operations and starts another one. Also a Cluster object has a ``Status description`` attribute which changes whenever Cluster errors occur. Sahara supports three types of Cluster operations: * Create a new Cluster * Scale/Shrink an existing Cluster * Delete an existing Cluster Creating a new Cluster ---------------------- 1. Validating ~~~~~~~~~~~~~ Before performing any operations with OpenStack environment, Sahara validates user input. There are two types of validations, that are done: * Check that a request contains all necessary fields and that the request does not violate any constraints like unique naming, etc. * Plugin check (optional). The provisioning Plugin may also perform any specific checks like a Cluster topology validation check. If any of the validations fails during creating, the Cluster object will still be kept in the database with an ``Error`` status. If any validations fails during scaling the ``Active`` Cluster, it will be kept with an ``Active`` status. In both cases status description will contain error messages about the reasons of failure. 2. 
InfraUpdating ~~~~~~~~~~~~~~~~ This status means that the Provisioning plugin is performing some infrastructure updates. 3. Spawning ~~~~~~~~~~~ Sahara sends requests to OpenStack for all resources to be created: * VMs * Volumes * Floating IPs (if Sahara is configured to use Floating IPs) It takes some time for OpenStack to schedule all the required VMs and Volumes, so sahara will wait until all of the VMs are in an ``Active`` state. 4. Waiting ~~~~~~~~~~ Sahara waits while VMs' operating systems boot up and all internal infrastructure components like networks and volumes are attached and ready to use. 5. Preparing ~~~~~~~~~~~~ Sahara prepares a Cluster for starting. This step includes generating the ``/etc/hosts`` file or changing ``/etc/resolv.conf`` file (if you use Designate service), so that all instances can access each other by a hostname. Also Sahara updates the ``authorized_keys`` file on each VM, so that VMs can communicate without passwords. 6. Configuring ~~~~~~~~~~~~~~ Sahara pushes service configurations to VMs. Both XML and JSON based configurations and environmental variables are set on this step. 7. Starting ~~~~~~~~~~~ Sahara is starting Hadoop services on Cluster's VMs. 8. Active ~~~~~~~~~ Active status means that a Cluster has started successfully and is ready to run EDP Jobs. Scaling/Shrinking an existing Cluster ------------------------------------- 1. Validating ~~~~~~~~~~~~~ Sahara checks the scale/shrink request for validity. The Plugin method called for performing Plugin specific checks is different from the validation method in creation. 2. Scaling ~~~~~~~~~~ Sahara performs database operations updating all affected existing Node Groups and creating new ones to join the existing Node Groups. 3. Adding Instances ~~~~~~~~~~~~~~~~~~~ Status is similar to ``Spawning`` in Cluster creation. Sahara adds required amount of VMs to the existing Node Groups and creates new Node Groups. 4. Configuring ~~~~~~~~~~~~~~ Status is similar to ``Configuring`` in Cluster creation. New instances are being configured in the same manner as already existing ones. The VMs in the existing Cluster are also updated with a new ``/etc/hosts`` file or ``/etc/resolv.conf`` file. 5. Decommissioning ~~~~~~~~~~~~~~~~~~ Sahara stops Hadoop services on VMs that will be deleted from a Cluster. Decommissioning a Data Node may take some time because Hadoop rearranges data replicas around the Cluster, so that no data will be lost after that Data Node is deleted. 6. Deleting Instances ~~~~~~~~~~~~~~~~~~~~~ Sahara sends requests to OpenStack to release unneeded resources: * VMs * Volumes * Floating IPs (if they are used) 7. Active ~~~~~~~~~ The same ``Active`` status as after Cluster creation. Deleting an existing Cluster ---------------------------- 1. Deleting ~~~~~~~~~~~ The only step, that releases all Cluster's resources and removes it from the database. 2. Force Deleting ~~~~~~~~~~~~~~~~~ In extreme cases the regular "Deleting" step may hang. Sahara APIv2 introduces the ability to force delete a Cluster. This prevents deleting from hanging but comes with the risk of orphaned resources. Error State ----------- If the Cluster creation fails, the Cluster will enter the ``Error`` state. This status means the Cluster may not be able to perform any operations normally. This cluster will stay in the database until it is manually deleted. The reason for failure may be found in the sahara logs. Also, the status description will contain information about the error. 
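The current status and status description of a cluster can be inspected at
any time from the command line. For example, assuming the ``openstack``
client with the sahara plugin is installed and the cluster is named
``my-cluster`` (the name is only an example):

.. sourcecode:: console

    $ openstack dataprocessing cluster show my-cluster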
If an error occurs during the ``Adding Instances`` operation, Sahara will first try to rollback this operation. If a rollback is impossible or fails itself, then the Cluster will also go into an ``Error`` state. If a rollback was successful, Cluster will get into an ``Active`` state and status description will contain a short message about the reason of ``Adding Instances`` failure. sahara-12.0.0/doc/source/user/registering-image.rst0000664000175000017500000000207513656752032022247 0ustar zuulzuul00000000000000Registering an Image ==================== Sahara deploys a cluster of machines using images stored in Glance. Each plugin has its own requirements on the image contents (see specific plugin documentation for details). Two general requirements for an image are to have the cloud-init and the ssh-server packages installed. Sahara requires the images to be registered in the Sahara Image Registry. A registered image must have two properties set: * username - a name of the default cloud-init user. * tags - certain tags mark image to be suitable for certain plugins. The tags depend on the plugin used, you can find required tags in the plugin's documentations. The default username specified for these images is different for each distribution: +--------------+------------+ | OS | username | +==============+============+ | Ubuntu 14.04 | ubuntu | +--------------+------------+ | Ubuntu 16.04 | ubuntu | +--------------+------------+ | Fedora | fedora | +--------------+------------+ | CentOS 7.x | centos | +--------------+------------+ sahara-12.0.0/doc/source/user/features.rst0000664000175000017500000002772213656752032020471 0ustar zuulzuul00000000000000Features Overview ================= This page highlights some of the most prominent features available in sahara. The guidance provided here is primarily focused on the runtime aspects of sahara. For discussions about configuring the sahara server processes please see the :doc:`../admin/configuration-guide` and :doc:`../admin/advanced-configuration-guide`. Anti-affinity ------------- One of the problems with running data processing applications on OpenStack is the inability to control where an instance is actually running. It is not always possible to ensure that two new virtual machines are started on different physical machines. As a result, any replication within the cluster is not reliable because all replicas may be co-located on one physical machine. To remedy this, sahara provides the anti-affinity feature to explicitly command all instances of the specified processes to spawn on different Compute nodes. This is especially useful for Hadoop data node processes to increase HDFS replica reliability. Starting with the Juno release, sahara can create server groups with the ``anti-affinity`` policy to enable this feature. Sahara creates one server group per cluster and assigns all instances with affected processes to this server group. Refer to the :nova-doc:`Nova Anti-Affinity documentation ` on how server group affinity filters work. This feature is supported by all plugins out of the box, and can be enabled during the cluster template creation. Block Storage support --------------------- OpenStack Block Storage (cinder) can be used as an alternative for ephemeral drives on instances. Using Block Storage volumes increases the reliability of data which is important for HDFS services. A user can set how many volumes will be attached to each instance in a node group and the size of each volume. 
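For example, a node group template requesting two 100 GB volumes per instance
could be created as sketched below. The template name, flavor and processes
are illustrative only, and option names may vary slightly between
python-saharaclient releases:

.. sourcecode:: console

    $ openstack dataprocessing node group template create \
        --name vanilla-worker-with-volumes --plugin vanilla \
        --plugin-version <plugin_version> --processes datanode nodemanager \
        --flavor 2 --volumes-per-node 2 --volumes-size 100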
All volumes are attached during cluster creation and scaling operations. If volumes are used for the HDFS storage it's important to make sure that the linear read-write operations as well as IOpS level are high enough to handle the workload. Volumes placed on the same compute host provide a higher level of performance. In some cases cinder volumes can be backed by a distributed storage like Ceph. In this type of installation it's important to make sure that the network latency and speed do not become a blocker for HDFS. Distributed storage solutions usually provide their own replication mechanism. HDFS replication should be disabled so that it does not generate redundant traffic across the cloud. Cluster scaling --------------- Cluster scaling allows users to change the number of running instances in a cluster without needing to recreate the cluster. Users may increase or decrease the number of instances in node groups or add new node groups to existing clusters. If a cluster fails to scale properly, all changes will be rolled back. Data locality ------------- For optimal performance, it is best for data processing applications to work on data local to the same rack, OpenStack Compute node, or virtual machine. Hadoop supports a data locality feature and can schedule jobs to task tracker nodes that are local for the input stream. In this manner the task tracker nodes can communicate directly with the local data nodes. Sahara supports topology configuration for HDFS and Object Storage data sources. For more information on configuring this option please see the :ref:`data_locality_configuration` documentation. Volume-to-instance locality --------------------------- Having an instance and an attached volume on the same physical host can be very helpful in order to achieve high-performance disk I/O operations. To achieve this, sahara provides access to the Block Storage volume instance locality functionality. For more information on using volume instance locality with sahara, please see the :ref:`volume_instance_locality_configuration` documentation. Distributed Mode ---------------- The :doc:`../install/installation-guide` suggests launching sahara in distributed mode with ``sahara-api`` and ``sahara-engine`` processes potentially running on several machines simultaneously. Running in distributed mode allows sahara to offload intensive tasks to the engine processes while keeping the API process free to handle requests. For an expanded discussion of configuring sahara to run in distributed mode please see the :ref:`distributed-mode-configuration` documentation. Hadoop HDFS and YARN High Availability -------------------------------------- Currently HDFS and YARN HA are supported with the HDP 2.4 plugin and CDH 5.7 plugins. Hadoop HDFS and YARN High Availability provide an architecture to ensure that HDFS or YARN will continue to work in the result of an active namenode or resourcemanager failure. They use 2 namenodes and 2 resourcemanagers in an active/passive state to provide this availability. In the HDP 2.4 plugin, the feature can be enabled through dashboard in the Cluster Template creation form. High availability is achieved by using a set of journalnodes, Zookeeper servers, and ZooKeeper Failover Controllers (ZKFC), as well as additional configuration changes to HDFS and other services that use HDFS. In the CDH 5.7 plugin, HA for HDFS and YARN is enabled through adding several HDFS_JOURNALNODE roles in the node group templates of cluster template. 
The HDFS HA is enabled when HDFS_JOURNALNODE roles are added and the roles setup meets below requirements: * HDFS_JOURNALNODE number is odd, and at least 3. * Zookeeper is enabled. * NameNode and SecondaryNameNode are on different physical hosts by setting anti-affinity. * Cluster has both ResourceManager and StandByResourceManager. In this case, the original SecondaryNameNode node will be used as the Standby NameNode. Networking support ------------------ Sahara supports neutron implementations of OpenStack Networking. Object Storage support ---------------------- Sahara can use OpenStack Object Storage (swift) to store job binaries and data sources utilized by its job executions and clusters. In order to leverage this support within Hadoop, including using Object Storage for data sources for EDP, Hadoop requires the application of a patch. For additional information about enabling this support, including patching Hadoop and configuring sahara, please refer to the :doc:`hadoop-swift` documentation. Shared Filesystem support ------------------------- Sahara can also use NFS shares through the OpenStack Shared Filesystem service (manila) to store job binaries and data sources. See :doc:`edp` for more information on this feature. Orchestration support --------------------- Sahara may use the `OpenStack Orchestration engine `_ (heat) to provision nodes for clusters. For more information about enabling Orchestration usage in sahara please see :ref:`orchestration-configuration`. DNS support ----------- Sahara can resolve hostnames of cluster instances by using DNS. For this Sahara uses designate. For additional details see :doc:`../admin/advanced-configuration-guide`. Kerberos support ---------------- You can protect your HDP or CDH cluster using MIT Kerberos security. To get more details about this, please, see documentation for the appropriate plugin. Plugin Capabilities ------------------- The following table provides a plugin capability matrix: +--------------------------+---------+----------+----------+-------+ | Feature/Plugin | Vanilla | HDP | Cloudera | Spark | +==========================+=========+==========+==========+=======+ | Neutron network | x | x | x | x | +--------------------------+---------+----------+----------+-------+ | Cluster Scaling | x | x | x | x | +--------------------------+---------+----------+----------+-------+ | Swift Integration | x | x | x | x | +--------------------------+---------+----------+----------+-------+ | Cinder Support | x | x | x | x | +--------------------------+---------+----------+----------+-------+ | Data Locality | x | x | x | x | +--------------------------+---------+----------+----------+-------+ | DNS | x | x | x | x | +--------------------------+---------+----------+----------+-------+ | Kerberos | \- | x | x | \- | +--------------------------+---------+----------+----------+-------+ | HDFS HA | \- | x | x | \- | +--------------------------+---------+----------+----------+-------+ | EDP | x | x | x | x | +--------------------------+---------+----------+----------+-------+ Security group management ------------------------- Security groups are sets of IP filter rules that are applied to an instance's networking. They are project specified, and project members can edit the default rules for their group and add new rules sets. All projects have a "default" security group, which is applied to instances that have no other security group defined. Unless changed, this security group denies all incoming traffic. 
Sahara allows you to control which security groups will be used for created instances. This can be done by providing the ``security_groups`` parameter for the node group or node group template. The default for this option is an empty list, which will result in the default project security group being used for the instances. Sahara may also create a security group for instances in the node group automatically. This security group will only contain open ports for required instance processes and the sahara engine. This option is useful for development and for when your installation is secured from outside environments. For production environments we recommend controlling the security group policy manually. Shared and protected resources support -------------------------------------- Sahara allows you to create resources that can be shared across projects and protected from modifications. To provide this feature all sahara objects that can be accessed through REST API have ``is_public`` and ``is_protected`` boolean fields. They can be initially created with enabled ``is_public`` and ``is_protected`` parameters or these parameters can be updated after creation. Both fields are set to ``False`` by default. If some object has its ``is_public`` field set to ``True``, it means that it's visible not only from the project in which it was created, but from any other projects too. If some object has its ``is_protected`` field set to ``True``, it means that it can not be modified (updated, scaled, canceled or deleted) unless this field is set to ``False``. Public objects created in one project can be used from other projects (for example, a cluster can be created from a public cluster template which is created in another project), but modification operations are possible only from the project in which object was created. Data source placeholders support -------------------------------- Sahara supports special strings that can be used in data source URLs. These strings will be replaced with appropriate values during job execution which allows the use of the same data source as an output multiple times. There are 2 types of string currently supported: * ``%JOB_EXEC_ID%`` - this string will be replaced with the job execution ID. * ``%RANDSTR(len)%`` - this string will be replaced with random string of lowercase letters of length ``len``. ``len`` must be less than 1024. After placeholders are replaced, the real URLs are stored in the ``data_source_urls`` field of the job execution object. This is used later to find objects created by a particular job run. Keypair replacement ------------------- A cluster allows users to create a new keypair to access to the running cluster when the cluster's keypair is deleted. But the name of new keypair should be same as the deleted one, and the new keypair will be available for cluster scaling. sahara-12.0.0/doc/source/user/building-guest-images.rst0000664000175000017500000000321113656752032023023 0ustar zuulzuul00000000000000.. _building-guest-images-label: Building guest images ===================== Sahara plugins represent different Hadoop or other Big Data platforms and requires specific guest images. While it is possible to use cloud images which only contain the basic software requirements (also called *plain images*), their usage slows down the cluster provisioning process and was not throughly tested recently. 
It is strongly advised to build images which contain the software required
to create the clusters for the various plugins and use them instead of
*plain images*.

Sahara currently provides two different tools for building guest images:

- ``sahara-image-pack`` is newer and supports more recent images;
- ``sahara-image-create`` is the older tool.

Both tools are described in detail in the next sections.
The documentation of each plugin describes which method is supported
for the various versions. If both are supported, ``sahara-image-pack``
is recommended.

General requirements for guest images
-------------------------------------

There are a few common requirements for all guest images, which must be
based on GNU/Linux distributions.

* cloud-init must be installed
* the ssh server must be installed
* the firewall, if enabled, must allow connections on port 22 (ssh)

The cloud images provided by the GNU/Linux distributions respect those
requirements.

Each plugin specifies additional requirements.
The image building tools provided by Sahara take care of preparing the
images with those additional requirements.

.. toctree::

   building-guest-images/sahara-image-pack
   building-guest-images/sahara-image-create
   building-guest-images/baremetal
sahara-12.0.0/doc/source/user/index.rst0000664000175000017500000000075613656752032017760 0ustar zuulzuul00000000000000==========
User Guide
==========

General concepts and guides
===========================

.. toctree::
   :maxdepth: 2

   overview
   quickstart
   dashboard-user-guide
   features
   registering-image
   statuses
   sahara-on-ironic

Plugins
=======

.. toctree::
   :maxdepth: 2

   plugins

Elastic Data Processing
=======================

.. toctree::
   :maxdepth: 2

   edp
   edp-s3

Guest Images
============

.. toctree::
   :maxdepth: 2

   building-guest-images
   hadoop-swift
sahara-12.0.0/doc/source/user/quickstart.rst0000664000175000017500000006706513656752032021045 0ustar zuulzuul00000000000000================
Quickstart guide
================

Launching a cluster via Sahara CLI commands
===========================================

This guide will help you set up a vanilla Hadoop cluster using a combination
of OpenStack command line tools and the sahara
:doc:`REST API <../reference/restapi>`.

1. Install sahara
-----------------

* If you want to hack the code, follow
  :doc:`../contributor/development-environment`.

OR

* If you just want to install and use sahara, follow
  :doc:`../install/installation-guide`.

2. Identity service configuration
---------------------------------

To use the OpenStack command line tools you should specify environment
variables with the configuration details for your OpenStack installation.
The following example assumes that the Identity service is at
``127.0.0.1:5000``, with a user ``admin`` in the ``admin`` project whose
password is ``nova``:

.. sourcecode:: console

    $ export OS_AUTH_URL=http://127.0.0.1:5000/v2.0/
    $ export OS_PROJECT_NAME=admin
    $ export OS_USERNAME=admin
    $ export OS_PASSWORD=nova

3. Upload an image to the Image service
---------------------------------------

You will need to upload a virtual machine image to the OpenStack Image
service. You can build the images yourself. This guide uses the latest
generated Ubuntu vanilla image, referred to as
``sahara-vanilla-latest-ubuntu.qcow2``, and the latest version of the
vanilla plugin as an example.

Build an image which works for the specific plugin.
Please refer to :ref:`building-guest-images-label` and to the
plugin-specific documentation.

Upload the generated image into the OpenStack Image service:

..
sourcecode:: console $ openstack image create sahara-vanilla-latest-ubuntu --disk-format qcow2 \ --container-format bare --file sahara-vanilla-latest-ubuntu.qcow2 +------------------+--------------------------------------+ | Field | Value | +------------------+--------------------------------------+ | checksum | 3da49911332fc46db0c5fb7c197e3a77 | | container_format | bare | | created_at | 2016-02-29T10:15:04.000000 | | deleted | False | | deleted_at | None | | disk_format | qcow2 | | id | 71b9eeac-c904-4170-866a-1f833ea614f3 | | is_public | False | | min_disk | 0 | | min_ram | 0 | | name | sahara-vanilla-latest-ubuntu | | owner | 057d23cddb864759bfa61d730d444b1f | | properties | | | protected | False | | size | 1181876224 | | status | active | | updated_at | 2016-02-29T10:15:41.000000 | | virtual_size | None | +------------------+--------------------------------------+ Remember the image name or save the image ID. This will be used during the image registration with sahara. You can get the image ID using the ``openstack`` command line tool as follows: .. sourcecode:: console $ openstack image list --property name=sahara-vanilla-latest-ubuntu +--------------------------------------+------------------------------+ | ID | Name | +--------------------------------------+------------------------------+ | 71b9eeac-c904-4170-866a-1f833ea614f3 | sahara-vanilla-latest-ubuntu | +--------------------------------------+------------------------------+ 4. Register the image with the sahara image registry ---------------------------------------------------- Now you will begin to interact with sahara by registering the virtual machine image in the sahara image registry. Register the image with the username ``ubuntu``. .. note:: The username will vary depending on the source image used. For more information, refer to the :doc:`registering-image` section. .. sourcecode:: console $ openstack dataprocessing image register sahara-vanilla-latest-ubuntu \ --username ubuntu Tag the image to inform sahara about the plugin and the version with which it shall be used. .. note:: For the steps below and the rest of this guide, substitute ```` with the appropriate version of your plugin. .. sourcecode:: console $ openstack dataprocessing image tags add sahara-vanilla-latest-ubuntu \ --tags vanilla +-------------+--------------------------------------+ | Field | Value | +-------------+--------------------------------------+ | Description | None | | Id | 71b9eeac-c904-4170-866a-1f833ea614f3 | | Name | sahara-vanilla-latest-ubuntu | | Status | ACTIVE | | Tags | , vanilla | | Username | ubuntu | +-------------+--------------------------------------+ 5. Create node group templates ------------------------------ Node groups are the building blocks of clusters in sahara. Before you can begin provisioning clusters you must define a few node group templates to describe node group configurations. You can get information about available plugins with the following command: .. sourcecode:: console $ openstack dataprocessing plugin list Also you can get information about available services for a particular plugin with the ``plugin show`` command. For example: .. 
sourcecode:: console $ openstack dataprocessing plugin show vanilla --plugin-version +---------------------+-----------------------------------------------------------------------------------------------------------------------+ | Field | Value | +---------------------+-----------------------------------------------------------------------------------------------------------------------+ | Description | The Apache Vanilla plugin provides the ability to launch upstream Vanilla Apache Hadoop cluster without any | | | management consoles. It can also deploy the Oozie component. | | Name | vanilla | | Required image tags | , vanilla | | Title | Vanilla Apache Hadoop | | | | | Service: | Available processes: | | | | | HDFS | datanode, namenode, secondarynamenode | | Hadoop | | | Hive | hiveserver | | JobFlow | oozie | | Spark | spark history server | | MapReduce | historyserver | | YARN | nodemanager, resourcemanager | +---------------------+-----------------------------------------------------------------------------------------------------------------------+ .. note:: These commands assume that floating IP addresses are being used. For more details on floating IP please see :ref:`floating_ip_management`. Create a master node group template with the command: .. sourcecode:: console $ openstack dataprocessing node group template create \ --name vanilla-default-master --plugin vanilla \ --plugin-version --processes namenode resourcemanager \ --flavor 2 --auto-security-group --floating-ip-pool +---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | Auto security group | True | | Availability zone | None | | Flavor id | 2 | | Floating ip pool | dbd8d1aa-6e8e-4a35-a77b-966c901464d5 | | Id | 0f066e14-9a73-4379-bbb4-9d9347633e31 | | Is boot from volume | False | | Is default | False | | Is protected | False | | Is proxy gateway | False | | Is public | False | | Name | vanilla-default-master | | Node processes | namenode, resourcemanager | | Plugin name | vanilla | | Security groups | None | | Use autoconfig | False | | Version | | | Volumes per node | 0 | +---------------------+--------------------------------------+ Create a worker node group template with the command: .. sourcecode:: console $ openstack dataprocessing node group template create \ --name vanilla-default-worker --plugin vanilla \ --plugin-version --processes datanode nodemanager \ --flavor 2 --auto-security-group --floating-ip-pool +---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | Auto security group | True | | Availability zone | None | | Flavor id | 2 | | Floating ip pool | dbd8d1aa-6e8e-4a35-a77b-966c901464d5 | | Id | 6546bf44-0590-4539-bfcb-99f8e2c11efc | | Is boot from volume | False | | Is default | False | | Is protected | False | | Is proxy gateway | False | | Is public | False | | Name | vanilla-default-worker | | Node processes | datanode, nodemanager | | Plugin name | vanilla | | Security groups | None | | Use autoconfig | False | | Version | | | Volumes per node | 0 | +---------------------+--------------------------------------+ You can also create node group templates setting a flag --boot-from-volume. This will tell the node group to boot its instances from a volume instead of the image. This feature allows for easier live migrations and improved performance. .. 
sourcecode:: console $ openstack dataprocessing node group template create \ --name vanilla-default-worker --plugin vanilla \ --plugin-version --processes datanode nodemanager \ --flavor 2 --auto-security-group --floating-ip-pool \ --boot-from-volume +---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | Auto security group | True | | Availability zone | None | | Flavor id | 2 | | Floating ip pool | dbd8d1aa-6e8e-4a35-a77b-966c901464d5 | | Id | 6546bf44-0590-4539-bfcb-99f8e2c11efc | | Is boot from volume | True | | Is default | False | | Is protected | False | | Is proxy gateway | False | | Is public | False | | Name | vanilla-default-worker | | Node processes | datanode, nodemanager | | Plugin name | vanilla | | Security groups | None | | Use autoconfig | False | | Version | | | Volumes per node | 0 | +---------------------+--------------------------------------+ Alternatively you can create node group templates from JSON files: If your environment does not use floating IPs, omit defining floating IP in the template below. Sample templates can be found here: `Sample Templates `_ Create a file named ``my_master_template_create.json`` with the following content: .. sourcecode:: json { "plugin_name": "vanilla", "hadoop_version": "", "node_processes": [ "namenode", "resourcemanager" ], "name": "vanilla-default-master", "floating_ip_pool": "", "flavor_id": "2", "auto_security_group": true } Create a file named ``my_worker_template_create.json`` with the following content: .. sourcecode:: json { "plugin_name": "vanilla", "hadoop_version": "", "node_processes": [ "nodemanager", "datanode" ], "name": "vanilla-default-worker", "floating_ip_pool": "", "flavor_id": "2", "auto_security_group": true } Use the ``openstack`` client to upload the node group templates: .. sourcecode:: console $ openstack dataprocessing node group template create \ --json my_master_template_create.json $ openstack dataprocessing node group template create \ --json my_worker_template_create.json List the available node group templates to ensure that they have been added properly: .. sourcecode:: console $ openstack dataprocessing node group template list --name vanilla-default +------------------------+--------------------------------------+-------------+--------------------+ | Name | Id | Plugin name | Version | +------------------------+--------------------------------------+-------------+--------------------+ | vanilla-default-master | 0f066e14-9a73-4379-bbb4-9d9347633e31 | vanilla | | | vanilla-default-worker | 6546bf44-0590-4539-bfcb-99f8e2c11efc | vanilla | | +------------------------+--------------------------------------+-------------+--------------------+ Remember the name or save the ID for the master and worker node group templates, as they will be used during cluster template creation. For example: * vanilla-default-master: ``0f066e14-9a73-4379-bbb4-9d9347633e31`` * vanilla-default-worker: ``6546bf44-0590-4539-bfcb-99f8e2c11efc`` 6. Create a cluster template ---------------------------- The last step before provisioning the cluster is to create a template that describes the node groups of the cluster. Create a cluster template with the command: .. 
sourcecode:: console $ openstack dataprocessing cluster template create \ --name vanilla-default-cluster \ --node-groups vanilla-default-master:1 vanilla-default-worker:3 +----------------+----------------------------------------------------+ | Field | Value | +----------------+----------------------------------------------------+ | Anti affinity | | | Description | None | | Id | 9d871ebd-88a9-40af-ae3e-d8c8f292401c | | Is default | False | | Is protected | False | | Is public | False | | Name | vanilla-default-cluster | | Node groups | vanilla-default-master:1, vanilla-default-worker:3 | | Plugin name | vanilla | | Use autoconfig | False | | Version | | +----------------+----------------------------------------------------+ Alternatively you can create cluster template from JSON file: Create a file named ``my_cluster_template_create.json`` with the following content: .. sourcecode:: json { "plugin_name": "vanilla", "hadoop_version": "", "node_groups": [ { "name": "worker", "count": 3, "node_group_template_id": "6546bf44-0590-4539-bfcb-99f8e2c11efc" }, { "name": "master", "count": 1, "node_group_template_id": "0f066e14-9a73-4379-bbb4-9d9347633e31" } ], "name": "vanilla-default-cluster", "cluster_configs": {} } Upload the cluster template using the ``openstack`` command line tool: .. sourcecode:: console $ openstack dataprocessing cluster template create --json my_cluster_template_create.json Remember the cluster template name or save the cluster template ID for use in the cluster provisioning command. The cluster ID can be found in the output of the creation command or by listing the cluster templates as follows: .. sourcecode:: console $ openstack dataprocessing cluster template list --name vanilla-default +-------------------------+--------------------------------------+-------------+--------------------+ | Name | Id | Plugin name | Version | +-------------------------+--------------------------------------+-------------+--------------------+ | vanilla-default-cluster | 9d871ebd-88a9-40af-ae3e-d8c8f292401c | vanilla | | +-------------------------+--------------------------------------+-------------+--------------------+ 7. Create cluster ----------------- Now you are ready to provision the cluster. This step requires a few pieces of information that can be found by querying various OpenStack services. Create a cluster with the command: .. 
sourcecode:: console $ openstack dataprocessing cluster create --name my-cluster-1 \ --cluster-template vanilla-default-cluster --user-keypair my_stack \ --neutron-network private --image sahara-vanilla-latest-ubuntu +----------------------------+----------------------------------------------------+ | Field | Value | +----------------------------+----------------------------------------------------+ | Anti affinity | | | Cluster template id | 9d871ebd-88a9-40af-ae3e-d8c8f292401c | | Description | | | Id | 1f0dc6f7-6600-495f-8f3a-8ac08cdb3afc | | Image | 71b9eeac-c904-4170-866a-1f833ea614f3 | | Is protected | False | | Is public | False | | Is transient | False | | Name | my-cluster-1 | | Neutron management network | fabe9dae-6fbd-47ca-9eb1-1543de325efc | | Node groups | vanilla-default-master:1, vanilla-default-worker:3 | | Plugin name | vanilla | | Status | Validating | | Use autoconfig | False | | User keypair id | my_stack | | Version | | +----------------------------+----------------------------------------------------+ Alternatively you can create a cluster template from a JSON file: Create a file named ``my_cluster_create.json`` with the following content: .. sourcecode:: json { "name": "my-cluster-1", "plugin_name": "vanilla", "hadoop_version": "", "cluster_template_id" : "9d871ebd-88a9-40af-ae3e-d8c8f292401c", "user_keypair_id": "my_stack", "default_image_id": "71b9eeac-c904-4170-866a-1f833ea614f3", "neutron_management_network": "fabe9dae-6fbd-47ca-9eb1-1543de325efc" } The parameter ``user_keypair_id`` with the value ``my_stack`` is generated by creating a keypair. You can create your own keypair in the OpenStack Dashboard, or through the ``openstack`` command line client as follows: .. sourcecode:: console $ openstack keypair create my_stack --public-key $PATH_TO_PUBLIC_KEY If sahara is configured to use neutron for networking, you will also need to include the ``--neutron-network`` argument in the ``cluster create`` command or the ``neutron_management_network`` parameter in ``my_cluster_create.json``. If your environment does not use neutron, you should omit these arguments. You can determine the neutron network id with the following command: .. sourcecode:: console $ openstack network list Create and start the cluster: .. sourcecode:: console $ openstack dataprocessing cluster create --json my_cluster_create.json Verify the cluster status by using the ``openstack`` command line tool as follows: .. sourcecode:: console $ openstack dataprocessing cluster show my-cluster-1 -c Status +--------+--------+ | Field | Value | +--------+--------+ | Status | Active | +--------+--------+ The cluster creation operation may take several minutes to complete. During this time the "status" returned from the previous command may show states other than ``Active``. A cluster also can be created with the ``wait`` flag. In that case the cluster creation command will not be finished until the cluster is moved to the ``Active`` state. 8. Run a MapReduce job to check Hadoop installation --------------------------------------------------- Check that your Hadoop installation is working properly by running an example job on the cluster manually. * Login to the NameNode (usually the master node) via ssh with the ssh-key used above: .. sourcecode:: console $ ssh -i my_stack.pem ubuntu@ * Switch to the hadoop user: .. sourcecode:: console $ sudo su hadoop * Go to the shared hadoop directory and run the simplest MapReduce example: .. 
sourcecode:: console

    $ cd /opt/hadoop-/share/hadoop/mapreduce
    $ /opt/hadoop-/bin/hadoop jar hadoop-mapreduce-examples-.jar pi 10 100

Congratulations! Your Hadoop cluster is ready to use, running on your
OpenStack cloud.

Elastic Data Processing (EDP)
=============================

Job Binaries are the entities in which you define or upload the source code
(mains and libraries) for your job.

First you need to upload your binary file or script to a swift container and
then register it in Sahara with the command:

.. code:: bash

    (openstack) dataprocessing job binary create --url "swift://integration.sahara/hive.sql" \
        --username username --password password --description "My first job binary" hive-binary

Data Sources
------------

Data Sources are entities where the input and output from your jobs are
housed. You can create data sources which are related to Swift, Manila or
HDFS. You need to set the type of the data source (swift, hdfs, manila,
maprfs), its name and its URL.

The next two commands will create input and output data sources in swift.

.. code:: bash

    $ openstack dataprocessing data source create --type swift --username admin --password admin \
        --url "swift://integration.sahara/input.txt" input

    $ openstack dataprocessing data source create --type swift --username admin --password admin \
        --url "swift://integration.sahara/output.txt" output

If you want to create data sources in HDFS, use valid HDFS URLs:

.. code:: bash

    $ openstack dataprocessing data source create --type hdfs --url "hdfs://tmp/input.txt" input

    $ openstack dataprocessing data source create --type hdfs --url "hdfs://tmp/output.txt" output

Job Templates (Jobs in API)
---------------------------

In this step you need to create a job template. You have to set the type of
the job template using the `type` parameter. Choose the main library using
the job binary which was created in the previous step and set a name for the
job template.

Example of the command:

.. code:: bash

    $ openstack dataprocessing job template create --type Hive \
        --name hive-job-template --main hive-binary

Jobs (Job Executions in API)
----------------------------

This is the last step in our guide. In this step you need to launch your job.

You need to pass the following arguments:

* The name or ID of input/output data sources for the job
* The name or ID of the job template
* The name or ID of the cluster on which to run the job

For instance:

.. code:: bash

    $ openstack dataprocessing job execute --input input --output output \
        --job-template hive-job-template --cluster my-first-cluster

You can check the status of your job with the command:

.. code:: bash

    $ openstack dataprocessing job show

Once the job is marked as successful you can check the output data source.
It will contain the output data of this job.

Congratulations!
sahara-12.0.0/doc/source/user/dashboard-user-guide.rst0000664000175000017500000004625713656752032022645 0ustar zuulzuul00000000000000Sahara (Data Processing) UI User Guide
======================================

This guide assumes that you already have the sahara service and Horizon
dashboard up and running. Don't forget to make sure that sahara is
registered in Keystone. If you require assistance with that, please see the
`installation guide <../install/installation-guide.html>`_.

The sections below give a panel by panel overview of setting up clusters
and running jobs. For a description of using the guided cluster and job
tools, look at `Launching a cluster via the Cluster Creation Guide`_ and
`Running a job via the Job Execution Guide`_.
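As a quick sanity check before working through the panels below, you can
confirm from the command line that the sahara endpoint is registered; the
service type is normally ``data-processing``, but adjust the filter if your
deployment uses a different name:

.. sourcecode:: console

    $ openstack service list | grep -i processing
    $ openstack endpoint list --service data-processing
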
Launching a cluster via the sahara UI ------------------------------------- Registering an Image -------------------- 1) Navigate to the "Project" dashboard, then to the "Data Processing" tab, then click on the "Clusters" panel and finally the "Image Registry" tab. 2) From that page, click on the "Register Image" button at the top right 3) Choose the image that you'd like to register with sahara 4) Enter the username of the cloud-init user on the image 5) Choose plugin and version to make the image available only for the intended clusters 6) Click the "Done" button to finish the registration Create Node Group Templates --------------------------- 1) Navigate to the "Project" dashboard, then to the "Data Processing" tab, then click on the "Clusters" panel and then the "Node Group Templates" tab. 2) From that page, click on the "Create Template" button at the top right 3) Choose your desired Plugin name and Version from the dropdowns and click "Next" 4) Give your Node Group Template a name (description is optional) 5) Choose a flavor for this template (based on your CPU/memory/disk needs) 6) Choose the storage location for your instance, this can be either "Ephemeral Drive" or "Cinder Volume". If you choose "Cinder Volume", you will need to add additional configuration 7) Switch to the Node processes tab and choose which processes should be run for all instances that are spawned from this Node Group Template 8) Click on the "Create" button to finish creating your Node Group Template Create a Cluster Template ------------------------- 1) Navigate to the "Project" dashboard, then to the "Data Processing" tab, then click on the "Clusters" panel and finally the "Cluster Templates" tab. 2) From that page, click on the "Create Template" button at the top right 3) Choose your desired Plugin name and Version from the dropdowns and click "Next" 4) Under the "Details" tab, you must give your template a name 5) Under the "Node Groups" tab, you should add one or more nodes that can be based on one or more templates - To do this, start by choosing a Node Group Template from the dropdown and click the "+" button - You can adjust the number of nodes to be spawned for this node group via the text box or the "-" and "+" buttons - Repeat these steps if you need nodes from additional node group templates 6) Optionally, you can adjust your configuration further by using the "General Parameters", "HDFS Parameters" and "MapReduce Parameters" tabs 7) If you have Designate DNS service you can choose the domain name in "DNS" tab for internal and external hostname resolution 8) Click on the "Create" button to finish creating your Cluster Template Launching a Cluster ------------------- 1) Navigate to the "Project" dashboard, then to the "Data Processing" tab, then click on the "Clusters" panel and lastly, click on the "Clusters" tab. 
2) Click on the "Launch Cluster" button at the top right 3) Choose your desired Plugin name and Version from the dropdowns and click "Next" 4) Give your cluster a name (required) 5) Choose which cluster template should be used for your cluster 6) Choose the image that should be used for your cluster (if you do not see any options here, see `Registering an Image`_ above) 7) Optionally choose a keypair that can be used to authenticate to your cluster instances 8) Click on the "Create" button to start your cluster - Your cluster's status will display on the Clusters table - It will likely take several minutes to reach the "Active" state Scaling a Cluster ----------------- 1) From the Data Processing/Clusters page (Clusters tab), click on the "Scale Cluster" button of the row that contains the cluster that you want to scale 2) You can adjust the numbers of instances for existing Node Group Templates 3) You can also add a new Node Group Template and choose a number of instances to launch - This can be done by selecting your desired Node Group Template from the dropdown and clicking the "+" button - Your new Node Group will appear below and you can adjust the number of instances via the text box or the "+" and "-" buttons 4) To confirm the scaling settings and trigger the spawning/deletion of instances, click on "Scale" Elastic Data Processing (EDP) ----------------------------- Data Sources ------------ Data Sources are where the input and output from your jobs are housed. 1) From the Data Processing/Jobs page (Data Sources tab), click on the "Create Data Source" button at the top right 2) Give your Data Source a name 3) Enter the URL of the Data Source - For a swift object, enter / (ie: *mycontainer/inputfile*). sahara will prepend *swift://* for you - For an HDFS object, enter an absolute path, a relative path or a full URL: + */my/absolute/path* indicates an absolute path in the cluster HDFS + *my/path* indicates the path */user/hadoop/my/path* in the cluster HDFS assuming the defined HDFS user is *hadoop* + *hdfs://host:port/path* can be used to indicate any HDFS location 4) Enter the username and password for the Data Source (also see `Additional Notes`_) 5) Enter an optional description 6) Click on "Create" 7) Repeat for additional Data Sources Job Binaries ------------ Job Binaries are where you define/upload the source code (mains and libraries) for your job. 1) From the Data Processing/Jobs (Job Binaries tab), click on the "Create Job Binary" button at the top right 2) Give your Job Binary a name (this can be different than the actual filename) 3) Choose the type of storage for your Job Binary - For "swift", enter the URL of your binary (/) as well as the username and password (also see `Additional Notes`_) - For "manila", choose the share and enter the path for the binary in this share. This assumes that you have already stored that file in the appropriate path on the share. The share will be automatically mounted to any cluster nodes which require access to the file, if it is not mounted already. - For "Internal database", you can choose from "Create a script" or "Upload a new file" (**only API v1.1**) 4) Enter an optional description 5) Click on "Create" 6) Repeat for additional Job Binaries Job Templates (Known as "Jobs" in the API) ------------------------------------------ Job templates are where you define the type of job you'd like to run as well as which "Job Binaries" are required. 
1) From the Data Processing/Jobs page (Job Templates tab), click on the "Create Job Template" button at the top right 2) Give your Job Template a name 3) Choose the type of job you'd like to run 4) Choose the main binary from the dropdown - This is required for Hive, Pig, and Spark jobs - Other job types do not use a main binary 5) Enter an optional description for your Job Template 6) Click on the "Libs" tab and choose any libraries needed by your job template - MapReduce and Java jobs require at least one library - Other job types may optionally use libraries 7) Click on "Create" Jobs (Known as "Job Executions" in the API) ------------------------------------------- Jobs are what you get by "Launching" a job template. You can monitor the status of your job to see when it has completed its run 1) From the Data Processing/Jobs page (Job Templates tab), find the row that contains the job template you want to launch and click either "Launch on New Cluster" or "Launch on Existing Cluster" the right side of that row 2) Choose the cluster (already running--see `Launching a Cluster`_ above) on which you would like the job to run 3) Choose the Input and Output Data Sources (Data Sources defined above) 4) If additional configuration is required, click on the "Configure" tab - Additional configuration properties can be defined by clicking on the "Add" button - An example configuration entry might be mapred.mapper.class for the Name and org.apache.oozie.example.SampleMapper for the Value 5) Click on "Launch". To monitor the status of your job, you can navigate to the Data Processing/Jobs panel and click on the Jobs tab. 6) You can relaunch a Job from the Jobs page by using the "Relaunch on New Cluster" or "Relaunch on Existing Cluster" links - Relaunch on New Cluster will take you through the forms to start a new cluster before letting you specify input/output Data Sources and job configuration - Relaunch on Existing Cluster will prompt you for input/output Data Sources as well as allow you to change job configuration before launching the job Example Jobs ------------ There are sample jobs located in the sahara repository. In this section, we will give a walkthrough on how to run those jobs via the Horizon UI. These steps assume that you already have a cluster up and running (in the "Active" state). You may want to clone into https://opendev.org/openstack/sahara-tests/ so that you will have all of the source code and inputs stored locally. 
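For example, a local copy of the repository (and of the example files
referenced in the steps below) can be fetched with git:

.. sourcecode:: console

    $ git clone https://opendev.org/openstack/sahara-tests/
    $ ls sahara-tests/sahara_tests/scenario/defaults/edp-examples/
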
1) Sample Pig job - https://opendev.org/openstack/sahara-tests/src/branch/master/sahara_tests/scenario/defaults/edp-examples/edp-pig/cleanup-string/example.pig - Load the input data file from https://opendev.org/openstack/sahara-tests/src/branch/master/sahara_tests/scenario/defaults/edp-examples/edp-pig/cleanup-string/data/input into swift - Click on Project/Object Store/Containers and create a container with any name ("samplecontainer" for our purposes here) - Click on Upload Object and give the object a name ("piginput" in this case) - Navigate to Data Processing/Jobs/Data Sources, Click on Create Data Source - Name your Data Source ("pig-input-ds" in this sample) - Type = Swift, URL samplecontainer/piginput, fill-in the Source username/password fields with your username/password and click "Create" - Create another Data Source to use as output for the job - Name = pig-output-ds, Type = Swift, URL = samplecontainer/pigoutput, Source username/password, "Create" - Store your Job Binaries in Swift (you can choose another type of storage if you want) - Navigate to Project/Object Store/Containers, choose "samplecontainer" - Click on Upload Object and find example.pig at /sahara-tests/scenario/defaults/edp-examples/ edp-pig/cleanup-string/, name it "example.pig" (or other name). The Swift path will be swift://samplecontainer/example.pig - Click on Upload Object and find edp-pig-udf-stringcleaner.jar at /sahara-tests/scenario/defaults/edp-examples/ edp-pig/cleanup-string/, name it "edp-pig-udf-stringcleaner.jar" (or other name). The Swift path will be swift://samplecontainer/edp-pig-udf-stringcleaner.jar - Navigate to Data Processing/Jobs/Job Binaries, Click on Create Job Binary - Name = example.pig, Storage type = Swift, URL = samplecontainer/example.pig, Username = , Password = - Create another Job Binary: Name = edp-pig-udf-stringcleaner.jar, Storage type = Swift, URL = samplecontainer/edp-pig-udf-stringcleaner.jar, Username = , Password = - Create a Job Template - Navigate to Data Processing/Jobs/Job Templates, Click on Create Job Template - Name = pigsample, Job Type = Pig, Choose "example.pig" as the main binary - Click on the "Libs" tab and choose "edp-pig-udf-stringcleaner.jar", then hit the "Choose" button beneath the dropdown, then click on "Create" - Launch your job - To launch your job from the Job Templates page, click on the down arrow at the far right of the screen and choose "Launch on Existing Cluster" - For the input, choose "pig-input-ds", for output choose "pig-output-ds". Also choose whichever cluster you'd like to run the job on - For this job, no additional configuration is necessary, so you can just click on "Launch" - You will be taken to the "Jobs" page where you can see your job progress through "PENDING, RUNNING, SUCCEEDED" phases - When your job finishes with "SUCCEEDED", you can navigate back to Object Store/Containers and browse to the samplecontainer to see your output. It should be in the "pigoutput" folder 2) Sample Spark job - https://opendev.org/openstack/sahara-tests/src/branch/master/sahara_tests/scenario/defaults/edp-examples/edp-spark You can clone into https://opendev.org/openstack/sahara-tests/ for quicker access to the files for this sample job. 
- Store the Job Binary in Swift (you can choose another type of storage if you want) - Click on Project/Object Store/Containers and create a container with any name ("samplecontainer" for our purposes here) - Click on Upload Object and find spark-wordcount.jar at /sahara-tests/scenario/defaults/edp-examples/ edp-spark/, name it "spark-wordcount.jar" (or other name). The Swift path will be swift://samplecontainer/spark-wordcount.jar - Navigate to Data Processing/Jobs/Job Binaries, Click on Create Job Binary - Name = sparkexample.jar, Storage type = Swift, URL = samplecontainer/spark-wordcount.jar, Username = , Password = - Create a Job Template - Name = sparkexamplejob, Job Type = Spark, Main binary = Choose sparkexample.jar, Click "Create" - Launch your job - To launch your job from the Job Templates page, click on the down arrow at the far right of the screen and choose "Launch on Existing Cluster" - Choose whichever cluster you'd like to run the job on - Click on the "Configure" tab - Set the main class to be: sahara.edp.spark.SparkWordCount - Under Arguments, click Add and fill url for the input file, once more click Add and fill url for the output file. - Click on Launch - You will be taken to the "Jobs" page where you can see your job progress through "PENDING, RUNNING, SUCCEEDED" phases - When your job finishes with "SUCCEEDED", you can see your results in your output file. - The stdout and stderr files of the command used for executing your job are located at /tmp/spark-edp// on Spark master node in case of Spark clusters, or on Spark JobHistory node in other cases like Vanilla, CDH and so on. Additional Notes ---------------- 1) Throughout the sahara UI, you will find that if you try to delete an object that you will not be able to delete it if another object depends on it. An example of this would be trying to delete a Job Template that has an existing Job. In order to be able to delete that job, you would first need to delete any Job Templates that relate to that job. 2) In the examples above, we mention adding your username/password for the swift Data Sources. It should be noted that it is possible to configure sahara such that the username/password credentials are *not* required. For more information on that, please refer to: :doc:`Sahara Advanced Configuration Guide <../admin/advanced-configuration-guide>` Launching a cluster via the Cluster Creation Guide -------------------------------------------------- 1) Under the Data Processing group, choose "Clusters" and then click on the "Clusters" tab. The "Cluster Creation Guide" button is above that table. Click on it. 2) Click on the "Choose Plugin" button then select the cluster type from the Plugin Name dropdown and choose your target version. When done, click on "Select" to proceed. 3) Click on "Create a Master Node Group Template". Give your template a name, choose a flavor and choose which processes should run on nodes launched for this node group. The processes chosen here should be things that are more server-like in nature (namenode, oozieserver, spark master, etc). Optionally, you can set other options here such as availability zone, storage, security and process specific parameters. Click on "Create" to proceed. 4) Click on "Create a Worker Node Group Template". Give your template a name, choose a flavor and choose which processes should run on nodes launched for this node group. Processes chosen here should be more worker-like in nature (datanode, spark slave, task tracker, etc). 
Optionally, you can set other options here such as availability zone, storage, security and process specific parameters. Click on "Create" to proceed. 5) Click on "Create a Cluster Template". Give your template a name. Next, click on the "Node Groups" tab and enter the count for each of the node groups (these are pre-populated from steps 3 and 4). It would be common to have 1 for the "master" node group type and some larger number of "worker" instances depending on you desired cluster size. Optionally, you can also set additional parameters for cluster-wide settings via the other tabs on this page. Click on "Create" to proceed. 6) Click on "Launch a Cluster". Give your cluster a name and choose the image that you want to use for all instances in your cluster. The cluster template that you created in step 5 is already pre-populated. If you want ssh access to the instances of your cluster, select a keypair from the dropdown. Click on "Launch" to proceed. You will be taken to the Clusters panel where you can see your cluster progress toward the Active state. Running a job via the Job Execution Guide ----------------------------------------- 1) Under the Data Processing group, choose "Jobs" and then click on the "Jobs" tab. The "Job Execution Guide" button is above that table. Click on it. 2) Click on "Select type" and choose the type of job that you want to run. 3) If your job requires input/output data sources, you will have the option to create them via the "Create a Data Source" button (Note: This button will not be shown for job types that do not require data sources). Give your data source a name and choose the type. If you have chosen swift, you may also enter the username and password. Enter the URL for your data source. For more details on what the URL should look like, see `Data Sources`_. 4) Click on "Create a job template". Give your job template a name. Depending on the type of job that you've chosen, you may need to select your main binary and/or additional libraries (available from the "Libs" tab). If you have not yet uploaded the files to run your program, you can add them via the "+" icon next to the "Choose a main binary" select box. 5) Click on "Launch job". Choose the active cluster where you want to run you job. Optionally, you can click on the "Configure" tab and provide any required configuration, arguments or parameters for your job. Click on "Launch" to execute your job. You will be taken to the Jobs tab where you can monitor the state of your job as it progresses. sahara-12.0.0/doc/source/user/plugins.rst0000664000175000017500000000366113656752032020330 0ustar zuulzuul00000000000000Provisioning Plugins ==================== This page lists all available provisioning plugins. In general a plugin enables sahara to deploy a specific data processing framework (for example, Hadoop) or distribution, and allows configuration of topology and management/monitoring tools. 
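The plugins actually enabled in a particular deployment may be a subset of
the list below; they can be checked with the command line client, for
example:

.. sourcecode:: console

    $ openstack dataprocessing plugin list
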
The plugins currently developed as part of the official Sahara project are: * :sahara-plugin-ambari-doc:`Ambari Plugin <>` - deploys Hortonworks Data Platform * :sahara-plugin-cdh-doc:`CDH Plugin <>` - deploys Cloudera Hadoop * :sahara-plugin-mapr-doc:`MapR Plugin <>` - deploys MapR plugin with MapR File System * :sahara-plugin-spark-doc:`Spark Plugin <>` - deploys Apache Spark with Cloudera HDFS * :sahara-plugin-storm-doc:`Storm Plugin <>` - deploys Apache Storm * :sahara-plugin-vanilla-doc:`Vanilla Plugin <>` - deploys Vanilla Apache Hadoop Managing plugins ---------------- Since the Newton release a project admin can configure plugins by specifying additional values for plugin's labels. To disable a plugin (Vanilla Apache Hadoop, for example), the admin can run the following command: .. sourcecode:: console cat update_configs.json { "plugin_labels": { "enabled": { "status": true } } } openstack dataprocessing plugin update vanilla update_configs.json Additionally, specific versions can be disabled by the following command: .. sourcecode:: console cat update_configs.json { "version_labels": { "2.7.1": { "enabled": { "status": true } } } } openstack dataprocessing plugin update vanilla update_configs.json Finally, to see all labels of a specific plugin and to see the current status of the plugin (is it stable or not, deprecation status) the following command can be executed from the CLI: .. sourcecode:: console openstack dataprocessing plugin show vanilla The same actions are available from UI respectively. sahara-12.0.0/doc/source/user/edp.rst0000664000175000017500000007612313656752032017422 0ustar zuulzuul00000000000000Elastic Data Processing (EDP) ============================= Overview -------- Sahara's Elastic Data Processing facility or :dfn:`EDP` allows the execution of jobs on clusters created from sahara. EDP supports: * Hive, Pig, MapReduce, MapReduce.Streaming, Java, and Shell job types on Hadoop clusters * Spark jobs on Spark standalone clusters, MapR (v5.0.0 - v5.2.0) clusters, Vanilla clusters (v2.7.1) and CDH clusters (v5.3.0 or higher). * storage of job binaries in the OpenStack Object Storage service (swift), the OpenStack Shared file systems service (manila), sahara's own database, or any S3-like object store * access to input and output data sources in + HDFS for all job types + swift for all types excluding Hive + manila (NFS shares only) for all types excluding Pig + Any S3-like object store * configuration of jobs at submission time * execution of jobs on existing clusters or transient clusters Interfaces ---------- The EDP features can be used from the sahara web UI which is described in the :doc:`dashboard-user-guide`. The EDP features also can be used directly by a client through the `REST api `_ EDP Concepts ------------ Sahara EDP uses a collection of simple objects to define and execute jobs. These objects are stored in the sahara database when they are created, allowing them to be reused. This modular approach with database persistence allows code and data to be reused across multiple jobs. The essential components of a job are: * executable code to run * input and output data paths, as needed for the job * any additional configuration values needed for the job run These components are supplied through the objects described below. Job Binaries ++++++++++++ A :dfn:`Job Binary` object stores a URL to a single script or Jar file and any credentials needed to retrieve the file. 
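As an illustration (the container, object and binary names here are only
placeholders), a job binary pointing at a script already stored in swift
could be registered with the command line client as follows:

.. sourcecode:: console

    $ openstack dataprocessing job binary create \
        --url "swift://mycontainer/my-script.pig" \
        --username myuser --password mypassword \
        my-script-binary
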
The file itself may be stored in the sahara internal database (**only API v1.1**), in swift, or in manila. Files in the sahara database are stored as raw bytes in a :dfn:`Job Binary Internal` object. This object's sole purpose is to store a file for later retrieval. No extra credentials need to be supplied for files stored internally. Sahara requires credentials (username and password) to access files stored in swift unless swift proxy users are configured as described in :doc:`../admin/advanced-configuration-guide`. The swift service must be running in the same OpenStack installation referenced by sahara. Sahara requires the following credentials/configs to access files stored in an S3-like object store: ``accesskey``, ``secretkey``, ``endpoint``. These credentials are specified through the `extra` in the body of the request when creating a job binary referencing S3. The value of ``endpoint`` should include a protocol: *http* or *https*. To reference a binary file stored in manila, create the job binary with the URL ``manila://{share_id}/{path}``. This assumes that you have already stored that file in the appropriate path on the share. The share will be automatically mounted to any cluster nodes which require access to the file, if it is not mounted already. There is a configurable limit on the size of a single job binary that may be retrieved by sahara. This limit is 5MB and may be set with the *job_binary_max_KB* setting in the :file:`sahara.conf` configuration file. Jobs ++++ A :dfn:`Job` object specifies the type of the job and lists all of the individual Job Binary objects that are required for execution. An individual Job Binary may be referenced by multiple Jobs. A Job object specifies a main binary and/or supporting libraries depending on its type: +-------------------------+-------------+-----------+ | Job type | Main binary | Libraries | +=========================+=============+===========+ | ``Hive`` | required | optional | +-------------------------+-------------+-----------+ | ``Pig`` | required | optional | +-------------------------+-------------+-----------+ | ``MapReduce`` | not used | required | +-------------------------+-------------+-----------+ | ``MapReduce.Streaming`` | not used | optional | +-------------------------+-------------+-----------+ | ``Java`` | not used | required | +-------------------------+-------------+-----------+ | ``Shell`` | required | optional | +-------------------------+-------------+-----------+ | ``Spark`` | required | optional | +-------------------------+-------------+-----------+ | ``Storm`` | required | not used | +-------------------------+-------------+-----------+ | ``Storm Pyelus`` | required | not used | +-------------------------+-------------+-----------+ Data Sources ++++++++++++ A :dfn:`Data Source` object stores a URL which designates the location of input or output data and any credentials needed to access the location. Sahara supports data sources in swift. The swift service must be running in the same OpenStack installation referenced by sahara. Sahara also supports data sources in HDFS. Any HDFS instance running on a sahara cluster in the same OpenStack installation is accessible without manual configuration. Other instances of HDFS may be used as well provided that the URL is resolvable from the node executing the job. Sahara supports data sources in manila as well. To reference a path on an NFS share as a data source, create the data source with the URL ``manila://{share_id}/{path}``. 
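For instance, assuming the command line client in your deployment accepts
the ``manila`` data source type, such a data source could be created with a
command along these lines (the share id and path are placeholders):

.. sourcecode:: console

    $ openstack dataprocessing data source create --type manila \
        --url "manila://<share_id>/path/to/input" my-manila-input
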
As in the case of job binaries, the specified share will be automatically mounted to your cluster's nodes as needed to access the data source. Finally, Sahara supports data sources referring to S3-like object stores. The URL should be of the form ``s3://{bucket}/{path}``. Also, the following credentials/configs are understood: ``accesskey``, ``secretkey``, ``endpoint``, ``bucket_in_path``, and ``ssl``. These credentials are specified through the ``credentials`` attribute of the body of the request when creating a data source referencing S3. The value of ``endpoint`` should **NOT** include a protocol (*http* or *https*), unlike when referencing an S3 job binary. It can also be noted that Sahara clusters can interact with S3-like stores even when not using EDP, i.e. when manually operating the cluster instead. Consult the `hadoop-aws documentation `_ for more information. Also, be advised that hadoop-aws will only write a job's output into a bucket which already exists: it does not create new buckets. Some job types require the use of data source objects to specify input and output when a job is launched. For example, when running a Pig job the UI will prompt the user for input and output data source objects. Other job types like Java or Spark do not require the user to specify data sources. For these job types, data paths are passed as arguments. For convenience, sahara allows data source objects to be referenced by name or id. The section `Using Data Source References as Arguments`_ gives further details. Job Execution +++++++++++++ Job objects must be *launched* or *executed* in order for them to run on the cluster. During job launch, a user specifies execution details including data sources, configuration values, and program arguments. The relevant details will vary by job type. The launch will create a :dfn:`Job Execution` object in sahara which is used to monitor and manage the job. To execute Hadoop jobs, sahara generates an Oozie workflow and submits it to the Oozie server running on the cluster. Familiarity with Oozie is not necessary for using sahara but it may be beneficial to the user. A link to the Oozie web console can be found in the sahara web UI in the cluster details. For Spark jobs, sahara uses the *spark-submit* shell script and executes the Spark job from the master node in case of Spark cluster and from the Spark Job History server in other cases. Logs of spark jobs run by sahara can be found on this node under the */tmp/spark-edp* directory. .. _edp_workflow: General Workflow ---------------- The general workflow for defining and executing a job in sahara is essentially the same whether using the web UI or the REST API. 1. Launch a cluster from sahara if there is not one already available 2. Create all of the Job Binaries needed to run the job, stored in the sahara database, in swift, or in manila + When using the REST API and internal storage of job binaries, the Job Binary Internal objects must be created first + Once the Job Binary Internal objects are created, Job Binary objects may be created which refer to them by URL 3. Create a Job object which references the Job Binaries created in step 2 4. Create an input Data Source which points to the data you wish to process 5. Create an output Data Source which points to the location for output data 6. 
Create a Job Execution object specifying the cluster and Job object plus relevant data sources, configuration values, and program arguments + When using the web UI this is done with the :guilabel:`Launch On Existing Cluster` or :guilabel:`Launch on New Cluster` buttons on the Jobs tab + When using the REST API this is done via the */jobs//execute* method The workflow is simpler when using existing objects. For example, to construct a new job which uses existing binaries and input data a user may only need to perform steps 3, 5, and 6 above. Of course, to repeat the same job multiple times a user would need only step 6. Specifying Configuration Values, Parameters, and Arguments ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Jobs can be configured at launch. The job type determines the kinds of values that may be set: +--------------------------+---------------+------------+-----------+ | Job type | Configuration | Parameters | Arguments | | | Values | | | +==========================+===============+============+===========+ | ``Hive`` | Yes | Yes | No | +--------------------------+---------------+------------+-----------+ | ``Pig`` | Yes | Yes | Yes | +--------------------------+---------------+------------+-----------+ | ``MapReduce`` | Yes | No | No | +--------------------------+---------------+------------+-----------+ | ``MapReduce.Streaming`` | Yes | No | No | +--------------------------+---------------+------------+-----------+ | ``Java`` | Yes | No | Yes | +--------------------------+---------------+------------+-----------+ | ``Shell`` | Yes | Yes | Yes | +--------------------------+---------------+------------+-----------+ | ``Spark`` | Yes | No | Yes | +--------------------------+---------------+------------+-----------+ | ``Storm`` | Yes | No | Yes | +--------------------------+---------------+------------+-----------+ | ``Storm Pyelus`` | Yes | No | Yes | +--------------------------+---------------+------------+-----------+ * :dfn:`Configuration values` are key/value pairs. + The EDP configuration values have names beginning with *edp.* and are consumed by sahara + Other configuration values may be read at runtime by Hadoop jobs + Currently additional configuration values are not available to Spark jobs at runtime * :dfn:`Parameters` are key/value pairs. They supply values for the Hive and Pig parameter substitution mechanisms. In Shell jobs, they are passed as environment variables. * :dfn:`Arguments` are strings passed as command line arguments to a shell or main program These values can be set on the :guilabel:`Configure` tab during job launch through the web UI or through the *job_configs* parameter when using the */jobs//execute* REST method. In some cases sahara generates configuration values or parameters automatically. Values set explicitly by the user during launch will override those generated by sahara. Using Data Source References as Arguments +++++++++++++++++++++++++++++++++++++++++ Sometimes it's necessary or desirable to pass a data path as an argument to a job. In these cases, a user may simply type out the path as an argument when launching a job. If the path requires credentials, the user can manually add the credentials as configuration values. However, if a data source object has been created that contains the desired path and credentials there is no need to specify this information manually. As a convenience, sahara allows data source objects to be referenced by name or id in arguments, configuration values, or parameters. 
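A sketch of what this can look like from the command line is shown below; it
assumes a data source named ``my_input`` already exists, uses the
``edp.substitute_data_source_for_name`` switch that is described in detail
just below, and the exact ``--configs``/``--args`` syntax may differ between
client versions:

.. sourcecode:: console

    $ openstack dataprocessing job execute \
        --job-template my-wordcount-template --cluster my-cluster \
        --configs edp.substitute_data_source_for_name:True \
        --args datasource://my_input
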
When the job is executed, sahara will replace the reference with the path stored in the data source object and will add any necessary credentials to the job configuration. Referencing an existing data source object is much faster than adding this information by hand. This is particularly useful for job types like Java or Spark that do not use data source objects directly. There are two job configuration parameters that enable data source references. They may be used with any job type and are set on the ``Configuration`` tab when the job is launched: * ``edp.substitute_data_source_for_name`` (default **False**) If set to **True**, causes sahara to look for data source object name references in configuration values, arguments, and parameters when a job is launched. Name references have the form **datasource://name_of_the_object**. For example, assume a user has a WordCount application that takes an input path as an argument. If there is a data source object named **my_input**, a user may simply set the **edp.substitute_data_source_for_name** configuration parameter to **True** and add **datasource://my_input** as an argument when launching the job. * ``edp.substitute_data_source_for_uuid`` (default **False**) If set to **True**, causes sahara to look for data source object ids in configuration values, arguments, and parameters when a job is launched. A data source object id is a uuid, so they are unique. The id of a data source object is available through the UI or the sahara command line client. A user may simply use the id as a value. Creating an Interface for Your Job ++++++++++++++++++++++++++++++++++ In order to better document your job for cluster operators (or for yourself in the future), sahara allows the addition of an interface (or method signature) to your job template. A sample interface for the Teragen Hadoop example might be: +---------+---------+-----------+-------------+----------+--------------------+ | Name | Mapping | Location | Value | Required | Default | | | Type | | Type | | | +=========+=========+===========+=============+==========+====================+ | Example | args | 0 | string | false | teragen | | Class | | | | | | +---------+---------+-----------+-------------+----------+--------------------+ | Rows | args | 1 | number | true | unset | +---------+---------+-----------+-------------+----------+--------------------+ | Output | args | 2 | data_source | false | hdfs://ip:port/path| | Path | | | | | | +---------+---------+-----------+-------------+----------+--------------------+ | Mapper | configs | mapred. | number | false | unset | | Count | | map.tasks | | | | +---------+---------+-----------+-------------+----------+--------------------+ A "Description" field may also be added to each interface argument. To create such an interface via the REST API, provide an "interface" argument, the value of which consists of a list of JSON objects, as below: .. sourcecode:: json [ { "name": "Example Class", "description": "Indicates which example job class should be used.", "mapping_type": "args", "location": "0", "value_type": "string", "required": false, "default": "teragen" }, ] Creating this interface would allow you to specify a configuration for any execution of the job template by passing an "interface" map similar to: .. 
sourcecode:: json { "Rows": "1000000", "Mapper Count": "3", "Output Path": "hdfs://mycluster:8020/user/myuser/teragen-output" } The specified arguments would be automatically placed into the args, configs, and params for the job, according to the mapping type and location fields of each interface argument. The final ``job_configs`` map would be: .. sourcecode:: json { "job_configs": { "configs": { "mapred.map.tasks": "3" }, "args": [ "teragen", "1000000", "hdfs://mycluster:8020/user/myuser/teragen-output" ] } } Rules for specifying an interface are as follows: - Mapping Type must be one of ``configs``, ``params``, or ``args``. Only types supported for your job type are allowed (see above.) - Location must be a string for ``configs`` and ``params``, and an integer for ``args``. The set of ``args`` locations must be an unbroken series of integers starting from 0. - Value Type must be one of ``string``, ``number``, or ``data_source``. Data sources may be passed as UUIDs or as valid paths (see above.) All values should be sent as JSON strings. (Note that booleans and null values are serialized differently in different languages. Please specify them as a string representation of the appropriate constants for your data processing engine.) - ``args`` that are not required must be given a default value. The additional one-time complexity of specifying an interface on your template allows a simpler repeated execution path, and also allows us to generate a customized form for your job in the Horizon UI. This may be particularly useful in cases in which an operator who is not a data processing job developer will be running and administering the jobs. Generation of Swift Properties for Data Sources +++++++++++++++++++++++++++++++++++++++++++++++ If swift proxy users are not configured (see :doc:`../admin/advanced-configuration-guide`) and a job is run with data source objects containing swift paths, sahara will automatically generate swift username and password configuration values based on the credentials in the data sources. If the input and output data sources are both in swift, it is expected that they specify the same credentials. The swift credentials may be set explicitly with the following configuration values: +------------------------------------+ | Name | +====================================+ | fs.swift.service.sahara.username | +------------------------------------+ | fs.swift.service.sahara.password | +------------------------------------+ Setting the swift credentials explicitly is required when passing literal swift paths as arguments instead of using data source references. When possible, use data source references as described in `Using Data Source References as Arguments`_. Additional Details for Hive jobs ++++++++++++++++++++++++++++++++ Sahara will automatically generate values for the ``INPUT`` and ``OUTPUT`` parameters required by Hive based on the specified data sources. Additional Details for Pig jobs +++++++++++++++++++++++++++++++ Sahara will automatically generate values for the ``INPUT`` and ``OUTPUT`` parameters required by Pig based on the specified data sources. For Pig jobs, ``arguments`` should be thought of as command line arguments separated by spaces and passed to the ``pig`` shell. 
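As a sketch (the values below are placeholders rather than required settings),
the ``job_configs`` for a Pig job might combine both kinds of values:

.. sourcecode:: python

    # Hypothetical job_configs for a Pig job: "args" are handed to the pig
    # shell as-is, while "params" feed Pig's parameter substitution mechanism.
    job_configs = {
        "args": ["-stop_on_failure"],
        "params": {"ROW_LIMIT": "1000"},
    }
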
``Parameters`` are a shorthand and are actually translated to the arguments ``-param name=value`` Additional Details for MapReduce jobs +++++++++++++++++++++++++++++++++++++ **Important!** If the job type is MapReduce, the mapper and reducer classes *must* be specified as configuration values. Note that the UI will not prompt the user for these required values; they must be added manually with the ``Configure`` tab. Make sure to add these values with the correct names: +-----------------------------+----------------------------------------+ | Name | Example Value | +=============================+========================================+ | mapred.mapper.new-api | true | +-----------------------------+----------------------------------------+ | mapred.reducer.new-api | true | +-----------------------------+----------------------------------------+ | mapreduce.job.map.class | org.apache.oozie.example.SampleMapper | +-----------------------------+----------------------------------------+ | mapreduce.job.reduce.class | org.apache.oozie.example.SampleReducer | +-----------------------------+----------------------------------------+ Additional Details for MapReduce.Streaming jobs +++++++++++++++++++++++++++++++++++++++++++++++ **Important!** If the job type is MapReduce.Streaming, the streaming mapper and reducer classes *must* be specified. In this case, the UI *will* prompt the user to enter mapper and reducer values on the form and will take care of adding them to the job configuration with the appropriate names. If using the python client, however, be certain to add these values to the job configuration manually with the correct names: +-------------------------+---------------+ | Name | Example Value | +=========================+===============+ | edp.streaming.mapper | /bin/cat | +-------------------------+---------------+ | edp.streaming.reducer | /usr/bin/wc | +-------------------------+---------------+ Additional Details for Java jobs ++++++++++++++++++++++++++++++++ Data Source objects are not used directly with Java job types. Instead, any input or output paths must be specified as arguments at job launch either explicitly or by reference as described in `Using Data Source References as Arguments`_. Using data source references is the recommended way to pass paths to Java jobs. If configuration values are specified, they must be added to the job's Hadoop configuration at runtime. There are two methods of doing this. The simplest way is to use the **edp.java.adapt_for_oozie** option described below. The other method is to use the code from `this example `_ to explicitly load the values. The following special configuration values are read by sahara and affect how Java jobs are run: * ``edp.java.main_class`` (required) Specifies the full name of the class containing ``main(String[] args)`` A Java job will execute the **main** method of the specified main class. Any arguments set during job launch will be passed to the program through the **args** array. * ``oozie.libpath`` (optional) Specifies configuration values for the Oozie share libs, these libs can be shared by different workflows * ``edp.java.java_opts`` (optional) Specifies configuration values for the JVM * ``edp.java.adapt_for_oozie`` (optional) Specifies that sahara should perform special handling of configuration values and exit conditions. The default is **False**. If this configuration value is set to **True**, sahara will modify the job's Hadoop configuration before invoking the specified **main** method. 
Any configuration values specified during job launch (excluding those beginning with **edp.**) will be automatically set in the job's Hadoop configuration and will be available through standard methods. Secondly, setting this option to **True** ensures that Oozie will handle program exit conditions correctly. At this time, the following special configuration value only applies when running jobs on a cluster generated by the Cloudera plugin with the **Enable Hbase Common Lib** cluster config set to **True** (the default value): * ``edp.hbase_common_lib`` (optional) Specifies that a common Hbase lib generated by sahara in HDFS be added to the **oozie.libpath**. This for use when an Hbase application is driven from a Java job. Default is **False**. The **edp-wordcount** example bundled with sahara shows how to use configuration values, arguments, and swift data paths in a Java job type. Note that the example does not use the **edp.java.adapt_for_oozie** option but includes the code to load the configuration values explicitly. Additional Details for Shell jobs +++++++++++++++++++++++++++++++++ A shell job will execute the script specified as ``main``, and will place any files specified as ``libs`` in the same working directory (on both the filesystem and in HDFS). Command line arguments may be passed to the script through the ``args`` array, and any ``params`` values will be passed as environment variables. Data Source objects are not used directly with Shell job types but data source references may be used as described in `Using Data Source References as Arguments`_. The **edp-shell** example bundled with sahara contains a script which will output the executing user to a file specified by the first command line argument. Additional Details for Spark jobs +++++++++++++++++++++++++++++++++ Data Source objects are not used directly with Spark job types. Instead, any input or output paths must be specified as arguments at job launch either explicitly or by reference as described in `Using Data Source References as Arguments`_. Using data source references is the recommended way to pass paths to Spark jobs. Spark jobs use some special configuration values: * ``edp.java.main_class`` (required) Specifies the full name of the class containing the Java or Scala main method: + ``main(String[] args)`` for Java + ``main(args: Array[String]`` for Scala A Spark job will execute the **main** method of the specified main class. Any arguments set during job launch will be passed to the program through the **args** array. * ``edp.spark.adapt_for_swift`` (optional) If set to **True**, instructs sahara to modify the job's Hadoop configuration so that swift paths may be accessed. Without this configuration value, swift paths will not be accessible to Spark jobs. The default is **False**. Despite the name, the same principle applies to jobs which reference paths in S3-like stores. * ``edp.spark.driver.classpath`` (optional) If set to empty string sahara will use default classpath for the cluster during job execution. Otherwise this will override default value for the cluster for particular job execution. The **edp-spark** example bundled with sahara contains a Spark program for estimating Pi. Special Sahara URLs ------------------- Sahara uses custom URLs to refer to objects stored in swift, in manila, in the sahara internal database, or in S3-like storage. These URLs are usually not meant to be used outside of sahara. 
Sahara swift URLs passed to running jobs as input or output sources include a ".sahara" suffix on the container, for example: ``swift://container.sahara/object`` You may notice these swift URLs in job logs, however, you do not need to add the suffix to the containers yourself. sahara will add the suffix if necessary, so when using the UI or the python client you may write the above URL simply as: ``swift://container/object`` Sahara internal database URLs have the form: ``internal-db://sahara-generated-uuid`` This indicates a file object in the sahara database which has the given uuid as a key. Manila NFS filesystem reference URLS take the form: ``manila://share-uuid/path`` This format should be used when referring to a job binary or a data source stored in a manila NFS share. For both job binaries and data sources, S3 urls take the form: ``s3://bucket/path/to/object`` Despite the above URL format, the current implementation of EDP will still use the Hadoop ``s3a`` driver to access data sources. Botocore is used to access job binaries. EDP Requirements ================ The OpenStack installation and the cluster launched from sahara must meet the following minimum requirements in order for EDP to function: OpenStack Services ------------------ When a Hadoop job is executed, binaries are first uploaded to a cluster node and then moved from the node local filesystem to HDFS. Therefore, there must be an instance of HDFS available to the nodes in the sahara cluster. If the swift service *is not* running in the OpenStack installation: + Job binaries may only be stored in the sahara internal database + Data sources require a long-running HDFS If the swift service *is* running in the OpenStack installation: + Job binaries may be stored in swift or the sahara internal database + Data sources may be in swift or a long-running HDFS Cluster Processes ----------------- Requirements for EDP support depend on the EDP job type and plugin used for the cluster. For example a Vanilla sahara cluster must run at least one instance of these processes to support EDP: * For Hadoop version 1: + jobtracker + namenode + oozie + tasktracker + datanode * For Hadoop version 2: + namenode + datanode + resourcemanager + nodemanager + historyserver + oozie + spark history server EDP Technical Considerations ============================ There are several things in EDP which require attention in order to work properly. They are listed on this page. Transient Clusters ------------------ EDP allows running jobs on transient clusters. In this case the cluster is created specifically for the job and is shut down automatically once the job is finished. Two config parameters control the behaviour of periodic clusters: * periodic_enable - if set to 'false', sahara will do nothing to a transient cluster once the job it was created for is completed. If it is set to 'true', then the behaviour depends on the value of the next parameter. * use_identity_api_v3 - set it to 'false' if your OpenStack installation does not provide keystone API v3. In that case sahara will not terminate unneeded clusters. Instead it will set their state to 'AwaitingTermination' meaning that they could be manually deleted by a user. If the parameter is set to 'true', sahara will itself terminate the cluster. The limitation is caused by lack of 'trusts' feature in Keystone API older than v3. If both parameters are set to 'true', sahara works with transient clusters in the following manner: 1. 
When a user requests for a job to be executed on a transient cluster, sahara creates such a cluster. 2. Sahara drops the user's credentials once the cluster is created but prior to that it creates a trust allowing it to operate with the cluster instances in the future without user credentials. 3. Once a cluster is not needed, sahara terminates its instances using the stored trust. sahara drops the trust after that. sahara-12.0.0/doc/source/user/building-guest-images/0000775000175000017500000000000013656752227022302 5ustar zuulzuul00000000000000sahara-12.0.0/doc/source/user/building-guest-images/sahara-image-create.rst0000664000175000017500000000550413656752032026612 0ustar zuulzuul00000000000000sahara-image-create ------------------- The historical tool for building images, ``sahara-image-create``, is based on `Disk Image Builder `_. `Disk Image Builder` builds disk images using elements. An element is a particular set of code that alters how the image is built, or runs within the chroot to prepare the image. The additional elements required by Sahara images and the ``sahara-image-create`` command itself are stored in the `Sahara image elements repository `_ To create images for a specific plugin follow these steps: 1. Clone repository "https://opendev.org/openstack/sahara-image-elements" locally. 2. Use tox to build images. You can run the command below in sahara-image-elements directory to build images. By default this script will attempt to create cloud images for all versions of supported plugins and all operating systems (subset of Ubuntu, Fedora, and CentOS depending on plugin). .. sourcecode:: console tox -e venv -- sahara-image-create -u If you want to build a image for ```` with ```` on a specific ```` just execute: .. sourcecode:: console tox -e venv -- sahara-image-create -p -v -i Tox will create a virtualenv and install required python packages in it, clone the repositories "https://opendev.org/openstack/diskimage-builder" and "https://opendev.org/openstack/sahara-image-elements" and export necessary parameters. The valid values for the ```` argument are: - Ubuntu (all versions): ``ubuntu`` - CentOS 7: ``centos7`` - Fedora: ``fedora`` ``sahara-image-create`` will then create the required cloud images using image elements that install all the necessary packages and configure them. You will find created images in the parent directory. Variables ~~~~~~~~~ The following environment variables can be used to change the behavior of the image building: * ``JAVA_DOWNLOAD_URL`` - download link for JDK (tarball or bin) * ``DIB_IMAGE_SIZE`` - parameter that specifies a volume of hard disk of instance. You need to specify it only for Fedora because Fedora doesn't use all available volume The following variables can be used to change the name of the output image: * ``centos7_image_name`` * ``ubuntu_image_name`` * ``fedora_image_name`` .. note:: Disk Image Builder will generate QCOW2 images, used with the default OpenStack Qemu/KVM hypervisors. If your OpenStack uses a different hypervisor, the generated image should be converted to an appropriate format. For finer control of ``sahara-image-create`` see the `official documentation `_ sahara-12.0.0/doc/source/user/building-guest-images/sahara-image-pack.rst0000664000175000017500000000766113656752032026273 0ustar zuulzuul00000000000000.. _sahara-image-pack-label: sahara-image-pack ----------------- The CLI command ``sahara-image-pack`` operates in-place on an existing image and installs and configures the software required for the plugin. 
The script ``sahara-image-pack`` takes the following primary arguments: :: --config-file PATH Path to a config file to use. Multiple config files can be specified, with values in later files taking precedence. Defaults to None. --image IMAGE The path to an image to modify. This image will be modified in-place: be sure to target a copy if you wish to maintain a clean master image. --root-filesystem ROOT_FS The filesystem to mount as the root volume on the image. Novalue is required if only one filesystem is detected. --test-only If this flag is set, no changes will be made to the image; instead, the script will fail if discrepancies are found between the image and the intended state. After these arguments, the script takes ``PLUGIN`` and ``VERSION`` arguments. These arguments will allow any plugin and version combination which supports the image packing feature. Plugins may require their own arguments at specific versions; use the ``--help`` feature with ``PLUGIN`` and ``VERSION`` to see the appropriate argument structure. a plausible command-line invocation would be: :: sahara-image-pack --image CentOS.qcow2 \ --config-file etc/sahara/sahara.conf \ cdh 5.7.0 [cdh 5.7.0 specific arguments, if any] This script will modify the target image in-place. Please copy your image if you want a backup or if you wish to create multiple images from a single base image. This CLI will automatically populate the set of available plugins and versions from the plugin set loaded in Sahara, and will show any plugin for which the image packing feature is available. The next sections of this guide will first describe how to modify an image packing specification for one of the plugins, and second, how to enable the image packing feature for new or existing plugins. Note: In case of a RHEL 7 images, it is necessary to register the image before starting to pack it, also enable some required repos. :: virt-customize -v -a $SAHARA_RHEL_IMAGE --sm-register \ --sm-credentials ${REG_USER}:password:${REG_PASSWORD} --sm-attach \ pool:${REG_POOL_ID} --run-command 'subscription-manager repos \ --disable=* --enable=$REPO_A \ --enable=$REPO_B \ --enable=$REPO_C' Installation and developer notes ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The script is part of of the Sahara repository, but it does not depend on the Sahara services. In order to use its development version, clone the `Sahara repository `_, check out the branch which matches the Sahara version used, and install the repository in a virtualenv. The script is also provided by binary distributions of OpenStack. For example, RDO ships it in the ``openstack-sahara-image-pack`` package. The script depends on a python library which is not packaged in pip, but is available through yum, dnf, and apt. If you have installed Sahara through yum, dnf, or apt, you should have appropriate dependencies, but if you wish to use the script but are working with Sahara from source, run whichever of the following is appropriate to your OS: :: sudo yum install libguestfs python-libguestfs libguestfs-tools sudo dnf install libguestfs python-libguestfs libguestfs-tools sudo apt-get install libguestfs python-guestfs libguestfs-tools If you are using tox to create virtual environments for your Sahara work, please use the ``images`` environment to run sahara-image-pack. This environment is configured to use system site packages, and will thus be able to find its dependency on python-libguestfs. 
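If a number of images have to be packed regularly, the documented command line
can also be driven from a small script. The sketch below simply reuses the
invocation shown earlier in this section; the image file name and the
plugin/version pair are examples, not requirements.

.. sourcecode:: python

    # Sketch: pack a set of images by invoking the sahara-image-pack CLI.
    import subprocess

    # (image file, plugin, version): example values only
    TARGETS = [
        ("CentOS-cdh.qcow2", "cdh", "5.7.0"),
    ]

    for image, plugin, version in TARGETS:
        subprocess.check_call([
            "sahara-image-pack",
            "--config-file", "etc/sahara/sahara.conf",
            "--image", image,
            plugin, version,
        ])
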
sahara-12.0.0/doc/source/user/building-guest-images/baremetal.rst0000664000175000017500000000055613656752032024770 0ustar zuulzuul00000000000000.. _building-baremetal-images-label: Bare metal images ----------------- Images that can be used for bare metal deployment through Ironic can be generated using both image building tools: sahara-image-create: pass the -b parameters to the command sahara-image-pack: use `virt-get-kernel` on the generated image to extract the kernel and the initramfs file sahara-12.0.0/doc/source/user/sahara-on-ironic.rst0000664000175000017500000000656413656752032022006 0ustar zuulzuul00000000000000How to run a Sahara cluster on bare metal servers ================================================= Hadoop clusters are designed to store and analyze extremely large amounts of unstructured data in distributed computing environments. Sahara enables you to boot Hadoop clusters in both virtual and bare metal environments. When Booting Hadoop clusters with Sahara on bare metal servers, you benefit from the bare metal performance with self-service resource provisioning. 1. Create a new OpenStack environment using Devstack as described in the :devstack-doc:`Devstack Guide <>` 2. Install Ironic as described in the :ironic-doc:`Ironic Installation Guide ` 3. Install Sahara as described in the `Sahara Installation Guide <../install/installation-guide.html>`_ 4. Build the Sahara image and prepare it for uploading to Glance: - Build an image for Sahara plugin which supports baremetal deployment. Refer to the :ref:`building-baremetal-images-label` section. - Convert the qcow2 image format to the raw format. For example: .. sourcecode:: console $ qemu-img convert -O raw image-converted.qcow image-converted-from-qcow2.raw .. - Mount the raw image to the system. - ``chroot`` to the mounted directory and remove the installed grub. - Build grub2 from sources and install to ``/usr/sbin``. - In ``/etc/sysconfig/selinux``, disable selinux ``SELINUX=disabled`` - In the configuration file, set ``onboot=yes`` and ``BOOTPROTO=dhcp`` for every interface. - Add the configuration files for all interfaces in the ``/etc/sysconfig/network-scripts`` directory. 5. Upload the Sahara disk image to Glance, and register it in the Sahara Image Registry. Referencing its separate kernel and initramfs images. 6. Configure the bare metal network for the Sahara cluster nodes: - Add bare metal servers to your environment manually referencing their IPMI addresses (Ironic does not detect servers), for Ironic to manage the servers power and network. Also, configure the scheduling information and add the required flavors. Please check the :ironic-doc:`Enrollment section of the Ironic installation guide `. 7. Launch your Sahara cluster on Ironic from the cluster template: * Log in to Horizon. * Go to Data Processing > Node Group Templates. * Find the templates that belong to the plugin you would like to use * Update those templates to use 'bare metal' flavor instead of the default one * Go to Data Processing > Cluster Templates. * Click Launch Cluster. * On the Launch Cluster dialog: * Specify the bare metal network for cluster nodes The cluster provisioning time is slower compared to the cluster provisioning of the same size that runs on VMs. Ironic does real hardware reports which is time consuming, and the whole root disk is filled from ``/dev/zero`` for security reasons. Known limitations: ------------------ * Security groups are not applied. * Nodes are not isolated by projects. 
* VM to Bare Metal network routing is not allowed. * The user has to specify the count of ironic nodes before Devstack deploys an OpenStack. * The user cannot use the same image for several ironic node types. For example, if there are 3 ironic node types, the user has to create 3 images and 3 flavors. * Multiple interfaces on a single node are not supported. Devstack configures only one interface. sahara-12.0.0/doc/source/index.rst0000664000175000017500000000175613656752032017003 0ustar zuulzuul00000000000000Welcome to Sahara! ================== The sahara project aims to provide users with a simple means to provision data processing frameworks (such as Apache Hadoop, Apache Spark and Apache Storm) on OpenStack. This is accomplished by specifying configuration parameters such as the framework version, cluster topology, node hardware details and more. Overview -------- .. toctree:: :maxdepth: 2 intro/index Installation ------------ .. toctree:: :maxdepth: 2 install/index Configuration ------------- .. toctree:: :maxdepth: 2 configuration/index User Guide ---------- .. toctree:: :maxdepth: 2 user/index CLI Guide --------- .. toctree:: :maxdepth: 2 cli/index Operator Documentation ---------------------- .. toctree:: :maxdepth: 2 admin/index Contributor Documentation ------------------------- .. toctree:: :maxdepth: 2 contributor/index Programming Reference --------------------- .. toctree:: :maxdepth: 2 reference/index sahara-12.0.0/doc/source/contributor/0000775000175000017500000000000013656752227017511 5ustar zuulzuul00000000000000sahara-12.0.0/doc/source/contributor/jenkins.rst0000664000175000017500000000307513656752032021703 0ustar zuulzuul00000000000000Continuous Integration with Jenkins =================================== Each change made to Sahara core code is tested with unit and integration tests and style checks using flake8. Unit tests and style checks are performed on public `OpenStack Zuul `_ instance. Unit tests are checked using python 2.7. The result of those checks and Unit tests are represented as a vote of +1 or -1 in the *Verify* column in code reviews from the *Jenkins* user. Integration tests check CRUD operations for the Image Registry, Templates, and Clusters. Also a test job is launched on a created Cluster to verify Hadoop work. All integration tests are launched by `Jenkins `_ on the internal Mirantis OpenStack Lab. Jenkins keeps a pool of VMs to run tests in parallel. Even with the pool of VMs integration testing may take a while. Jenkins is controlled for the most part by Zuul which determines what jobs are run when. Zuul status is available at this address: `Zuul Status `_. For more information see: `Sahara Hadoop Cluster CI `_. The integration tests result is represented as a vote of +1 or -1 in the *Verify* column in a code review from the *Sahara Hadoop Cluster CI* user. You can put *sahara-ci-recheck* in comment, if you want to recheck sahara-ci jobs. Also, you can put *recheck* in comment, if you want to recheck both Jenkins and sahara-ci jobs. Finally, you can put *reverify* in a comment, if you only want to recheck Jenkins jobs. sahara-12.0.0/doc/source/contributor/image-gen.rst0000664000175000017500000003073113656752032022072 0ustar zuulzuul00000000000000Image Generation ================ As of Newton, Sahara supports the creation of image generation and image validation tooling as part of the plugin. 
If implemented properly, this feature will enable your plugin to: * Validate that images passed to it for use in cluster provisioning meet its specifications. * Provision images from "clean" (OS-only) images. * Pack pre-populated images for registration in Glance and use by Sahara. All of these features can use the same image declaration, meaning that logic for these three use cases can be maintained in one place. This guide will explain how to enable this feature for your plugin, as well as how to write or modify the image generation manifests that this feature uses. Image Generation CLI -------------------- The key user-facing interface to this feature is the CLI script ``sahara-image-pack``. This script will be installed with all other Sahara binaries. The usage of the CLI script ``sahara-image-pack`` is documented in the :ref:`sahara-image-pack-label` section of the user guide. The Image Manifest ------------------ As you'll read in the next section, Sahara's image packing tools allow plugin authors to use any toolchain they choose. However, Sahara does provide a built-in image packing framework which is uniquely suited to OpenStack use cases, as it is designed to run the same logic while pre-packing an image or while preparing an instance to launch a cluster after it is spawned in OpenStack. By convention, the image specification, and all the scripts that it calls, should be located in the plugin's resources directory under a subdirectory named "images". A sample specification is below; the example is reasonably silly in practice, and is only designed to highlight the use of the currently available validator types. We'll go through each piece of this specification, but the full sample is presented for context. :: arguments: java-distro: description: The java distribution. default: openjdk required: false choices: - oracle-java - openjdk validators: - os_case: - redhat: - package: nfs-utils - debian: - package: nfs-common - argument_case: argument_name: java-distro cases: openjdk: - any: - all: - package: java-1.8.0-openjdk-devel - argument_set: argument_name: java-version value: 1.8.0 - all: - package: java-1.7.0-openjdk-devel - argument_set: argument_name: java-version value: 1.7.0 oracle-java: - script: install_oracle_java.sh - script: setup_java.sh - package: - hadoop - hadoop-libhdfs - hadoop-native - hadoop-pipes - hadoop-sbin - hadoop-lzo - lzo - lzo-devel - hadoop-lzo-native The Arguments Section --------------------- First, the image specification should describe any arguments that may be used to adjust properties of the image: :: arguments: # The section header - java-distro: # The friendly name of the argument, and the name of the variable passed to scripts description: The java distribution. # A friendly description to be used in help text default: openjdk # A default value for the argument required: false # Whether or not the argument is required choices: # The argument value must match an element of this list - oracle-java - openjdk Specifications may contain any number of arguments, as declared above, by adding more members to the list under the ``arguments`` key. The Validators Section ---------------------- This is where the logical flow of the image packing and validation process is declared. A tiny example validator list is specified below. :: validators: - package: nfs-utils - script: setup_java.sh This is fairly straightforward: this specification will install the nfs-utils package (or check that it's present) and then run the ``setup_java.sh`` script. 
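Because the specification is plain YAML, it can be handy to sanity-check it
while you are writing it. The snippet below is only a convenience sketch using
PyYAML (the file path is hypothetical); it is not part of the image generation
framework itself.

.. sourcecode:: python

    # Sketch: parse an image specification and list its top-level validators.
    import yaml

    with open("plugins/myplugin/resources/images/image.yaml") as spec_file:
        spec = yaml.safe_load(spec_file)

    for validator in spec.get("validators", []):
        # every validator entry is a single-key mapping, for example
        # {"package": "nfs-utils"} or {"script": "setup_java.sh"}
        print("validator type:", list(validator)[0])
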
All validators may be run in two modes: reconcile mode and test-only mode (reconcile == false). If validators are run in reconcile mode, any image or instance state which is not already true will be updated, if possible. If validators are run in test-only mode, they will only test the image or instance, and will raise an error if this fails. We'll now go over the types of validators that are currently available in Sahara. This framework is made to easily allow new validators to be created and old ones to be extended: if there's something you need, please do file a wishlist bug or write and propose your own! Action validators ----------------- These validators take specific, concrete actions to assess or modify your image or instance. The Package Validator ~~~~~~~~~~~~~~~~~~~~~ This validator type will install a package on the image, or validate that a package is installed on the image. It can take several formats, as below: :: validators: - package: hadoop - package: - hadoop-libhdfs - nfs-utils: version: 1.3.3-8 As you can see, a package declaration can consist of: * The package name as a string * A list of packages, any of which may be: * The package name as a string * A dict with the package name as a key and a version property The Script Validator ~~~~~~~~~~~~~~~~~~~~ This validator will run a script on the image. It can take several formats as well: :: validators: - script: simple_script.sh # Runs this file - script: set_java_home: # The name of a script file arguments: # Only the named environment arguments are passed, for clarity - jdk-home - jre-home output: OUTPUT_VAR - script: store_nfs_version: # Because inline is set, this is just a friendly name inline: rpm -q nfs-utils # Runs this text directly, rather than reading a file output: nfs-version # Places the stdout of this script into an argument # for future scripts to consume; if none exists, the # argument is created Two variables are always available to scripts run under this framework: * ``distro``: The distro of the image, in case you want to switch on distro within your script (rather than by using the os_case validator). * ``test_only``: If this value equates to boolean false, then the script should attempt to change the image or instance if it does not already meet the specification. If this equates to boolean true, the script should exit with a failure code if the image or instance does not already meet the specification. Flow Control Validators ----------------------- These validators are used to build more complex logic into your specifications explicitly in the yaml layer, rather than by deferring too much logic to scripts. The OS Case Validator ~~~~~~~~~~~~~~~~~~~~~ This validator runs different logic depending on which distribution of Linux is being used in the guest. :: validators: - os_case: # The contents are expressed as a list, not a dict, to preserve order - fedora: # Only the first match runs, so put distros before families - package: nfs_utils # The content of each case is a list of validators - redhat: # Red Hat distros include fedora, centos, and rhel - package: nfs-utils - debian: # The major supported Debian distro in Sahara is ubuntu - package: nfs-common The Argument Case Validator ~~~~~~~~~~~~~~~~~~~~~~~~~~~ This validator runs different logic depending on the value of an argument. 
:: validators: - argument_case: argument_name: java-distro # The name of the argument cases: # The cases are expressed as a dict, as only one can equal the argument's value openjdk: - script: setup-openjdk # The content of each case is a list of validators oracle-java: - script: setup-oracle-java The All Validator ~~~~~~~~~~~~~~~~~ This validator runs all the validators within it, as one logical block. If any validators within it fail to validate or modify the image or instance, it will fail. :: validators: - all: - package: nfs-utils - script: setup-nfs.sh The Any Validator ~~~~~~~~~~~~~~~~~ This validator attempts to run each validator within it, until one succeeds, and will report success if any do. If this is run in reconcile mode, it will first try each validator in test-only mode, and will succeed without making changes if any succeed (in the case below, if openjdk 1.7.0 were already installed, the validator would succeed and would not install 1.8.0.) :: validators: - any: # This validator will try to install openjdk-1.8.0, but it will settle for 1.7.0 if that fails - package: java-1.8.0-openjdk-devel - package: java-1.7.0-openjdk-devel The Argument Set Validator ~~~~~~~~~~~~~~~~~~~~~~~~~~ You may find that you wish to store state in one place in the specification for use in another. In this case, you can use this validator to set an argument for future use. :: validators: - argument_set: argument_name: java-version value: 1.7.0 SPI Methods ----------- In order to make this feature available for your plugin, you must implement the following optional plugin SPI methods. When implementing these, you may choose to use your own framework of choice (Packer for image packing, etc.) By doing so, you can ignore the entire framework and specification language described above. However, you may wish to instead use the abstraction we've provided (its ability to keep logic in one place for both image packing and cluster validation is useful in the OpenStack context.) We will, of course, focus on that framework here. :: def get_image_arguments(self, hadoop_version): """Gets the argument set taken by the plugin's image generator""" def pack_image(self, hadoop_version, remote, test_only=False, image_arguments=None): """Packs an image for registration in Glance and use by Sahara""" def validate_images(self, cluster, test_only=False, image_arguments=None): """Validates the image to be used by a cluster""" The validate_images method is called after Heat provisioning of your cluster, but before cluster configuration. If the test_only keyword of this method is set to True, the method should only test the instances without modification. If it is set to False, the method should make any necessary changes (this can be used to allow clusters to be spun up from clean, OS-only images.) This method is expected to use an ssh remote to communicate with instances, as per normal in Sahara. The pack_image method can be used to modify an image file (it is called by the CLI above). This method expects an ImageRemote, which is essentially a libguestfs handle to the disk image file, allowing commands to be run on the image directly (though it could be any concretion that allows commands to be run against the image.) By this means, the validators described above can execute the same logic in the image packing, instance validation, and instance preparation cases with the same degree of interactivity and logical control. 
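To make the shape of these methods a little more concrete, the skeleton below
shows one way a plugin might wire them up. It is a sketch only: the
``image_validator`` object, its ``get_argument_list`` and ``validate`` methods,
and the ``_remotes_for`` helper are stand-ins for whatever concretion the
plugin builds from its image specification, not an exact reproduction of the
framework API.

.. sourcecode:: python

    class MyPluginProvider(object):
        """Sketch of a plugin exposing the image generation SPI methods."""

        def __init__(self, image_validator):
            # Assumed interface: get_argument_list() returns the declared
            # image arguments, validate() runs the validators over a remote.
            self._validator = image_validator

        def get_image_arguments(self, hadoop_version):
            return self._validator.get_argument_list()

        def pack_image(self, hadoop_version, remote, test_only=False,
                       image_arguments=None):
            # "remote" is the libguestfs-backed image handle described above.
            self._validator.validate(remote, test_only=test_only,
                                     image_arguments=image_arguments)

        def validate_images(self, cluster, test_only=False,
                            image_arguments=None):
            for remote in self._remotes_for(cluster):
                self._validator.validate(remote, test_only=test_only,
                                         image_arguments=image_arguments)

        def _remotes_for(self, cluster):
            # Placeholder: in a real plugin this would yield an ssh remote
            # for each instance of the cluster that needs validation.
            return []
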
In order to future-proof this document against possible changes, the doctext of these methods will not be reproduced here, but they are documented very fully in the sahara.plugins.provisioning abstraction. These abstractions can be found in the module sahara.plugins.images. You will find that the framework has been built with extensibility and abstraction in mind: you can overwrite validator types, add your own without modifying any core sahara modules, declare hierarchies of resource locations for shared resources, and more. These features are documented in the sahara.plugins.images module itself (which has copious doctext,) and we encourage you to explore and ask questions of the community if you are curious or wish to build your own image generation tooling. sahara-12.0.0/doc/source/contributor/log-guidelines.rst0000664000175000017500000000242013656752032023142 0ustar zuulzuul00000000000000 Log Guidelines ============== Levels Guidelines ----------------- During the Kilo release cycle the sahara community defined the following log levels: * Debug: Shows everything and is likely not suitable for normal production operation due to the sheer size of logs generated (e.g. scripts executions, process execution, etc.). * Info: Usually indicates successful service start/stop, versions and such non-error related data. This should include largely positive units of work that are accomplished (e.g. service setup and configuration, cluster start, job execution information). * Warning: Indicates that there might be a systemic issue; potential predictive failure notice (e.g. job execution failed). * Error: An error has occurred and the administrator should research the error information (e.g. cluster failed to start, plugin violations of operation). * Critical: An error has occurred and the system might be unstable, anything that eliminates part of sahara's intended functionalities; immediately get administrator assistance (e.g. failed to access keystone/database, failed to load plugin). Formatting Guidelines --------------------- Sahara uses string formatting defined in `PEP 3101`_ for logs. .. _PEP 3101: https://www.python.org/dev/peps/pep-3101/ sahara-12.0.0/doc/source/contributor/how-to-build-oozie.rst0000664000175000017500000000347413656752032023702 0ustar zuulzuul00000000000000How to build Oozie ================== .. note:: Apache does not make Oozie builds, so it has to be built manually. Download -------- * Download tarball from `Apache mirror `_ * Unpack it with .. sourcecode:: console $ tar -xzvf oozie-4.3.1.tar.gz Hadoop Versions --------------- To build Oozie the following command can be used: .. sourcecode:: console $ {oozie_dir}/bin/mkdistro.sh -DskipTests By default it builds against Hadoop 1.1.1. To built it with Hadoop version 2.x: * The hadoop-2 version should be changed in pom.xml. This can be done manually or with the following command (you should replace 2.x.x with your hadoop version): .. sourcecode:: console $ find . -name pom.xml | xargs sed -ri 's/2.3.0/2.x.x/' * The build command should be launched with the ``-P hadoop-2`` flag JDK Versions ------------ By default, the build configuration enforces that JDK 1.6.* is being used. There are 2 build properties that can be used to change the JDK version requirements: * ``javaVersion`` specifies the version of the JDK used to compile (default 1.6). * ``targetJavaVersion`` specifies the version of the generated bytecode (default 1.6). 
For example, to specify JDK version 1.7, the build command should contain the ``-D javaVersion=1.7 -D tagetJavaVersion=1.7`` flags. Build ----- To build Oozie with Hadoop 2.6.0 and JDK version 1.7, the following command can be used: .. sourcecode:: console $ {oozie_dir}/bin/mkdistro.sh assembly:single -P hadoop-2 -D javaVersion=1.7 -D targetJavaVersion=1.7 -D skipTests Also, the pig version can be passed as a maven property with the flag ``-D pig.version=x.x.x``. You can find similar instructions to build oozie.tar.gz here: http://oozie.apache.org/docs/4.3.1/DG_QuickStart.html#Building_Oozie sahara-12.0.0/doc/source/contributor/gerrit.rst0000664000175000017500000000101313656752032021524 0ustar zuulzuul00000000000000Code Reviews with Gerrit ======================== Sahara uses the `Gerrit`_ tool to review proposed code changes. The review site is https://review.opendev.org. Gerrit is a complete replacement for Github pull requests. `All Github pull requests to the Sahara repository will be ignored`. See `Development Workflow`_ for information about how to get started using Gerrit. .. _Gerrit: http://code.google.com/p/gerrit .. _Development Workflow: https://docs.openstack.org/infra/manual/developers.html#development-workflow sahara-12.0.0/doc/source/contributor/adding-database-migrations.rst0000664000175000017500000001023613656752032025401 0ustar zuulzuul00000000000000Adding Database Migrations ========================== The migrations in ``sahara/db/migration/alembic_migrations/versions`` contain the changes needed to migrate between Sahara database revisions. A migration occurs by executing a script that details the changes needed to upgrade or downgrade the database. The migration scripts are ordered so that multiple scripts can run sequentially. The scripts are executed by Sahara's migration wrapper which uses the Alembic library to manage the migration. Sahara supports migration from Icehouse or later. Any code modifications that change the structure of the database require a migration script so that previously existing databases will continue to function when the new code is released. This page gives a brief overview of how to add the migration. Generate a New Migration Script +++++++++++++++++++++++++++++++ New migration scripts can be generated using the ``sahara-db-manage`` command. To generate a migration stub to be filled in by the developer:: $ sahara-db-manage --config-file /path/to/sahara.conf revision -m "description of revision" To autogenerate a migration script that reflects the current structure of the database:: $ sahara-db-manage --config-file /path/to/sahara.conf revision -m "description of revision" --autogenerate Each of these commands will create a file of the form ``revision_description`` where ``revision`` is a string generated by Alembic and ``description`` is based on the text passed with the ``-m`` option. Follow the Sahara Naming Convention +++++++++++++++++++++++++++++++++++ By convention Sahara uses 3-digit revision numbers, and this scheme differs from the strings generated by Alembic. Consequently, it's necessary to rename the generated script and modify the revision identifiers in the script. Open the new script and look for the variable ``down_revision``. The value should be a 3-digit numeric string, and it identifies the current revision number of the database. Set the ``revision`` value to the ``down_revision`` value + 1. For example, the lines:: # revision identifiers, used by Alembic. 
revision = '507eb70202af' down_revision = '006' will become:: # revision identifiers, used by Alembic. revision = '007' down_revision = '006' Modify any comments in the file to match the changes and rename the file to match the new revision number:: $ mv 507eb70202af_my_new_revision.py 007_my_new_revision.py Add Alembic Operations to the Script ++++++++++++++++++++++++++++++++++++ The migration script contains method ``upgrade()``. Sahara has not supported downgrades since the Kilo release. Fill in this method with the appropriate Alembic operations to perform upgrades. In the above example, an upgrade will move from revision '006' to revision '007'. Command Summary for sahara-db-manage ++++++++++++++++++++++++++++++++++++ You can upgrade to the latest database version via:: $ sahara-db-manage --config-file /path/to/sahara.conf upgrade head To check the current database version:: $ sahara-db-manage --config-file /path/to/sahara.conf current To create a script to run the migration offline:: $ sahara-db-manage --config-file /path/to/sahara.conf upgrade head --sql To run the offline migration between specific migration versions:: $ sahara-db-manage --config-file /path/to/sahara.conf upgrade : --sql To upgrade the database incrementally:: $ sahara-db-manage --config-file /path/to/sahara.conf upgrade --delta <# of revs> To create a new revision:: $ sahara-db-manage --config-file /path/to/sahara.conf revision -m "description of revision" --autogenerate To create a blank file:: $ sahara-db-manage --config-file /path/to/sahara.conf revision -m "description of revision" This command does not perform any migrations, it only sets the revision. Revision may be any existing revision. Use this command carefully:: $ sahara-db-manage --config-file /path/to/sahara.conf stamp To verify that the timeline does branch, you can run this command:: $ sahara-db-manage --config-file /path/to/sahara.conf check_migration If the migration path does branch, you can find the branch point via:: $ sahara-db-manage --config-file /path/to/sahara.conf history sahara-12.0.0/doc/source/contributor/development-environment.rst0000664000175000017500000000671713656752032025134 0ustar zuulzuul00000000000000Setting Up a Development Environment ==================================== This page describes how to setup a Sahara development environment by either installing it as a part of DevStack or pointing a local running instance at an external OpenStack. You should be able to debug and test your changes without having to deploy Sahara. Setup a Local Environment with Sahara inside DevStack ----------------------------------------------------- See :doc:`the main article `. Setup a Local Environment with an external OpenStack ---------------------------------------------------- 1. Install prerequisites On OS X Systems: .. sourcecode:: console # we actually need pip, which is part of python package $ brew install python mysql postgresql rabbitmq $ pip install virtualenv tox On Ubuntu: .. sourcecode:: console $ sudo apt-get update $ sudo apt-get install git-core python-dev python-virtualenv gcc libpq-dev libmysqlclient-dev python-pip rabbitmq-server $ sudo pip install tox On Red Hat and related distributions (CentOS/Fedora/RHEL/Scientific Linux): .. 
sourcecode:: console $ sudo yum install git-core python-devel python-virtualenv gcc python-pip mariadb-devel postgresql-devel erlang $ sudo pip install tox $ sudo wget http://www.rabbitmq.com/releases/rabbitmq-server/v3.2.2/rabbitmq-server-3.2.2-1.noarch.rpm $ sudo rpm --import http://www.rabbitmq.com/rabbitmq-signing-key-public.asc $ sudo yum install rabbitmq-server-3.2.2-1.noarch.rpm On openSUSE-based distributions (SLES 12, openSUSE, Factory or Tumbleweed): .. sourcecode:: console $ sudo zypper in gcc git libmysqlclient-devel postgresql-devel python-devel python-pip python-tox python-virtualenv 2. Grab the code .. sourcecode:: console $ git clone https://opendev.org/openstack/sahara.git $ cd sahara 3. Generate Sahara sample using tox .. sourcecode:: console tox -e genconfig 4. Create config file from the sample .. sourcecode:: console $ cp ./etc/sahara/sahara.conf.sample ./etc/sahara/sahara.conf 5. Look through the sahara.conf and modify parameter values as needed For details see :doc:`Sahara Configuration Guide <../admin/configuration-guide>` 6. Create database schema .. sourcecode:: console $ tox -e venv -- sahara-db-manage --config-file etc/sahara/sahara.conf upgrade head 7. To start Sahara API and Engine processes call .. sourcecode:: console $ tox -e venv -- sahara-api --config-file etc/sahara/sahara.conf --debug $ tox -e venv -- sahara-engine --config-file etc/sahara/sahara.conf --debug Setup local OpenStack dashboard with Sahara plugin -------------------------------------------------- .. toctree:: :maxdepth: 1 dashboard-dev-environment-guide Tips and tricks for dev environment ----------------------------------- 1. Pip speedup Add the following lines to ~/.pip/pip.conf .. sourcecode:: cfg [global] download-cache = /home//.pip/cache index-url = Note that the ``~/.pip/cache`` folder should be created manually. 2. Git hook for fast checks Just add the following lines to .git/hooks/pre-commit and do chmod +x for it. .. sourcecode:: console #!/bin/sh # Run fast checks (PEP8 style check and PyFlakes fast static analysis) tox -epep8 You can add also other checks for pre-push, for example pylint (see below) and tests (tox -epy27). 3. Running static analysis (PyLint) Just run the following command .. sourcecode:: console tox -e pylint sahara-12.0.0/doc/source/contributor/testing.rst0000664000175000017500000000201213656752032021705 0ustar zuulzuul00000000000000Sahara Testing ============== We have a bunch of different tests for Sahara. Unit Tests ++++++++++ In most Sahara sub-repositories we have a directory that contains Python unit tests, located at `_package_/tests/unit` or `_package_/tests`. Scenario integration tests ++++++++++++++++++++++++++ New scenario integration tests were implemented for Sahara. They are available in the sahara-tests repository (https://opendev.org/openstack/sahara-tests). Tempest tests +++++++++++++ Sahara has a Tempest plugin in the sahara-tests repository covering all major API features. Additional tests ++++++++++++++++ Additional tests reside in the sahara-tests repository (as above): * REST API tests checking to ensure that the Sahara REST API works. The only parts that are not tested are cluster creation and EDP. * CLI tests check read-only operations using the Sahara CLI. For more information about these tests, please read `Tempest Integration of Sahara `_. sahara-12.0.0/doc/source/contributor/contributing.rst0000664000175000017500000000477213656752032022756 0ustar zuulzuul00000000000000============================ So You Want to Contribute... 
============================ For general information on contributing to OpenStack, please check out the `contributor guide `_ to get started. It covers all the basics that are common to all OpenStack projects: the accounts you need, the basics of interacting with our Gerrit review system, how we communicate as a community, etc. Below will cover the more project specific information you need to get started with Sahara. Communication ~~~~~~~~~~~~~ * If you have something to discuss use `OpenStack development mail-list `_. Prefix the mail subject with ``[sahara]`` * Join ``#openstack-sahara`` IRC channel on `freenode `_ * Attend Sahara team meetings * Weekly on Thursdays at 1400 UTC * IRC channel: ``#openstack-meeting-3`` Contacting the Core Team ~~~~~~~~~~~~~~~~~~~~~~~~ * The core team has coverage in the timezones of Europe and the Americas. * Just pop over to IRC; we keep a close eye on it! * You can also find the email addresses of the core team `here https://review.opendev.org/#/admin/groups/133,members>`. New Feature Planning ~~~~~~~~~~~~~~~~~~~~ Sahara uses specs to track feature requests. They provide a high-level summary of proposed changes and track associated commits. Sahara also uses specs for in-depth descriptions and discussions of blueprints. Specs follow a defined format and are submitted as change requests to the openstack/sahara-specs repository. Task Tracking ~~~~~~~~~~~~~ We track our tasks in Storyboard. The Sahara project group homepage on Storyboard is https://storyboard.openstack.org/#!/project_group/sahara. If you're looking for some smaller, easier work item to pick up and get started on, search for the 'low-hanging-fruit' or 'new-contributor' tag. Reporting a Bug ~~~~~~~~~~~~~~~ You found an issue and want to make sure we are aware of it? You can do so on https://storyboard.openstack.org/#!/project_group/sahara. Getting Your Patch Merged ~~~~~~~~~~~~~~~~~~~~~~~~~ Typically two +2s are required before merging. Project Team Lead Duties ~~~~~~~~~~~~~~~~~~~~~~~~ If you are the PTL of Sahara then you should follow the `PTL guide `_. You should also keep track of new versions of the various Hadoop distros/components coming out (this can also be delegated to another contributor, but the PTL needs to track it either way). sahara-12.0.0/doc/source/contributor/index.rst0000664000175000017500000000072413656752032021347 0ustar zuulzuul00000000000000===================== Developer Information ===================== Programming HowTos and Tutorials ================================ .. toctree:: :maxdepth: 2 development-guidelines development-environment devstack dashboard-dev-environment-guide how-to-build-oozie adding-database-migrations testing log-guidelines apiv2 image-gen Other Resources =============== .. toctree:: :maxdepth: 2 contributing gerrit jenkins sahara-12.0.0/doc/source/contributor/development-guidelines.rst0000664000175000017500000002053313656752032024710 0ustar zuulzuul00000000000000Development Guidelines ====================== Coding Guidelines ----------------- For all the Python code in Sahara we have a rule - it should pass `PEP 8`_. All Bash code should pass `bashate`_. To check your code against PEP 8 and bashate run: .. sourcecode:: console $ tox -e pep8 .. note:: For more details on coding guidelines see file ``HACKING.rst`` in the root of Sahara repo. Static analysis --------------- The static analysis checks are optional in Sahara, but they are still very useful. 
The gate job will inform you if the number of static analysis warnings has increased after your change. We recommend to always check the static warnings. To run check first commit your change, then execute the following command: .. sourcecode:: console $ tox -e pylint Modification of Upstream Files ------------------------------ We never modify upstream files in Sahara. Any changes in upstream files should be made in the upstream project and then merged back in to Sahara. This includes whitespace changes, comments, and typos. Any change requests containing upstream file modifications are almost certain to receive lots of negative reviews. Be warned. Examples of upstream files are default xml configuration files used to configure Hadoop, or code imported from the OpenStack Oslo project. The xml files will usually be found in ``resource`` directories with an accompanying ``README`` file that identifies where the files came from. For example: .. sourcecode:: console $ pwd /home/me/sahara/sahara/plugins/vanilla/v2_7_1/resources $ ls core-default.xml hdfs-default.xml oozie-default.xml README.rst create_oozie_db.sql mapred-default.xml post_conf.template yarn-default.xml .. Testing Guidelines ------------------ Sahara has a suite of tests that are run on all submitted code, and it is recommended that developers execute the tests themselves to catch regressions early. Developers are also expected to keep the test suite up-to-date with any submitted code changes. Unit tests are located at ``sahara/tests/unit``. Sahara's suite of unit tests can be executed in an isolated environment with `Tox`_. To execute the unit tests run the following from the root of Sahara repo: .. sourcecode:: console $ tox -e py27 Documentation Guidelines ------------------------ All Sahara docs are written using Sphinx / RST and located in the main repo in the ``doc`` directory. You can add or edit pages here to update the https://docs.openstack.org/sahara/latest/ site. The documentation in docstrings should follow the `PEP 257`_ conventions (as mentioned in the `PEP 8`_ guidelines). More specifically: 1. Triple quotes should be used for all docstrings. 2. If the docstring is simple and fits on one line, then just use one line. 3. For docstrings that take multiple lines, there should be a newline after the opening quotes, and before the closing quotes. 4. `Sphinx`_ is used to build documentation, so use the restructured text markup to designate parameters, return values, etc. Run the following command to build docs locally. .. sourcecode:: console $ tox -e docs After it you can access generated docs in ``doc/build/`` directory, for example, main page - ``doc/build/html/index.html``. To make the doc generation process faster you can use: .. sourcecode:: console $ SPHINX_DEBUG=1 tox -e docs To avoid sahara reinstallation to virtual env each time you want to rebuild docs you can use the following command (it can be executed only after running ``tox -e docs`` first time): .. sourcecode:: console $ SPHINX_DEBUG=1 .tox/docs/bin/python setup.py build_sphinx .. note:: For more details on documentation guidelines see HACKING.rst in the root of the Sahara repo. .. _PEP 8: http://www.python.org/dev/peps/pep-0008/ .. _bashate: https://opendev.org/openstack/bashate .. _PEP 257: http://www.python.org/dev/peps/pep-0257/ .. _Tox: http://tox.testrun.org/ .. _Sphinx: http://sphinx.pocoo.org/markup/index.html Event log Guidelines -------------------- Currently Sahara keeps useful information about provisioning for each cluster. 
Cluster provisioning can be represented as a linear series of provisioning steps, which are executed one after another. Each step may consist of several events. The number of events depends on the step and the number of instances in the cluster. Also each event can contain information about its cluster, instance, and node group. In case of errors, events contain useful information for identifying the error. Additionally, each exception in sahara contains a unique identifier that allows the user to find extra information about that error in the sahara logs. You can see an example of provisioning progress information here: https://docs.openstack.org/api-ref/data-processing/#event-log This means that if you add some important phase for cluster provisioning to the sahara code, it's recommended to add a new provisioning step for this phase. This will allow users to use event log for handling errors during this phase. Sahara already has special utils for operating provisioning steps and events in the module ``sahara/utils/cluster_progress_ops.py``. .. note:: It's strictly recommended not to use ``conductor`` event log ops directly to assign events and operate provisioning steps. .. note:: You should not start a new provisioning step until the previous step has successfully completed. .. note:: It's strictly recommended to use ``event_wrapper`` for event handling. OpenStack client usage guidelines --------------------------------- The sahara project uses several OpenStack clients internally. These clients are all wrapped by utility functions which make using them more convenient. When developing sahara, if you need to use an OpenStack client you should check the ``sahara.utils.openstack`` package for the appropriate one. When developing new OpenStack client interactions in sahara, it is important to understand the ``sahara.service.sessions`` package and the usage of the keystone ``Session`` and auth plugin objects (for example, ``Token`` and ``Password``). Sahara is migrating all clients to use this authentication methodology, where available. For more information on using sessions with keystone, please see :keystoneauth-doc:`the keystoneauth documentation ` Storing sensitive information ----------------------------- During the course of development, there is often cause to store sensitive information (for example, login credentials) in the records for a cluster, job, or some other record. Storing secret information this way is **not** safe. To mitigate the risk of storing this information, sahara provides access to the OpenStack Key Manager service (implemented by the :barbican-doc:`barbican project <>`) through the :castellan-doc:`castellan library <>`. To utilize the external key manager, the functions in ``sahara.service.castellan.utils`` are provided as wrappers around the castellan library. These functions allow a developer to store, retrieve, and delete secrets from the manager. Secrets that are managed through the key manager have an identifier associated with them. These identifiers are considered safe to store in the database. The following are some examples of working with secrets in the sahara codebase. These examples are considered basic, any developer wishing to learn more about the advanced features of storing secrets should look to the code and docstrings contained in the ``sahara.service.castellan`` module. **Storing a secret** .. 
sourcecode:: python from sahara.service.castellan import utils as key_manager password = 'SooperSecretPassword' identifier = key_manager.store_secret(password) **Retrieving a secret** .. sourcecode:: python from sahara.service.castellan import utils as key_manager password = key_manager.get_secret(identifier) **Deleting a secret** .. sourcecode:: python from sahara.service.castellan import utils as key_manager key_manager.delete_secret(identifier) When storing secrets through this interface it is important to remember that if an external key manager is being used, each stored secret creates an entry in an external service. When you are finished using the secret it is good practice to delete it, as not doing so may leave artifacts in those external services. For more information on configuring sahara to use the OpenStack Key Manager service, see :ref:`external_key_manager_usage`. sahara-12.0.0/doc/source/contributor/devstack.rst0000664000175000017500000001340213656752032022041 0ustar zuulzuul00000000000000Setup DevStack ============== DevStack can be installed on Fedora, Ubuntu, and CentOS. For supported versions see `DevStack documentation `_ We recommend that you install DevStack in a VM, rather than on your main system. That way you may avoid contamination of your system. You may find hypervisor and VM requirements in the next section. If you still want to install DevStack on your baremetal system, just skip the next section and read further. Start VM and set up OS ---------------------- In order to run DevStack in a local VM, you need to start by installing a guest with Ubuntu 14.04 server. Download an image file from `Ubuntu's web site `_ and create a new guest from it. Virtualization solution must support nested virtualization. Without nested virtualization VMs running inside the DevStack will be extremely slow lacking hardware acceleration, i.e. you will run QEMU VMs without KVM. On Linux QEMU/KVM supports nested virtualization, on Mac OS - VMware Fusion. VMware Fusion requires adjustments to run VM with fixed IP. You may find instructions which can help :ref:`below `. Start a new VM with Ubuntu Server 14.04. Recommended settings: - Processor - at least 2 cores - Memory - at least 8GB - Hard Drive - at least 60GB When allocating CPUs and RAM to the DevStack, assess how big clusters you want to run. A single Hadoop VM needs at least 1 cpu and 1G of RAM to run. While it is possible for several VMs to share a single cpu core, remember that they can't share the RAM. After you installed the VM, connect to it via SSH and proceed with the instructions below. Install DevStack ---------------- The instructions assume that you've decided to install DevStack into Ubuntu 14.04 system. **Note:** Make sure to use bash, as other shells are not fully compatible and may cause hard to debug problems. 1. Clone DevStack: .. sourcecode:: console $ sudo apt-get install git-core $ git clone https://opendev.org/openstack/devstack.git 2. Create the file ``local.conf`` in devstack directory with the following content: .. sourcecode:: bash [[local|localrc]] ADMIN_PASSWORD=nova MYSQL_PASSWORD=nova RABBIT_PASSWORD=nova SERVICE_PASSWORD=$ADMIN_PASSWORD SERVICE_TOKEN=nova # Enable Swift enable_service s-proxy s-object s-container s-account SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5 SWIFT_REPLICAS=1 SWIFT_DATA_DIR=$DEST/data # Force checkout prerequisites # FORCE_PREREQ=1 # keystone is now configured by default to use PKI as the token format # which produces huge tokens. 
# set UUID as keystone token format which is much shorter and easier to # work with. KEYSTONE_TOKEN_FORMAT=UUID # Change the FLOATING_RANGE to whatever IPs VM is working in. # In NAT mode it is the subnet VMware Fusion provides, in bridged mode # it is your local network. But only use the top end of the network by # using a /27 and starting at the 224 octet. FLOATING_RANGE=192.168.55.224/27 # Set ``OFFLINE`` to ``True`` to configure ``stack.sh`` to run cleanly # without Internet access. ``stack.sh`` must have been previously run # with Internet access to install prerequisites and fetch repositories. # OFFLINE=True # Enable sahara enable_plugin sahara https://opendev.org/openstack/sahara # Enable heat enable_plugin heat https://opendev.org/openstack/heat In cases where you need to specify a git refspec (branch, tag, or commit hash) for the sahara in-tree devstack plugin (or sahara repo), it should be appended to the git repo URL as follows: .. sourcecode:: bash enable_plugin sahara https://opendev.org/openstack/sahara 3. Sahara can send notifications to Ceilometer, if Ceilometer is enabled. If you want to enable Ceilometer add the following lines to the ``local.conf`` file: .. sourcecode:: bash enable_plugin ceilometer https://opendev.org/openstack/ceilometer 4. Start DevStack: .. sourcecode:: console $ ./stack.sh 5. Once the previous step is finished Devstack will print a Horizon URL. Navigate to this URL and login with login "admin" and password from ``local.conf``. 6. Congratulations! You have OpenStack running in your VM and you're ready to launch VMs inside that VM. :) Managing sahara in DevStack --------------------------- If you install DevStack with sahara included you can rejoin screen with the ``screen -c stack-screenrc`` command and switch to the ``sahara`` tab. Here you can manage the sahara service as other OpenStack services. Sahara source code is located at ``$DEST/sahara`` which is usually ``/opt/stack/sahara``. .. _fusion-fixed-ip: Setting fixed IP address for VMware Fusion VM --------------------------------------------- 1. Open file ``/Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf`` 2. There is a block named "subnet". It might look like this: .. sourcecode:: text subnet 192.168.55.0 netmask 255.255.255.0 { range 192.168.55.128 192.168.55.254; 3. You need to pick an IP address outside of that range. For example - ``192.168.55.20`` 4. Copy VM MAC address from VM settings->Network->Advanced 5. Append the following block to file ``dhcpd.conf`` (don't forget to replace ``VM_HOSTNAME`` and ``VM_MAC_ADDRESS`` with actual values): .. sourcecode:: text host VM_HOSTNAME { hardware ethernet VM_MAC_ADDRESS; fixed-address 192.168.55.20; } 6. Now quit all the VMware Fusion applications and restart vmnet: .. sourcecode:: console $ sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --stop $ sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --start 7. Now start your VM; it should have new fixed IP address. sahara-12.0.0/doc/source/contributor/dashboard-dev-environment-guide.rst0000664000175000017500000000753013656752032026402 0ustar zuulzuul00000000000000Sahara UI Dev Environment Setup =============================== This page describes how to setup Horizon for developing Sahara by either installing it as part of DevStack with Sahara or installing it in an isolated environment and running from the command line. 
Install as a part of DevStack ----------------------------- See the `DevStack guide `_ for more information on installing and configuring DevStack with Sahara. Sahara UI can be installed as a DevStack plugin by adding the following line to your ``local.conf`` file .. sourcecode:: bash # Enable sahara-dashboard enable_plugin sahara-dashboard https://opendev.org/openstack/sahara-dashboard Isolated Dashboard for Sahara ----------------------------- These installation steps serve two purposes: 1. Setup a dev environment 2. Setup an isolated Dashboard for Sahara **Note** The host where you are going to perform installation has to be able to connect to all OpenStack endpoints. You can list all available endpoints using the following command: .. sourcecode:: console $ openstack endpoint list You can list the registered services with this command: .. sourcecode:: console $ openstack service list Sahara service should be present in keystone service list with service type *data-processing* 1. Install prerequisites .. sourcecode:: console $ sudo apt-get update $ sudo apt-get install git-core python-dev gcc python-setuptools \ python-virtualenv node-less libssl-dev libffi-dev libxslt-dev .. On Ubuntu 12.10 and higher you have to install the following lib as well: .. sourcecode:: console $ sudo apt-get install nodejs-legacy .. 2. Checkout Horizon from git and switch to your version of OpenStack Here is an example: .. sourcecode:: console $ git clone https://opendev.org/openstack/horizon/ {HORIZON_DIR} .. Then install the virtual environment: .. sourcecode:: console $ python {HORIZON_DIR}/tools/install_venv.py .. 3. Create a ``local_settings.py`` file .. sourcecode:: console $ cp {HORIZON_DIR}/openstack_dashboard/local/local_settings.py.example \ {HORIZON_DIR}/openstack_dashboard/local/local_settings.py .. 4. Modify ``{HORIZON_DIR}/openstack_dashboard/local/local_settings.py`` Set the proper values for host and url variables: .. sourcecode:: python OPENSTACK_HOST = "ip of your controller" .. If you wish to disable floating IP options during node group template creation, add the following parameter: .. sourcecode:: python SAHARA_FLOATING_IP_DISABLED = True .. 5. Clone sahara-dashboard repository and checkout the desired branch .. sourcecode:: console $ git clone https://opendev.org/openstack/sahara-dashboard/ \ {SAHARA_DASHBOARD_DIR} .. 6. Copy plugin-enabling files from sahara-dashboard repository to horizon .. sourcecode:: console $ cp -a {SAHARA_DASHBOARD_DIR}/sahara_dashboard/enabled/* {HORIZON_DIR}/openstack_dashboard/local/enabled/ .. 7. Install sahara-dashboard project into your horizon virtualenv in editable mode .. sourcecode:: console $ . {HORIZON_DIR}/.venv/bin/activate $ pip install -e {SAHARA_DASHBOARD_DIR} .. 8. Start Horizon .. sourcecode:: console $ . {HORIZON_DIR}/.venv/bin/activate $ python {HORIZON_DIR}/manage.py runserver 0.0.0.0:8080 .. This will start Horizon in debug mode. That means the logs will be written to console and if any exceptions happen, you will see the stack-trace rendered as a web-page. Debug mode can be disabled by changing ``DEBUG=True`` to ``False`` in ``local_settings.py``. In that case Horizon should be started slightly differently, otherwise it will not serve static files: .. sourcecode:: console $ . {HORIZON_DIR}/.venv/bin/activate $ python {HORIZON_DIR}/manage.py runserver --insecure 0.0.0.0:8080 .. .. note:: It is not recommended to use Horizon in this mode for production. 
sahara-12.0.0/doc/source/contributor/apiv2.rst0000664000175000017500000001001113656752032021247 0ustar zuulzuul00000000000000API Version 2 Development ========================= The sahara project is currently in the process of creating a new RESTful application programming interface (API). This interface is by-default enabled, although it remains experimental. This document defines the steps necessary to enable and communicate with the new API. This API has a few fundamental changes from the previous APIs and they should be noted before proceeding with development work. .. warning:: This API is currently marked as experimental. It is not supported by the sahara python client. These instructions are included purely for developers who wish to help participate in the development effort. Enabling the experimental API ----------------------------- There are a few changes to the WSGI pipeline that must be made to enable the new v2 API. These changes will leave the 1.0 and 1.1 API versions in place and will not adjust their communication parameters. To begin, uncomment, or add, the following sections in your api-paste.ini file: .. sourcecode:: ini [app:sahara_apiv2] paste.app_factory = sahara.api.middleware.sahara_middleware:RouterV2.factory [filter:auth_validator_v2] paste.filter_factory = sahara.api.middleware.auth_valid:AuthValidatorV2.factory These lines define a new authentication filter for the v2 API, and define the application that will handle the new calls. With these new entries in the paste configuration, we can now enable them with the following changes to the api-paste.ini file: .. sourcecode:: ini [pipeline:sahara] pipeline = cors request_id acl auth_validator_v2 sahara_api [composite:sahara_api] use = egg:Paste#urlmap /: sahara_apiv2 There are 2 significant changes occurring here; changing the authentication validator in the pipeline, and changing the root "/" application to the new v2 handler. At this point the sahara API server should be configured to accept requests on the new v2 endpoints. Communicating with the v2 API ----------------------------- The v2 API makes at least one major change from the previous versions, removing the OpenStack project identifier from the URL. Now users of the API do not provide their project ID explictly; instead we fully trust keystonemiddeware to provide it in the WSGI environment based on the given user token. For example, in previous versions of the API, a call to get the list of clusters for project "12345678-1234-1234-1234-123456789ABC" would have been made as follows:: GET /v1.1/12345678-1234-1234-1234-123456789ABC/clusters X-Auth-Token: {valid auth token} This call would now be made to the following URL:: GET /v2/clusters X-Auth-Token: {valid auth token} Using a tool like `HTTPie `_, the same request could be made like this:: $ httpie http://{sahara service ip:port}/v2/clusters \ X-Auth-Token:{valid auth token} Following the implementation progress ------------------------------------- As the creation of this API will be under regular change until it moves out of the experimental phase, a wiki page has been established to help track the progress. https://wiki.openstack.org/wiki/Sahara/api-v2 This page will help to coordinate the various reviews, specs, and work items that are a continuing facet of this work. 
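For convenience, the ``/v2/clusters`` request shown above can also be scripted. The following is only a minimal sketch using the ``requests`` library; the endpoint and token values are placeholders, exactly as in the earlier examples:

.. sourcecode:: python

    import requests

    # Placeholders, as in the examples above; substitute real values.
    SAHARA_URL = "http://{sahara service ip:port}"
    TOKEN = "{valid auth token}"

    # The v2 API derives the project from the token, so no project ID
    # appears in the URL path.
    response = requests.get(
        SAHARA_URL + "/v2/clusters",
        headers={"X-Auth-Token": TOKEN, "Accept": "application/json"},
    )
    print(response.json())
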
The API service layer --------------------- When contributing to the version 2 API, it will be necessary to add code that modifies the data and behavior of HTTP calls as they are sent to and from the processing engine and data abstraction layers. Most frequently in the sahara codebase, these interactions are handled in the modules of the ``sahara.service.api`` package. This package contains code for all versions of the API and follows a namespace mapping that is similar to the routing functions of ``sahara.api`` Although these modules are not the definitive end of all answers to API related code questions, they are a solid starting point when examining the extent of new work. Furthermore, they serve as a central point to begin API debugging efforts when the need arises. sahara-12.0.0/doc/source/admin/0000775000175000017500000000000013656752227016227 5ustar zuulzuul00000000000000sahara-12.0.0/doc/source/admin/upgrade-guide.rst0000664000175000017500000001425613656752032021505 0ustar zuulzuul00000000000000Sahara Upgrade Guide ==================== This page contains details about upgrading sahara between releases such as configuration file updates, database migrations, and architectural changes. Icehouse -> Juno ---------------- Main binary renamed to sahara-all +++++++++++++++++++++++++++++++++ The All-In-One sahara binary has been renamed from ``sahara-api`` to ``sahara-all``. The new name should be used in all cases where the All-In-One sahara is desired. Authentication middleware changes +++++++++++++++++++++++++++++++++ The custom auth_token middleware has been deprecated in favor of the keystone middleware. This change requires an update to the sahara configuration file. To update your configuration file you should replace the following parameters from the ``[DEFAULT]`` section with the new parameters in the ``[keystone_authtoken]`` section: +-----------------------+--------------------+ | Old parameter name | New parameter name | +=======================+====================+ | os_admin_username | admin_user | +-----------------------+--------------------+ | os_admin_password | admin_password | +-----------------------+--------------------+ | os_admin_tenant_name | admin_tenant_name | +-----------------------+--------------------+ Additionally, the parameters ``os_auth_protocol``, ``os_auth_host``, and ``os_auth_port`` have been combined to create the ``auth_uri`` and ``identity_uri`` parameters. These new parameters should be full URIs to the keystone public and admin endpoints, respectively. For more information about these configuration parameters please see the :doc:`../admin/configuration-guide`. Database package changes ++++++++++++++++++++++++ The oslo based code from sahara.openstack.common.db has been replaced by the usage of the oslo.db package. This change does not require any update to sahara's configuration file. Additionally, the usage of SQLite databases has been deprecated. Please use MySQL or PostgreSQL databases for sahara. SQLite has been deprecated because it does not, and is not going to, support the ``ALTER COLUMN`` and ``DROP COLUMN`` commands required for migrations between versions. For more information please see http://www.sqlite.org/omitted.html Sahara integration into OpenStack Dashboard +++++++++++++++++++++++++++++++++++++++++++ The sahara dashboard package has been deprecated in the Juno release. The functionality of the dashboard has been fully incorporated into the OpenStack Dashboard. 
The sahara interface is available under the "Project" -> "Data Processing" tab. The Data processing service endpoints must be registered in the Identity service catalog for the Dashboard to properly recognize and display those user interface components. For more details on this process please see :ref:`registering Sahara in installation guide `. The `sahara-dashboard `_ project is now used solely to host sahara user interface integration tests. Virtual machine user name changes +++++++++++++++++++++++++++++++++ The HEAT infrastructure engine has been updated to use the same rules for instance user names as the direct engine. In previous releases the user name for instances created by sahara using HEAT was always 'ec2-user'. As of Juno, the user name is taken from the image registry as described in the :doc:`../user/registering-image` document. This change breaks backward compatibility for clusters created using the HEAT infrastructure engine prior to the Juno release. Clusters will continue to operate, but we do not recommended using the scaling operations with them. Anti affinity implementation changed ++++++++++++++++++++++++++++++++++++ Starting with the Juno release the anti affinity feature is implemented using server groups. From the user perspective there will be no noticeable changes with this feature. Internally this change has introduced the following behavior: 1) Server group objects will be created for any clusters with anti affinity enabled. 2) Affected instances on the same host will not be allowed even if they do not have common processes. Prior to Juno, instances with differing processes were allowed on the same host. The new implementation guarantees that all affected instances will be on different hosts regardless of their processes. The new anti affinity implementation will only be applied for new clusters. Clusters created with previous versions will continue to operate under the older implementation, this applies to scaling operations on these clusters as well. Juno -> Kilo ------------ Sahara requires policy configuration ++++++++++++++++++++++++++++++++++++ Sahara now requires a policy configuration file. The ``policy.json`` file should be placed in the same directory as the sahara configuration file or specified using the ``policy_file`` parameter. For more details about the policy file please see the :ref:`policy section in the configuration guide `. Kilo -> Liberty --------------- Direct engine deprecation +++++++++++++++++++++++++ In the Liberty release the direct infrastructure engine has been deprecated and the heat infrastructure engine is now default. This means, that it is preferable to use heat engine instead now. In the Liberty release you can continue to operate clusters with the direct engine (create, delete, scale). Using heat engine only the delete operation is available on clusters that were created by the direct engine. After the Liberty release the direct engine will be removed, this means that you will only be able to delete clusters created with the direct engine. Policy namespace changed (policy.json) ++++++++++++++++++++++++++++++++++++++ The "data-processing:" namespace has been added to the beginning of the all Sahara's policy based actions, so, you need to update the policy.json file by prepending all actions with "data-processing:". Liberty -> Mitaka ----------------- Direct engine is removed. Mitaka -> Newton ---------------- Sahara CLI command is deprecated, please use OpenStack Client. .. 
note:: Since the Mitaka release sahara actively uses release notes, so you can see all required upgrade actions here: https://docs.openstack.org/releasenotes/sahara/ sahara-12.0.0/doc/source/admin/configuration-guide.rst0000664000175000017500000001527513656752032022725 0ustar zuulzuul00000000000000Sahara Configuration Guide ========================== This guide covers the steps for a basic configuration of sahara. It will help you configure the service in the simplest manner. Basic configuration ------------------- A full configuration file showing all possible configuration options and their defaults can be generated with the following command: .. sourcecode:: console $ tox -e genconfig Running this command will create a file named ``sahara.conf.sample`` in the ``etc/sahara`` directory of the project. After creating a configuration file by either generating one or starting with an empty file, edit the ``connection`` parameter in the ``[database]`` section. The URL provided here should point to an empty database. For example, the connection string for a MySQL database will be: .. sourcecode:: cfg connection=mysql+pymysql://username:password@host:port/database Next you will configure the Identity service parameters in the ``[keystone_authtoken]`` section. The ``www_authenticate_uri`` parameter should point to the public Identity API endpoint. The ``auth_url`` should point to the internal Identity API endpoint. For example: .. sourcecode:: cfg www_authenticate_uri=http://127.0.0.1:5000/v3/ auth_url=http://127.0.0.1:5000/v3/ Specify the ``username``, ``user_domain_name``, ``password``, ``project_name``, and ``project_domain_name``. These parameters must specify an Identity user who has the ``admin`` role in the given project. These credentials allow sahara to authenticate and authorize its users. Next you will configure the default Networking service. Sahara uses neutron for networking; the relevant options are described in the Networking configuration section below. With these parameters set, sahara is ready to run. By default sahara's log level is set to INFO. If you wish to increase the logging levels for troubleshooting, set ``debug`` to ``true`` in the ``[DEFAULT]`` section of the configuration file. Networking configuration ------------------------ By default sahara is configured to use neutron. Additionally, if the cluster supports network namespaces the ``use_namespaces`` property can be used to enable their usage. .. sourcecode:: cfg [DEFAULT] use_namespaces=True .. note:: If a user other than ``root`` will be running the Sahara server instance and namespaces are used, some additional configuration is required, please see :ref:`non-root-users` for more information. .. _floating_ip_management: Floating IP management ++++++++++++++++++++++ During cluster setup sahara must access instances through a secure shell (SSH). To establish this connection it may use either the fixed or floating IP address of an instance. By default sahara is configured to use floating IP addresses for access. This is controlled by the ``use_floating_ips`` configuration parameter. With this setup the user has two options for ensuring that the instances in the node group templates that require floating IPs gain a floating IP address: * The user may specify a floating IP address pool for each node group that requires floating IPs directly. Starting with Newton, changes were made to allow the coexistence of clusters using floating IPs and clusters using fixed IPs. 
If ``use_floating_ips`` is True it means that the floating IPs can be used by Sahara to spawn clusters. But, differently from previous versions, this does not mean that all instances in the cluster must have floating IPs and that all clusters must use floating IPs. It is possible in a single Sahara deploy to have clusters setup using fixed IPs, clusters using floating IPs and cluster that use both. If not using floating IP addresses (``use_floating_ips=False``) sahara will use fixed IP addresses for instance management. When using neutron for the Networking service the user will be able to choose the fixed IP network for all instances in a cluster. .. _notification-configuration: Notifications configuration --------------------------- Sahara can be configured to send notifications to the OpenStack Telemetry module. To enable this functionality the following parameter ``enable`` should be set in the ``[oslo_messaging_notifications]`` section of the configuration file: .. sourcecode:: cfg [oslo_messaging_notifications] enable = true And the following parameter ``driver`` should be set in the ``[oslo_messaging_notifications]`` section of the configuration file: .. sourcecode:: cfg [oslo_messaging_notifications] driver = messaging By default sahara is configured to use RabbitMQ as its message broker. If you are using RabbitMQ as the message broker, then you should set the following parameter in the ``[DEFAULT]`` section: .. sourcecode:: cfg rpc_backend = rabbit You may also need to specify the connection parameters for your RabbitMQ installation. The following example shows the default values in the ``[oslo_messaging_rabbit]`` section which may need adjustment: .. sourcecode:: cfg rabbit_host=localhost rabbit_port=5672 rabbit_hosts=$rabbit_host:$rabbit_port rabbit_userid=guest rabbit_password=guest rabbit_virtual_host=/ .. .. _orchestration-configuration: Orchestration configuration --------------------------- By default sahara is configured to use the heat engine for instance creation. The heat engine uses the OpenStack Orchestration service to provision instances. This engine makes calls directly to the services required for instance provisioning. .. _policy-configuration-label: Policy configuration -------------------- Sahara's public API calls may be restricted to certain sets of users by using a policy configuration file. The location of the policy file(s) is controlled by the ``policy_file`` and ``policy_dirs`` parameters in the ``[oslo_policy]`` section. By default sahara will search for a ``policy.json`` file in the same directory as the ``sahara.conf`` configuration file. Examples ++++++++ Example 1. Allow all method to all users (default policy). .. sourcecode:: json { "default": "" } Example 2. Disallow image registry manipulations to non-admin users. .. sourcecode:: json { "default": "", "data-processing:images:register": "role:admin", "data-processing:images:unregister": "role:admin", "data-processing:images:add_tags": "role:admin", "data-processing:images:remove_tags": "role:admin" } API configuration ----------------- Sahara uses the ``api-paste.ini`` file to configure the data processing API service. For middleware injection sahara uses pastedeploy library. The location of the api-paste file is controlled by the ``api_paste_config`` parameter in the ``[default]`` section. By default sahara will search for a ``api-paste.ini`` file in the same directory as the configuration file. 
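As an illustration only, a trimmed ``api-paste.ini`` might look like the fragment below; it simply combines the pipeline and v2 application sections that are described in the API v2 developer documentation, and the file shipped with your installation may differ:

.. sourcecode:: ini

    [pipeline:sahara]
    pipeline = cors request_id acl auth_validator_v2 sahara_api

    [composite:sahara_api]
    use = egg:Paste#urlmap
    /: sahara_apiv2

    [app:sahara_apiv2]
    paste.app_factory = sahara.api.middleware.sahara_middleware:RouterV2.factory

    [filter:auth_validator_v2]
    paste.filter_factory = sahara.api.middleware.auth_valid:AuthValidatorV2.factory
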
sahara-12.0.0/doc/source/admin/index.rst0000664000175000017500000000025413656752032020063 0ustar zuulzuul00000000000000====================== Operator Documentation ====================== .. toctree:: :maxdepth: 2 configuration-guide advanced-configuration-guide upgrade-guide sahara-12.0.0/doc/source/admin/advanced-configuration-guide.rst0000664000175000017500000006306013656752032024465 0ustar zuulzuul00000000000000Sahara Advanced Configuration Guide =================================== This guide addresses specific aspects of Sahara configuration that pertain to advanced usage. It is divided into sections about various features that can be utilized, and their related configurations. .. _custom_network_topologies: Custom network topologies ------------------------- Sahara accesses instances at several stages of cluster spawning through SSH and HTTP. Floating IPs and network namespaces will be automatically used for access when present. When floating IPs are not assigned to instances and namespaces are not being used, sahara will need an alternative method to reach them. The ``proxy_command`` parameter of the configuration file can be used to give sahara a command to access instances. This command is run on the sahara host and must open a netcat socket to the instance destination port. The ``{host}`` and ``{port}`` keywords should be used to describe the destination, they will be substituted at runtime. Other keywords that can be used are: ``{tenant_id}``, ``{network_id}`` and ``{router_id}``. Additionally, if ``proxy_command_use_internal_ip`` is set to ``True``, then the internal IP will be substituted for ``{host}`` in the command. Otherwise (if ``False``, by default) the management IP will be used: this corresponds to floating IP if present in the relevant node group, else the internal IP. The option is ignored if ``proxy_command`` is not also set. For example, the following parameter in the sahara configuration file would be used if instances are accessed through a relay machine: .. sourcecode:: cfg [DEFAULT] proxy_command='ssh relay-machine-{tenant_id} nc {host} {port}' Whereas the following shows an example of accessing instances though a custom network namespace: .. sourcecode:: cfg [DEFAULT] proxy_command='ip netns exec ns_for_{network_id} nc {host} {port}' .. _dns_hostname_resolution: DNS Hostname Resolution ----------------------- Sahara can resolve hostnames of cluster instances by using DNS. For this Sahara uses Designate. With this feature, for each instance of the cluster Sahara will create two ``A`` records (for internal and external ips) under one hostname and one ``PTR`` record. Also all links in the Sahara dashboard will be displayed as hostnames instead of just ip addresses. You should configure DNS server with Designate. Designate service should be properly installed and registered in Keystone catalog. The detailed instructions about Designate configuration can be found here: :designate-doc:`Designate manual installation ` and here: :neutron-doc:`Configuring OpenStack Networking with Designate `. Also if you use devstack you can just enable the :designate-doc:`Designate devstack plugin `. When Designate is configured you should create domain(s) for hostname resolution. This can be done by using the Designate dashboard or by CLI. Also you have to create ``in-addr.arpa.`` domain for reverse hostname resolution because some plugins (e.g. ``HDP``) determine hostname by ip. Sahara also should be properly configured. In ``sahara.conf`` you must specify two config properties: .. 
sourcecode:: cfg [DEFAULT] # Use Designate for internal and external hostnames resolution: use_designate=true # IP addresses of Designate nameservers: nameservers=1.1.1.1,2.2.2.2 An OpenStack operator should properly configure the network. It must enable DHCP and specify DNS server ip addresses (e.g. 1.1.1.1 and 2.2.2.2) in ``DNS Name Servers`` field in the ``Subnet Details``. If the subnet already exists and changing it or creating new one is impossible then Sahara will manually change ``/etc/resolv.conf`` file on every instance of the cluster (if ``nameservers`` list has been specified in ``sahara.conf``). In this case, though, Sahara cannot guarantee that these changes will not be overwritten by DHCP or other services of the existing network. Sahara has a health check for track this situation (and if it occurs the health status will be red). In order to resolve hostnames from your local machine you should properly change your ``/etc/resolv.conf`` file by adding appropriate ip addresses of DNS servers (e.g. 1.1.1.1 and 2.2.2.2). Also the VMs with DNS servers should be available from your local machine. .. _data_locality_configuration: Data-locality configuration --------------------------- Hadoop provides the data-locality feature to enable task tracker and data nodes the capability of spawning on the same rack, Compute node, or virtual machine. Sahara exposes this functionality to the user through a few configuration parameters and user defined topology files. To enable data-locality, set the ``enable_data_locality`` parameter to ``true`` in the sahara configuration file .. sourcecode:: cfg [DEFAULT] enable_data_locality=true With data locality enabled, you must now specify the topology files for the Compute and Object Storage services. These files are specified in the sahara configuration file as follows: .. sourcecode:: cfg [DEFAULT] compute_topology_file=/etc/sahara/compute.topology swift_topology_file=/etc/sahara/swift.topology The ``compute_topology_file`` should contain mappings between Compute nodes and racks in the following format: .. sourcecode:: cfg compute1 /rack1 compute2 /rack2 compute3 /rack2 Note that the Compute node names must be exactly the same as configured in OpenStack (``host`` column in admin list for instances). The ``swift_topology_file`` should contain mappings between Object Storage nodes and racks in the following format: .. sourcecode:: cfg node1 /rack1 node2 /rack2 node3 /rack2 Note that the Object Storage node names must be exactly the same as configured in the object ring. Also, you should ensure that instances with the task tracker process have direct access to the Object Storage nodes. Hadoop versions after 1.2.0 support four-layer topology (for more detail please see `HADOOP-8468 JIRA issue`_). To enable this feature set the ``enable_hypervisor_awareness`` parameter to ``true`` in the configuration file. In this case sahara will add the Compute node ID as a second level of topology for virtual machines. .. _HADOOP-8468 JIRA issue: https://issues.apache.org/jira/browse/HADOOP-8468 .. _distributed-mode-configuration: Distributed mode configuration ------------------------------ Sahara can be configured to run in a distributed mode that creates a separation between the API and engine processes. This allows the API process to remain relatively free to handle requests while offloading intensive tasks to the engine processes. The ``sahara-api`` application works as a front-end and serves user requests. 
It offloads 'heavy' tasks to the ``sahara-engine`` process via RPC mechanisms. While the ``sahara-engine`` process could be loaded with tasks, ``sahara-api`` stays free and hence may quickly respond to user queries. If sahara runs on several hosts, the API requests could be balanced between several ``sahara-api`` hosts using a load balancer. It is not required to balance load between different ``sahara-engine`` hosts as this will be automatically done via the message broker. If a single host becomes unavailable, other hosts will continue serving user requests. Hence, a better scalability is achieved and some fault tolerance as well. Note that distributed mode is not a true high availability. While the failure of a single host does not affect the work of the others, all of the operations running on the failed host will stop. For example, if a cluster scaling is interrupted, the cluster will be stuck in a half-scaled state. The cluster might continue working, but it will be impossible to scale it further or run jobs on it via EDP. To run sahara in distributed mode pick several hosts on which you want to run sahara services and follow these steps: * On each host install and configure sahara using the `installation guide <../install/installation-guide.html>`_ except: * Do not run ``sahara-db-manage`` or launch sahara with ``sahara-all`` * Ensure that each configuration file provides a database connection string to a single database for all hosts. * Run ``sahara-db-manage`` as described in the installation guide, but only on a single (arbitrarily picked) host. * The ``sahara-api`` and ``sahara-engine`` processes use oslo.messaging to communicate with each other. You will need to configure it properly on each host (see below). * Run ``sahara-api`` and ``sahara-engine`` on the desired hosts. You may run both processes on the same or separate hosts as long as they are configured to use the same message broker and database. To configure ``oslo.messaging``, first you need to choose a message broker driver. The recommended driver is ``RabbitMQ``. For the ``RabbitMQ`` drivers please see the :ref:`notification-configuration` documentation for an explanation of common configuration options; the entire list of configuration options is found in the :oslo.messaging-doc:`oslo_messaging_rabbit documentation `. These options will also be present in the generated sample configuration file. For instructions on creating the configuration file please see the :doc:`configuration-guide`. .. _distributed-periodic-tasks: Distributed periodic tasks configuration ---------------------------------------- If sahara is configured to run in distributed mode (see :ref:`distributed-mode-configuration`), periodic tasks can also be launched in distributed mode. In this case tasks will be split across all ``sahara-engine`` processes. This will reduce overall load. Distributed periodic tasks are based on Hash Ring implementation and the Tooz library that provides group membership support for a set of backends. In order to use periodic tasks distribution, the following steps are required: * One of the :tooz-doc:`supported backends ` should be configured and started. * Backend URL should be set in the sahara configuration file with the ``periodic_coordinator_backend_url`` parameter. For example, if the ZooKeeper backend is being used: .. sourcecode:: cfg [DEFAULT] periodic_coordinator_backend_url=kazoo://IP:PORT * Tooz extras should be installed. When using Zookeeper as coordination backend, ``kazoo`` library should be installed. 
It can be done with pip: .. sourcecode:: console pip install tooz[zookeeper] * Periodic tasks can be performed in parallel. Number of threads to run periodic tasks on a single engine can be set with ``periodic_workers_number`` parameter (only 1 thread will be launched by default). Example: .. sourcecode:: cfg [DEFAULT] periodic_workers_number=2 * ``coordinator_heartbeat_interval`` can be set to change the interval between heartbeat execution (1 second by default). Heartbeats are needed to make sure that connection to the coordination backend is active. Example: .. sourcecode:: cfg [DEFAULT] coordinator_heartbeat_interval=2 * ``hash_ring_replicas_count`` can be set to change the number of replicas for each engine on a Hash Ring. Each replica is a point on a Hash Ring that belongs to a particular engine. A larger number of replicas leads to better task distribution across the set of engines. (40 by default). Example: .. sourcecode:: cfg [DEFAULT] hash_ring_replicas_count=100 .. _external_key_manager_usage: External key manager usage -------------------------- Sahara generates and stores several passwords during the course of operation. To harden sahara's usage of passwords it can be instructed to use an external key manager for storage and retrieval of these secrets. To enable this feature there must first be an OpenStack Key Manager service deployed within the stack. With a Key Manager service deployed on the stack, sahara must be configured to enable the external storage of secrets. Sahara uses the :castellan-doc:`castellan <>` library to interface with the OpenStack Key Manager service. This library provides configurable access to a key manager. To configure sahara to use barbican as the key manager, edit the sahara configuration file as follows: .. sourcecode:: cfg [DEFAULT] use_barbican_key_manager=true Enabling the ``use_barbican_key_manager`` option will configure castellan to use barbican as its key management implementation. By default it will attempt to find barbican in the Identity service's service catalog. For added control of the barbican server location, optional configuration values may be added to specify the URL for the barbican API server. .. sourcecode:: cfg [castellan] barbican_api_endpoint=http://{barbican controller IP:PORT}/ barbican_api_version=v1 The specific values for the barbican endpoint will be dictated by the IP address of the controller for your installation. With all of these values configured and the Key Manager service deployed, sahara will begin storing its secrets in the external manager. Indirect instance access through proxy nodes -------------------------------------------- .. warning:: The indirect VMs access feature is in alpha state. We do not recommend using it in a production environment. Sahara needs to access instances through SSH during cluster setup. This access can be obtained a number of different ways (see :ref:`floating_ip_management`,:ref:`custom_network_topologies`).Sometimes it is impossible to provide access to all nodes (because of limited numbers of floating IPs or security policies). In these cases access can be gained using other nodes of the cluster as proxy gateways. To enable this set ``is_proxy_gateway=true`` for the node group you want to use as proxy. Sahara will communicate with all other cluster instances through the instances of this node group. 
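For example, a node group template intended to act as the proxy gateway might contain a fragment like the following (the surrounding fields are abbreviated and illustrative; only ``is_proxy_gateway`` and ``floating_ip_pool`` are the options discussed here):

.. sourcecode:: json

    {
        "name": "proxy-gateway-ng",
        "flavor_id": "2",
        "node_processes": ["namenode"],
        "floating_ip_pool": "public",
        "is_proxy_gateway": true
    }
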
Note, if ``use_floating_ips=true`` and the cluster contains a node group with ``is_proxy_gateway=true``, the requirement to have ``floating_ip_pool`` specified is applied only to the proxy node group. Other instances will be accessed through proxy instances using the standard private network. Note, the Cloudera Hadoop plugin doesn't support access to Cloudera manager through a proxy node. This means that for CDH clusters only nodes with the Cloudera manager can be designated as proxy gateway nodes. Multi region deployment ----------------------- Sahara supports multi region deployment. To enable this option each instance of sahara should have the ``os_region_name=`` parameter set in the configuration file. The following example demonstrates configuring sahara to use the ``RegionOne`` region: .. sourcecode:: cfg [DEFAULT] os_region_name=RegionOne .. _non-root-users: Non-root users -------------- In cases where a proxy command is being used to access cluster instances (for example, when using namespaces or when specifying a custom proxy command), rootwrap functionality is provided to allow users other than ``root`` access to the needed operating system facilities. To use rootwrap the following configuration parameter is required to be set: .. sourcecode:: cfg [DEFAULT] use_rootwrap=true Assuming you elect to leverage the default rootwrap command (``sahara-rootwrap``), you will need to perform the following additional setup steps: * Copy the provided sudoers configuration file from the local project file ``etc/sudoers.d/sahara-rootwrap`` to the system specific location, usually ``/etc/sudoers.d``. This file is setup to allow a user named ``sahara`` access to the rootwrap script. It contains the following: .. sourcecode:: cfg sahara ALL = (root) NOPASSWD: /usr/bin/sahara-rootwrap /etc/sahara/rootwrap.conf * When using devstack to deploy sahara, please pay attention that you need to change user in script from ``sahara`` to ``stack``. * Copy the provided rootwrap configuration file from the local project file ``etc/sahara/rootwrap.conf`` to the system specific location, usually ``/etc/sahara``. This file contains the default configuration for rootwrap. * Copy the provided rootwrap filters file from the local project file ``etc/sahara/rootwrap.d/sahara.filters`` to the location specified in the rootwrap configuration file, usually ``/etc/sahara/rootwrap.d``. This file contains the filters that will allow the ``sahara`` user to access the ``ip netns exec``, ``nc``, and ``kill`` commands through the rootwrap (depending on ``proxy_command`` you may need to set additional filters). It should look similar to the followings: .. sourcecode:: cfg [Filters] ip: IpNetnsExecFilter, ip, root nc: CommandFilter, nc, root kill: CommandFilter, kill, root If you wish to use a rootwrap command other than ``sahara-rootwrap`` you can set the following parameter in your sahara configuration file: .. sourcecode:: cfg [DEFAULT] rootwrap_command='sudo sahara-rootwrap /etc/sahara/rootwrap.conf' For more information on rootwrap please refer to the `official Rootwrap documentation `_ Object Storage access using proxy users --------------------------------------- To improve security for clusters accessing files in Object Storage, sahara can be configured to use proxy users and delegated trusts for access. This behavior has been implemented to reduce the need for storing and distributing user credentials. The use of proxy users involves creating an Identity domain that will be designated as the home for these users. 
Proxy users will be created on demand by sahara and will only exist during a job execution which requires Object Storage access. The domain created for the proxy users must be backed by a driver that allows sahara's admin user to create new user accounts. This new domain should contain no roles, to limit the potential access of a proxy user. Once the domain has been created, sahara must be configured to use it by adding the domain name and any potential delegated roles that must be used for Object Storage access to the sahara configuration file. With the domain enabled in sahara, users will no longer be required to enter credentials for their data sources and job binaries referenced in Object Storage. Detailed instructions ^^^^^^^^^^^^^^^^^^^^^ First a domain must be created in the Identity service to hold proxy users created by sahara. This domain must have an identity backend driver that allows for sahara to create new users. The default SQL engine is sufficient but if your keystone identity is backed by LDAP or similar then domain specific configurations should be used to ensure sahara's access. Please see the :keystone-doc:`Keystone documentation ` for more information. With the domain created, sahara's configuration file should be updated to include the new domain name and any potential roles that will be needed. For this example let's assume that the name of the proxy domain is ``sahara_proxy`` and the roles needed by proxy users will be ``member`` and ``SwiftUser``. .. sourcecode:: cfg [DEFAULT] use_domain_for_proxy_users=true proxy_user_domain_name=sahara_proxy proxy_user_role_names=member,SwiftUser A note on the use of roles. In the context of the proxy user, any roles specified here are roles intended to be delegated to the proxy user from the user with access to Object Storage. More specifically, any roles that are required for Object Storage access by the project owning the object store must be delegated to the proxy user for authentication to be successful. Finally, the stack administrator must ensure that images registered with sahara have the latest version of the Hadoop swift filesystem plugin installed. The sources for this plugin can be found in the `sahara extra repository`_. For more information on images or swift integration see the sahara documentation sections :ref:`building-guest-images-label` and :ref:`swift-integration-label`. .. _Sahara extra repository: https://opendev.org/openstack/sahara-extra .. _volume_instance_locality_configuration: Volume instance locality configuration -------------------------------------- The Block Storage service provides the ability to define volume instance locality to ensure that instance volumes are created on the same host as the hypervisor. The ``InstanceLocalityFilter`` provides the mechanism for the selection of a storage provider located on the same physical host as an instance. To enable this functionality for instances of a specific node group, the ``volume_local_to_instance`` field in the node group template should be set to ``true`` and some extra configurations are needed: * The cinder-volume service should be launched on every physical host and at least one physical host should run both cinder-scheduler and cinder-volume services. * ``InstanceLocalityFilter`` should be added to the list of default filters (``scheduler_default_filters`` in cinder) for the Block Storage configuration. 
* The Extended Server Attributes extension needs to be active in the Compute service (this is true by default in nova), so that the ``OS-EXT-SRV-ATTR:host`` property is returned when requesting instance info. * The user making the call needs to have sufficient rights for the property to be returned by the Compute service. This can be done by: * by changing nova's ``policy.json`` to allow the user access to the ``extended_server_attributes`` option. * by designating an account with privileged rights in the cinder configuration: .. sourcecode:: cfg os_privileged_user_name = os_privileged_user_password = os_privileged_user_tenant = It should be noted that in a situation when the host has no space for volume creation, the created volume will have an ``Error`` state and can not be used. Autoconfiguration for templates ------------------------------- :doc:`configs-recommendations` NTP service configuration ------------------------- By default sahara will enable the NTP service on all cluster instances if the NTP package is included in the image (the sahara disk image builder will include NTP in all images it generates). The default NTP server will be ``pool.ntp.org``; this can be overridden using the ``default_ntp_server`` setting in the ``DEFAULT`` section of the sahara configuration file. If you are creating cluster templates using the sahara UI and would like to specify a different NTP server for a particular cluster template, use the ``URL of NTP server`` setting in the ``General Parameters`` section when you create the template. If you would like to disable NTP for a particular cluster template, deselect the ``Enable NTP service`` checkbox in the ``General Parameters`` section when you create the template. If you are creating clusters using the sahara CLI, you can specify another NTP server or disable NTP service using the examples below. If you want to enable configuring the NTP service, you should specify the following configs for the cluster: .. sourcecode:: json { "cluster_configs": { "general": { "URL of NTP server": "your_server.net" } } } If you want to disable configuring NTP service, you should specify following configs for the cluster: .. sourcecode:: json { "cluster_configs": { "general": { "Enable NTP service": false } } } CORS (Cross Origin Resource Sharing) Configuration -------------------------------------------------- Sahara provides direct API access to user-agents (browsers) via the HTTP CORS protocol. Detailed documentation, as well as troubleshooting examples, may be found in the :oslo.middleware-doc:`documentation of the oslo.db cross-project features `. To get started quickly, use the example configuration block below, replacing the :code:`allowed origin` field with the host(s) from which your API expects access. .. sourcecode:: cfg [cors] allowed_origin=https://we.example.com:443 max_age=3600 allow_credentials=true [cors.additional_domain_1] allowed_origin=https://additional_domain_1.example.com:443 [cors.additional_domain_2] allowed_origin=https://additional_domain_2.example.com:443 For more information on Cross Origin Resource Sharing, please review the `W3C CORS specification`_. .. _W3C CORS specification: http://www.w3.org/TR/cors/ Cleanup time for incomplete clusters ------------------------------------ Sahara provides maximal time (in hours) for clusters allowed to be in states other than "Active", "Deleting" or "Error". 
If a cluster is not in "Active", "Deleting" or "Error" state and last update of it was longer than ``cleanup_time_for_incomplete_clusters`` hours ago then it will be deleted automatically. You can enable this feature by adding appropriate config property in the ``DEFAULT`` section (by default it set up to ``0`` value which means that automatic clean up is disabled). For example, if you want cluster to be deleted after 3 hours if it didn't leave "Starting" state then you should specify: .. sourcecode:: cfg [DEFAULT] cleanup_time_for_incomplete_clusters = 3 Security Group Rules Configuration ---------------------------------- When auto_security_group is used, the amount of created security group rules may be bigger than the default values configured in ``neutron.conf``. Then the default limit should be raised up to some bigger value which is proportional to the number of cluster node groups. You can change it in ``neutron.conf`` file: .. sourcecode:: cfg [quotas] quota_security_group = 1000 quota_security_group_rule = 10000 Or you can execute openstack CLI command: .. sourcecode:: console openstack quota set --secgroups 1000 --secgroup-rules 10000 $PROJECT_ID sahara-12.0.0/doc/source/admin/configs-recommendations.rst0000664000175000017500000000360213656752032023571 0ustar zuulzuul00000000000000:orphan: Autoconfiguring templates ========================= During the Liberty development cycle sahara implemented a tool that recommends and applies configuration values for cluster templates and node group templates. These recommendations are based on the number of specific instances and on flavors of the cluster node groups. Currently the following plugins support this feature: * CDH; * Ambari; * Spark; * the Vanilla Apache Hadoop plugin. By default this feature is enabled for all cluster templates and node group templates. If you want to disable this feature for a particular cluster or node group template you should set the ``use_autoconfig`` field to ``false``. .. NOTE Also, if you manually set configs from the list below, the recommended configs will not be applied. The following describes the settings for which sahara can recommend autoconfiguration: The Cloudera, Spark and Vanilla Apache Hadoop plugin support configuring ``dfs.replication`` (``dfs_replication`` for Cloudera plugin) which is calculated as a minimum from the amount of ``datanode`` (``HDFS_DATANODE`` for Cloudera plugin) instances in the cluster and the default value for ``dfs.replication``. The Vanilla Apache Hadoop plugin and Cloudera plugin support autoconfiguration of basic YARN and MapReduce configs. These autoconfigurations are based on the following documentation: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.9.1/bk_installing_manually_book/content/rpm-chap1-11.html The Ambari plugin has its own strategies on configuration recommendations. You can choose one of ``ALWAYS_APPLY``, ``NEVER_APPLY``, and ``ONLY_STACK_DEFAULTS_APPLY``. By default the Ambari plugin follows the ``NEVER_APPLY`` strategy. You can get more information about strategies in Ambari's official documentation: https://cwiki.apache.org/confluence/display/AMBARI/Blueprints#Blueprints-ClusterCreationTemplateStructure sahara-12.0.0/doc/source/_templates/0000775000175000017500000000000013656752227017274 5ustar zuulzuul00000000000000sahara-12.0.0/doc/source/_templates/sidebarlinks.html0000664000175000017500000000051213656752032022624 0ustar zuulzuul00000000000000

Useful Links

{% if READTHEDOCS %} {% endif %} sahara-12.0.0/doc/source/configuration/0000775000175000017500000000000013656752227020006 5ustar zuulzuul00000000000000sahara-12.0.0/doc/source/configuration/sampleconfig.rst0000664000175000017500000000027113656752032023201 0ustar zuulzuul00000000000000Sample sahara.conf file ======================= This is an automatically generated sample of the sahara.conf file. .. literalinclude:: ../sample.config :language: ini :linenos: sahara-12.0.0/doc/source/configuration/descriptionconfig.rst0000664000175000017500000000034713656752032024247 0ustar zuulzuul00000000000000Configuration options ===================== This section provides a list of the configuration options that can be set in the sahara configuration file. .. show-options:: :config-file: tools/config/config-generator.sahara.conf sahara-12.0.0/doc/source/configuration/index.rst0000664000175000017500000000021513656752032021637 0ustar zuulzuul00000000000000======================= Configuration Reference ======================= .. toctree:: :maxdepth: 1 descriptionconfig sampleconfig sahara-12.0.0/doc/test/0000775000175000017500000000000013656752227014616 5ustar zuulzuul00000000000000sahara-12.0.0/doc/test/redirect-tests.txt0000664000175000017500000000166313656752032020320 0ustar zuulzuul00000000000000/sahara/pike/contributor/launchpad.html 301 /sahara/pike/contributor/project.html /sahara/queens/contributor/launchpad.html 301 /sahara/queens/contributor/project.html /sahara/latest/contributor/launchpad.html 301 /sahara/latest/contributor/project.html /sahara/latest/user/vanilla-imagebuilder.html 301 /sahara/latest/user/vanilla-plugin.html /sahara/latest/user/cdh-imagebuilder.html 301 /sahara/latest/user/cdh-plugin.html /sahara/latest/user/guest-requirements.html 301 /sahara/latest/user/building-guest-images.html /sahara/rocky/user/guest-requirements.html 301 /sahara/rocky/user/building-guest-images.html /sahara/latest/user/vanilla-plugin.html 301 /sahara-plugin-vanilla/latest/ /sahara/stein/user/storm-plugin.html 301 /sahara-plugin-storm/stein/ /sahara/latest/contributor/how-to-participate.html 301 /sahara/latest/contributor/contributing.html /sahara/latest/contributor/project.html 301 /sahara/latest/contributor/contributing.html sahara-12.0.0/requirements.txt0000664000175000017500000000312013656752032016344 0ustar zuulzuul00000000000000# The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. 
pbr!=2.1.0,>=2.0.0 # Apache-2.0 alembic>=0.8.10 # MIT botocore>=1.5.1 # Apache-2.0 castellan>=0.16.0 # Apache-2.0 eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT Flask>=1.0.2 # BSD iso8601>=0.1.11 # MIT Jinja2>=2.10 # BSD License (3 clause) jsonschema>=2.6.0 # MIT keystoneauth1>=3.4.0 # Apache-2.0 keystonemiddleware>=4.17.0 # Apache-2.0 microversion-parse>=0.2.1 # Apache-2.0 oslo.config>=5.2.0 # Apache-2.0 oslo.concurrency>=3.26.0 # Apache-2.0 oslo.context>=2.19.2 # Apache-2.0 oslo.db>=4.27.0 # Apache-2.0 oslo.i18n>=3.15.3 # Apache-2.0 oslo.log>=3.36.0 # Apache-2.0 oslo.messaging>=5.29.0 # Apache-2.0 oslo.middleware>=3.31.0 # Apache-2.0 oslo.policy>=1.30.0 # Apache-2.0 oslo.rootwrap>=5.8.0 # Apache-2.0 oslo.serialization!=2.19.1,>=2.18.0 # Apache-2.0 oslo.service!=1.28.1,>=1.24.0 # Apache-2.0 oslo.upgradecheck>=0.1.0 # Apache-2.0 oslo.utils>=3.33.0 # Apache-2.0 paramiko>=2.0.0 # LGPLv2.1+ requests>=2.14.2 # Apache-2.0 python-cinderclient!=4.0.0,>=3.3.0 # Apache-2.0 python-keystoneclient>=3.8.0 # Apache-2.0 python-manilaclient>=1.16.0 # Apache-2.0 python-novaclient>=9.1.0 # Apache-2.0 python-swiftclient>=3.2.0 # Apache-2.0 python-neutronclient>=6.7.0 # Apache-2.0 python-heatclient>=1.10.0 # Apache-2.0 python-glanceclient>=2.8.0 # Apache-2.0 six>=1.10.0 # MIT stevedore>=1.20.0 # Apache-2.0 SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10 # MIT tooz>=1.58.0 # Apache-2.0 WebOb>=1.7.1 # MIT sahara-12.0.0/setup.cfg0000664000175000017500000000553313656752227014721 0ustar zuulzuul00000000000000[metadata] name = sahara summary = Sahara project description-file = README.rst license = Apache Software License python-requires = >=3.6 classifiers = Programming Language :: Python Programming Language :: Python :: 3 Programming Language :: Python :: 3.6 Programming Language :: Python :: 3.7 Environment :: OpenStack Intended Audience :: Information Technology Intended Audience :: System Administrators License :: OSI Approved :: Apache Software License Operating System :: POSIX :: Linux author = OpenStack author-email = openstack-discuss@lists.openstack.org home-page = https://docs.openstack.org/sahara/latest/ [files] packages = sahara data_files = etc/sahara = etc/sahara/api-paste.ini etc/sahara/rootwrap.conf etc/sahara/rootwrap.d = etc/sahara/rootwrap.d/* [entry_points] console_scripts = sahara-all = sahara.cli.sahara_all:main sahara-api = sahara.cli.sahara_api:main sahara-engine = sahara.cli.sahara_engine:main sahara-db-manage = sahara.db.migration.cli:main sahara-rootwrap = oslo_rootwrap.cmd:main _sahara-subprocess = sahara.cli.sahara_subprocess:main sahara-templates = sahara.db.templates.cli:main sahara-image-pack = sahara.cli.image_pack.cli:main sahara-status = sahara.cli.sahara_status:main wsgi_scripts = sahara-wsgi-api = sahara.cli.sahara_api:setup_api sahara.cluster.plugins = fake = sahara.plugins.fake.plugin:FakePluginProvider sahara.data_source.types = hdfs = sahara.service.edp.data_sources.hdfs.implementation:HDFSType manila = sahara.service.edp.data_sources.manila.implementation:ManilaType maprfs = sahara.service.edp.data_sources.maprfs.implementation:MapRFSType swift = sahara.service.edp.data_sources.swift.implementation:SwiftType s3 = sahara.service.edp.data_sources.s3.implementation:S3Type sahara.job_binary.types = internal-db = sahara.service.edp.job_binaries.internal_db.implementation:InternalDBType manila = sahara.service.edp.job_binaries.manila.implementation:ManilaType swift = sahara.service.edp.job_binaries.swift.implementation:SwiftType s3 = 
sahara.service.edp.job_binaries.s3.implementation:S3Type sahara.infrastructure.engine = heat = sahara.service.heat.heat_engine:HeatEngine sahara.remote = ssh = sahara.utils.ssh_remote:SshRemoteDriver sahara.run.mode = all-in-one = sahara.service.ops:LocalOps distributed = sahara.service.ops:RemoteOps oslo.config.opts = sahara.config = sahara.config:list_opts oslo.config.opts.defaults = sahara.config = sahara.common.config:set_cors_middleware_defaults oslo.policy.policies = sahara = sahara.common.policies:list_rules [extract_messages] keywords = _ gettext ngettext l_ lazy_gettext mapping_file = babel.cfg output_file = sahara/locale/sahara.pot [compile_catalog] directory = sahara/locale domain = sahara [update_catalog] domain = sahara output_dir = sahara/locale input_file = sahara/locale/sahara.pot [egg_info] tag_build = tag_date = 0 sahara-12.0.0/tox.ini0000664000175000017500000001270213656752032014401 0ustar zuulzuul00000000000000[tox] envlist = py37,pep8,genpolicy minversion = 1.6 skipsdist = True # this allows tox to infer the base python from the environment name # and override any basepython configured in this file ignore_basepython_conflict = true [testenv] basepython = python3 usedevelop = True install_command = pip install {opts} {packages} setenv = VIRTUAL_ENV={envdir} DISCOVER_DIRECTORY=sahara/tests/unit deps = -c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/ussuri} -r{toxinidir}/requirements.txt -r{toxinidir}/test-requirements.txt commands = stestr run {posargs} passenv = http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY [testenv:cover] setenv = PACKAGE_NAME=sahara commands = {toxinidir}/tools/cover.sh {posargs} [testenv:debug-py36] basepython = python3.6 commands = oslo_debug_helper -t sahara/tests/unit {posargs} [testenv:debug-py37] basepython = python3.7 commands = oslo_debug_helper -t sahara/tests/unit {posargs} [testenv:pep8] deps = -c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/ussuri} -r{toxinidir}/requirements.txt -r{toxinidir}/test-requirements.txt -r{toxinidir}/doc/requirements.txt commands = flake8 {posargs} doc8 doc/source # Run bashate checks bash -c "find devstack -not -name \*.template -and -not -name README.rst -and -not -name \*.json -type f -print0 | xargs -0 bashate -v" # Run security linter bandit -c bandit.yaml -r sahara -n5 -p sahara_default -x tests [testenv:genpolicy] commands = oslopolicy-sample-generator --config-file tools/config/sahara-policy-generator.conf [testenv:venv] commands = {posargs} [testenv:images] sitepackages = True commands = {posargs} [testenv:docs] deps = -c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/ussuri} -r{toxinidir}/doc/requirements.txt commands = rm -rf doc/html doc/build rm -rf api-ref/build api-ref/html rm -rf doc/source/apidoc doc/source/api sphinx-build -W -b html doc/source doc/build/html sphinx-build -W -b html -d api-ref/build/doctrees api-ref/source api-ref/build/html whereto doc/source/_extra/.htaccess doc/test/redirect-tests.txt whitelist_externals = rm [testenv:api-ref] deps = -c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/ussuri} -r{toxinidir}/doc/requirements.txt install_command = pip install -U --force-reinstall {opts} {packages} commands = rm -rf api-ref/build api-ref/html sphinx-build -W -b html -d api-ref/build/doctrees api-ref/source api-ref/build/html whitelist_externals = rm [testenv:pylint] setenv = VIRTUAL_ENV={envdir} commands = bash tools/lintstack.sh [testenv:genconfig] 
commands = oslo-config-generator --config-file tools/config/config-generator.sahara.conf \ --output-file etc/sahara/sahara.conf.sample [testenv:releasenotes] deps = -c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/ussuri} -r{toxinidir}/doc/requirements.txt commands = rm -rf releasenotes/build releasenotes/html sphinx-build -a -E -W -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html whitelist_externals = rm [testenv:debug] # It runs tests from the specified dir (default is sahara/tests) # in interactive mode, so, you could use pbr for tests debug. # Example usage: tox -e debug -- -t sahara/tests/unit some.test.path # https://docs.openstack.org/oslotest/latest/features.html#debugging-with-oslo-debug-helper commands = oslo_debug_helper -t sahara/tests/unit {posargs} [testenv:bandit] deps = -r{toxinidir}/test-requirements.txt commands = bandit -c bandit.yaml -r sahara -n5 -p sahara_default -x tests [flake8] show-source = true builtins = _ exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,tools # [H904] Delay string interpolations at logging calls # [H106] Don't put vim configuration in source files # [H203] Use assertIs(Not)None to check for None. # [H204] Use assert(Not)Equal to check for equality # [H205] Use assert(Greater|Less)(Equal) for comparison enable-extensions=H904,H106,H203,H204,H205 # [E123] Closing bracket does not match indentation of opening bracket's line # [E226] Missing whitespace around arithmetic operator # [E402] Module level import not at top of file # [E731] Do not assign a lambda expression, use a def # [W503] Line break occurred before a binary operator # [W504] Line break occurred after a binary operator # [W605] Invalid escape sequence 'x' ignore=E123,E226,E402,E731,W503,W504,W605 [hacking] import_exceptions = sahara.i18n [flake8:local-plugins] extension = S361 = checks:import_db_only_in_conductor S362 = checks:hacking_no_author_attr S363 = checks:check_oslo_namespace_imports S364 = commit_message:OnceGitCheckCommitTitleBug S365 = commit_message:OnceGitCheckCommitTitleLength S368 = checks:dict_constructor_with_list_copy S373 = logging_checks:no_translate_logs S374 = logging_checks:accepted_log_levels S375 = checks:use_jsonutils S360 = checks:no_mutable_default_args paths = ./sahara/utils/hacking [testenv:bindep] # Do not install any requirements. We want this to be fast and work even if # system dependencies are missing, since it's used to tell you what system # dependencies are missing! This also means that bindep must be installed # separately, outside of the requirements files. 
deps = bindep commands = bindep test [testenv:lower-constraints] deps = -c{toxinidir}/lower-constraints.txt -r{toxinidir}/test-requirements.txt -r{toxinidir}/requirements.txt sahara-12.0.0/ChangeLog0000664000175000017500000046017113656752226014654 0ustar zuulzuul00000000000000CHANGES ======= 12.0.0 ------ * Monkey patch original current\_thread \_active * Imported Translations from Zanata * Update TOX\_CONSTRAINTS\_FILE for stable/ussuri * Update .gitreview for stable/ussuri 12.0.0.0rc1 ----------- * Ussuri contributor docs community goal * Update hacking for Python3 * Use unittest.mock instead of third party mock * Cleanup Python 2.7 support * (Temporarily) skip TestVerifications tests * Fix syntax error in image widths * [ussuri][goal] Drop python 2.7 support and testing * Migrate grenade jobs to py3 * Fix misspell word * fix invaild link of installation guide in Sahara UI User Guide * Switch to Ussuri jobs * grenade: start from train, disable heat integration tests * Python 3 fixes * Update master for stable/train 11.0.0.0rc1 ----------- * Update the constraints url * Add more cross-functional jobs (TripleO, OSA) * Fix unit tests: no more cinderclient v1 * Fixing broken links and removing outdated driver * Update api-ref location * Remove a monkey-patching workaround for python < 2.7.3 * Limit envlist to py37 for Python 3 Train goal * Imported Translations from Zanata * Imported Translations from Zanata * Bump the openstackdocstheme extension to 1.20 * devstack: do not use configure\_auth\_token\_middleware * Blacklist python-cinderclient 4.0.0 * Remove some files not worth maintaining * Update keystone\_authtoken config reference * Fix requirements (bandit, sphinx, jsonschema) and jobs * Update Python 3 test runtimes for Train * Add a required dep to fix the buildimages jobs * [Trivial fix]Remove unnecessary slash * doc: additional git.openstack.org->opendev.org replacement * Replace git.openstack.org URLs with opendev.org URLs * OpenDev Migration Patch * Dropping the py35 testing * Replace openstack.org git:// URLs with https:// * Imported Translations from Zanata * Update master for stable/stein 10.0.0 ------ * doc: refer to the split plugin documentation * Making Sahara Python 3 compatible * grenade: re-enable, really test rocky->master * Fix the lower-requirements job: libpq-dev, psycopg 2.7 * Add missing ws seperator between words * Use authorize instead of enforce for policies * Fixing policies inconsistencies * Add API v2 jobs (scenario, tempest); buildimages fixes * add python 3.7 unit test job * Adapt to the additional rules from pycodestyle 2.5.0 * Fixing NTP issues for CDH plugin * Adding spark build image job * Changing hdfs fs to hdfs dfs * Dynamically loading plugins * Add missing ws separator between words * Make sure that default\_ntp\_server option is exported * Fix version discovery for Python 3 10.0.0.0b1 ---------- * Prepare Sahara core for plugin split * Declare APIv2 stable and CURRENT * Give the illusion of microversion support * Some polish for APIv2 * API v2: fix "local variable 'c' referenced before assignment" * APIv2 - Fix 500 on malformed query string on * Enhance boot from volume * APIv2 - api-ref documentation for APIv2 * Deploying Sahara with unversioned endpoints * Fix validation of job binary with Python3 * Migrate away from oslo\_i18n.enable\_lazy() * APIv2 Changing return payload to project\_id * Fixing cluster scale * doc: Fix the snippet in "The Script Validator" section * String-related fixes for Python 3 * fixed word error * Add 
DEBIAN\_FRONTEND=noninteractive in front of apt-get install commands * Bump the version of hacking to 1.1.0, with few fixes * Update devel info: mailing list, meeting time * Update http link to https * Add python 3.6 unit test job * Add framework for sahara-status upgrade check * doc: restructure the image building documentation * Fixing image validation for Ambari 2.3 * Cleanup tox.ini constraint handling * Increase the startup time of ambari-server to 180s * Increment versioning with pbr instruction * Fix a typo on Storm plugin cluster info (Strom -> Storm) * sahara-image-pack: use curl for tarballs.openstack.org * sahara-image-pack: remove bashisms from shell scripts * adds unit test for ssh\_remote.replace\_remote\_line * Force the format of ssh key to PEM, at least for now * Add template param for ambari pkg install timeout * Use templates lower-constraints, update cover job * grenade: relevant fixes for master (sahara-api/apache) * doc: update distro information and cloud-init users * Fixed link for more information about Ambari images * Correct repo\_id\_map for hdp 2.5 * Make sahara-grenade job voting on the "gate" queue too * Import the legacy grenade sahara job * Correct Hbase ports in Ambari plugin * Fixing anti-affinity for Sahara * add python 3.6 unit test job * switch documentation job to new PTI * import zuul job settings from project-config * Imported Translations from Zanata * Update reno for stable/rocky 9.0.0.0rc1 ---------- * Imported Translations from Zanata * Adapt to Keystone changes: use member instead of Member * Add some S3 doc * Enable also ambari by default in devstack * Another small fix for cluster creation on APIv2 * S3 data source URL format change * Sets correct permission for /etc/hosts * Fixing cluster creation on APIv2 * Allow overriding of /etc/hosts entries 9.0.0.0b3 --------- * Enable mutable config in sahara * Adding Ambari 2.6 to image pack * Adding Storm 1.2.0 and 1.2.1 * Unversioned endpoint recommendation * api-ref: move to a v1.1 sub-folder * Trivial: Update Zuul Status Page to correct URL * Switch make\_json\_error back to being a function * Final fixup to APIv2 responses * Deprecate sahara-all * Switch hive\_enable\_db\_notification's default value * S3 data source * Switch the coverage tox target to stestr * Updating Spark versions * Fixing extjs check on cdh and mapr * Switch ostestr to stestr * Bump Flask version according requirements * Fix flask.request.content\_length is None * Use register\_error\_handler to register make\_json\_error * Boot from volume * Remove any reference to pre-built images * Updating plugins status for Rocky * Adding CDH 5.13 * Replace the deleted keypair in clusters for API v2 * Better default value for domain in swift config * Improve force delete * Updated oozie version * Fix the code repository for clone action * add release notes to readme.rst * doc: light cleanup of the ironic-integration page * doc: external link helper for other projects' doc * Update the command to change the hostname 9.0.0.0b2 --------- * fix tox python3 overrides * Check node processes earlier * [APIv2]Consolidate cluster creation endpoints * Add support to deploy hadoop 2.7.5 * Restore Ambari with newer JDK security policies * Fixing java version for Ambari * Switch from sahara-file to tarballs.o.o for artifacts * Deploy using wsgi by default * Fix: really install extjs in CDH images at build time * doc: add the redirect for a file recently renamed * Fix the detection of scala version (now https) * Fix the installation of Swift Hadoop 
connector (Ambari) * Fix the installation of the Swift Hadoop connector (CDH) * fix a typo: s/avaliable/available * Remove the (now obsolete) pip-missing-reqs tox target * Replace Chinese punctuation with English punctuation * Fix the openstack endpoint create failed * Fix: always use kafka 2.2 for CDH 5.11 * Adding Ambari missing versions 9.0.0.0b1 --------- * Extend config-grabbing magic to new oslo.config * Adding ntpdate and Scala to mapr image * Change doc registering-image image message * Remove step upload package to oozie/sharelib * uncap eventlet * Fix MapR dependency on mysql on RHEL * correct lower-constraints * Support of HDP 2.6 * Follow the new PTI for document build * Updated from global requirements * add lower-constraints job * File copy timesout when file is too big * Preload soci-mysql and soci on RHEL7 images * Migration to Storyboard * Updated from global requirements * Updated from global requirements * Updated from global requirements * Adding support for RHEL images * Remove unused module * change python-libguestfs to python-guestfs for ubuntu * Updated from global requirements * Imported Translations from Zanata * Updated from global requirements * Update mysql connection in configuration-guide.rst * Imported Translations from Zanata * Fix Spark EDP job failed in vanilla 2.8.2 * Fix documents title format error * Migrate the artifact link to sahara-extra, use https * Updated from global requirements * Updated from global requirements * Adding Ambari 2.4.2.0 to image gen * Native Zuul v3 jobs (almost all of them) * Change some parameters to be required in api-ref * Fix the parameter in api-ref * Imported Translations from Zanata * Update reno for stable/queens 8.0.0 ----- * Small doc fixes found during doc day * Fixes for the dashboard guide (title, formatting) * Adding Storm doc * Switch sahara swift to work with keystone v3 * Replace chinese quotes * EDP doc: de-emphasize job binary internals (not in v2) * Enable hacking-extensions H204, H205 * Adding sahara-policy-generator.conf * use . 
instead of source 8.0.0.0b3 --------- * Add support to deploy Hadoop 2.8.2 * Tweak Sahara to make version discovery easier * Various server-side fixes to APIv2 * Fix Flask error\_handler\_spec * Dynamically add python version into launch\_command * Updated from global requirements * Remove use of unsupported TEMPEST\_SERVICES variable * Replace assertFalse/assertTrue(a in b) * Stop abusing [keystone\_authtoken] * Update url links in doc files of Sahara * Updated from global requirements * Changing expected value to job\_template\_id * Updated from global requirements * Updated from global requirements * add bugs link in README.rst * Image generation for MapR * Force deletion of clusters * Rename 'SAHARA\_AUTO\_IP\_ALLOCATION\_ENABLED' config parameter * Use default log levels overriding Sahara-specific only * Decommission of a specific node * Updated from global requirements * RHEL: fix distro detection and EPEL configuration * S3 job binary and binary retriever * Updated from global requirements * Updated from global requirements * Updated from global requirements * [APIv2]Enable APIv2, experimentally 8.0.0.0b2 --------- * Fix scaling validation error * [APIv2]Add ability to export templates to APIv2 * Upgrading Spark to version 2.2 * Updated from global requirements * Updated from global requirements * Remove extra "$" in sahara-on-ironic.rst * [APIv2]Nix custom OpenStack-Project-ID header * Revise the installation guide * [APIv2] Remove job-binary-internal endpoint * Updated from global requirements * Update designate manual installation URL * Update Anti-affinity Feature description * Remove use\_neutron from config * Add kolla installation guide * Update hadoop's distcp command URL * Updated from global requirements * Remove setting of version/release from releasenotes * Updated from global requirements * Update RDO URL * Updated from global requirements * Add ZooKeeper support in Vanilla cluster * Incorrect indent Sahara Installation Guide in sahara * Updated from global requirements * Spark History Server in Vanilla auto sec group * Image generation for CDH 5.11.0 * Use non corrupted libext from image * Policy in code for Sahara 8.0.0.0b1 --------- * Image generation for CDH 5.9.0 * TrivialFix: Redundant alias in import statement * Add Cluster validation before scaling * Image generation for Ambari Plugin * Add NGT resources validation before scaling cluster * Fix typo in advanced-configuration-guide.rst and manager.py * Updated from global requirements * devstack plugin: set two parameters required by Keystone v3 * Allow cluster create with no security groups * Fix Storm 1.1.0 EDP configs * Remove SCREEN\_LOGDIR from devstack setting * Updated from global requirements * Add default configuration files to data\_files * Updated from global requirements * Document glance and manila options in the sample config file * Updated from global requirements * architecture: remove the references to Trove and Zaqar * Re-add .testr.conf, required by the cover test * Updated from global requirements * [ut] replace .testr.conf with .stestr.conf * Fix instances schema doesn't sync with nova instance * fix duplicated ntp configuration * Auth parameters: accept and set few default values * grenade: do not use the removed glance v1 API * Updated from global requirements * Add docs about template portability * Updated from global requirements * Add export of cluster templates * Optimize model relationships (avoid joins, prefer subquery) * writing convention: do not use “-y” for package install * Fix to 
use "." to source script files * Replace http with https for doc links in sahara * Updated from global requirements * Updated from global requirements * Fix CDH default templates * Fix invalid JSON for Vanilla default cluster template * doc: point to the main git repository and update links * Updated from global requirements * Updated from global requirements * Add CDH validation for attached volume size * doc: generate the list of configuration option * Cleanup the last warning on doc building (html and man) * bindep: depends on gettext (release notes translations) * Imported Translations from Zanata * Update reno for stable/pike 7.0.0.0rc1 ---------- * Adding reno regarding ironic support * Fully switch to keystone authtoken parameters * Fix the broken links * Fix unimplemented abstractmethod * Updated from global requirements * enable heat during devstack installation * Better keystonemiddleware log level * Restructure the documentation according the new spec * Deprecate Spark 1.3.1 * Fix TypeError when get resource list * Fix UnicodeEncoding Error * Enable some off-by-default checks * Fix error during node group template update 7.0.0.0b3 --------- * Updated from global requirements * Support of CDH 5.11.0 * Fix export of node group templates * Bad request exception for unsupported content type * Updated from global requirements * Updated from global requirements * Updating default templates * Updated from global requirements * Image generation for CDH Plugin * Updated from global requirements * Updated from global requirements * Update the documentation link for doc migration * Globalize regex objects * Update Documention link * Updated from global requirements * Enable warnings as errors for doc building * Regenerate sample.config, included in the doc * Fixes the "tox -e docs" warnings * Add export of node group templates * Enable H904 check * Allow proxy\_command to optionally use internal IP * doc: update the configuration of the theme * Update log translation hacking rule * Updated from global requirements * Fix direct patches of methods in test\_versionhandler.py * Add test to sahara/plugins/vanilla/hadoop2/scaling.py * Add test to sahara/plugins/vanilla/hadoop2/run\_scripts.py * doc: switch to openstackdocstheme and add metadata * Fixes a typo in quickstart.rst * Updated from global requirements * Fix wrong patch in unit tests * Updated from global requirements * remove workaround in grenade * Add test to sahara/plugins/vanilla/hadoop2/starting\_scripts.py * Add test to edp\_engine.py * Update dashboard doc * Add test to sahara/plugins/vanilla/hadoop2/oozie\_helper.py * Add test to sahara/plugins/vanilla/hadoop2/config\_helper.py * Add test to sahara/plugins/vanilla/v2\_7\_1/config\_helper.py * Updated from global requirements * Updated from global requirements * Add test to sahara/plugins/vanilla/v2\_7\_1/versionhandler.py * Fixed grenade job * Remove deprecated oslo\_messaging.get\_transport 7.0.0.0b2 --------- * Updated from global requirements * Updated from global requirements * Updated from global requirements * Use neutronclient for all network operations * Changing reconcile to test\_only * Raise better exception for Spark master validation * Support cinder API version 3 * Updated from global requirements * Remove ancient mailmap * Fix the tox environment used for image building * Trivial fix typos in documents * Basic script for pack-based build image * Remove usage of parameter enforce\_type * [APIv2] Refactor job cancel operation * [APIv2] Refactor job refresh status * 
Updated from global requirements * \_get\_os\_distrib() can return 'redhat', add mapping (2) * [APIv2] Rename oozie\_job\_id * Updated from global requirements * Fixing env vars within bash scripts for image gen * added timeout function in health check function * Remove log translations * Updated from global requirements * Fix doc generation for Python3 * Refactor unit test of cdh plugin * Refactor rest of CDH plugin code * refactor CDH db\_helper * Remove outdated judgment statement * Inefficient validation checks * Remove log translations * [APIv2] Rename hadoop\_version 7.0.0.0b1 --------- * Remove log translations * Adding labels support to Storm * Added support to Storm 1.1.0 * Remove log translations * [Trivial] Remove redundant call to str * Add sem-ver flag so pbr generates correct version * Upgrading Spark version to 2.1.0 * [storm] improve nimbus validation * \_get\_os\_distrib() can return 'redhat', add mapping * Updated from global requirements * [APIv2] Convert update methods to use PATCH * Use HostAddressOpt for opts that accept IP and hostnames * Apply monkeypatching from eventlet before the tests starts * install saharaclient from pypi if not from source * Fix some reST field lists in docstrings * Adds information about using bash to documentation * Deprecate CDH-5.5.0 * Code integration with the abstractions * Remove old oslo.messaging transport aliases * Add ability to install with Apache in devstack * Replaced uuid.uuid4 with uuidutils.generate\_uuid() * Support Job binary pluggability * Fix logging inside of devstack plugin * Add missing tests to ambari/configs.py * Updated from global requirements * Updated from global requirements * Support Data Source pluggability * Add missing tests to plugin ambari * Removing the cdh 5.0,5.3 and 5.4 * Add missing test to ambari client * cors: update default configuration * Indicating the location tests directory in oslo\_debug\_helper * [APIv2] Refactor job execute endpoint * Fixes python syntax error * Remove unused logging import * [APIv2] Further rename endpoint of jobs & job\_executions * Fix api-ref build * Adding missing tests to utils/test\_cluster.py * Update validation unit test for all Vanilla processes * Updated from global requirements * [Fix gate]Update test requirement * Backward slash is missing * Add missing tests to utils/proxy.py * Updated from global requirements * Add missing tests to test\_trusts.py * Respect Apache's trademark as per docs * Changed the spelling mistake * Fixing manila microversion setting in sahara.conf * Configure the publicURL instead of adminURL in devstack * Fixing Create hbase common lib shows warnings * Adding missing tests to ambari test\_client * Add missing test to api/middleware/auth\_valid.py * add test to plugins/ambari/client.py * Remove doc about config option verbose * Adding test\_validate() to storm plugin test * Updated from global requirements * [Doc] Update supported plugin description * Updated from global requirements * Improving tests for plugin utils * Add test\_get\_nodemanagers() * [APIv2] remove a method that places in wrong file * [APIv2] Migrate v1 unit test to test v2 API * Updated from global requirements * Add test\_get\_config\_value() * [Doc] Fix error in docs * Add test\_add\_host\_to\_cluster() * Remove support for py34 * Add test\_get\_port\_from\_address() * [Api-ref] fix description of response parameters * Add test\_move\_from\_local() * add test\_parse\_xml\_with\_name\_and\_value() * Prepare for using standard python tests * Fixing epel-release bug on 
MapR cluster installation * Update reno for stable/ocata * Replacement of project name in api-ref 6.0.0 ----- * Fix unexpected removing of deprecating flag for MapR 5.1 * Remove MapR v5.0.0 * Add Kafka to MapR plugin * Fix Maria-DB installation for centos7 * Add new service versions to MapR plugin * Extend cluster provision logging of MapR plugin 6.0.0.0b3 --------- * Updated from global requirements * Updated from global requirements * [APIv2] Update registry images tagging * Updated from global requirements * Change link to mysql-connector for Oozie in MapR plugin * Fix links in tests docs * API: Updating error response codes * Add HBASE MASTER processes number validation * Updated from global requirements * Fix some doc and comments nits * Updated from global requirements * Updated from global requirements * Add test\_natural\_sort\_key() * Remove unexpected files * Updated from global requirements * Add test\_update\_plugin() * Fixing test\_cluster\_create\_list\_update\_delete() * fix syntax errors in labels.py * Set access\_policy for messaging's dispatcher * Add reno for CDH 5.9 * support of CDH 5.9.0 * Removing "def" from the methods at edp.spi * support of HDP 2.5 * Updated from global requirements * Update "Additional Details for MapReduce jobs" docs * Judgment error * Fix typo error * Adding tenant\_id to regex\_search * Correct the unit test in V5\_5\_0 * Adding tenant\_id to regex\_search * modify useless assertions * Updated from global requirements * Fix typo in cover.sh * Updated from global requirements * fix some typos 6.0.0.0b2 --------- * Problem about permission * Switch use\_neutron=true by default * Use assertGreater(len(x), 0) instead of assertTrue(len(x) > 0) * Updated from global requirements * Replace logging with oslo\_log * replace 'assertFalse' with 'assertNotEqual' * [DOC] Beutify the chapter 'sahara on ironic' * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updating list of plugins in config sample * Fix error of CDH plugin scale up more than one node * Show team and repo badges on README * Updated from global requirements * spelling fixed * definition spelling mistake * fix creation of endpoints * Updated from global requirements * Fixing endpoint type for glance client * Fixed some typos. 
Trivial fixes * Updated from global requirements * Provide context for castellan config validation * totally changed requred to required 6.0.0.0b1 --------- * Fix import of common libraries from Manila client * Catch correct exception in check\_cinder\_exists fct * Remove enable\_notifications option * Updated from global requirements * Updated from global requirements * Replaces uuid.uuid4 with uuidutils.generate\_uuid() * Updated from global requirements * Updated from global requirements * Fix remove not existed devices * Updated from global requirements * Fix check cinder quotas * OpenStack typo * No doctext in some ProvisioningPluginBase methods * Updated from global requirements * Fix a typo in rootwrap.conf * Fix a typo in devstack.rst * [Trivial Fix]Fix typo in test\_images.py * Constraints are ready to be used for tox.ini * Use http\_proxy\_to\_wsgi middleware * Fix response code for invalid requests * Replace 'sudo pip install' with pip\_install * Improves anti-affinity behavior in sahara * Correct the spelling error * [api-ref] Fix missprints in response codes * Enable release notes translation * Updated from global requirements * Fix wrong URL to castellan’s documentation * Remove html\_static\_path from api-ref * Fix wrong message formats * Fix typo in comment * tenant replaced to project in doc * Updated from global requirements * Fixed some fonts issue in user doc, EDP section * Remove unused config.CONF * Updated from global requirements * Updated from global requirements * Updated from global requirements * Fix API compatibility issue * Updated from global requirements * Fix incorrect event log for ambari * [DOC] update doc about restapi * [DOC] update doc about sahara features * [doc] added description about plugin management * [DOC] Update quickstart guide * [DOC] update userdoc/edp.rst * Updated from global requirements * [DOC] update doc about mapr plugin * Add workaround for Hue on CentOS 7 * [DOC] update doc about config recommendations * [DOC] update configuration guide doc * Fix ZooKeeper check for CentOS 7 * Fill tempest.conf with Sahara-specific values * [DOC] update index and architecture docs * Updated Sahara architecture diagram * [DOC] Fix misprint in userdoc/statuses.rst * [DOC] update installation guide doc * [DOC] update doc about spark plugin * [DOC] update overview doc * [DOC] update doc about ambari plugin * [DOC] update upgrage guide * [DOC] update guest requirements doc * [DOC] Update Dashboard user guide * [DOC] Update dashboard dev environment guide * Update reno for stable/newton * Documentation fixes and updates for devref 5.0.0.0rc1 ---------- * [DOC] update doc about advanced configuration * Update link reference * [DOC] update doc about vanilla image builder * [DOC] update doc about vanilla plugin * do not use artifacts at sahara files * fix docs env * [doc] change location of swiftfs jars * [DOC] update doc about cluster statuses * [DOC] update doc about registering image * write docs about enabling kerberos * [DOC] update doc about CDH image builder * [DOC] update user doc about CDH plugin * [Doc] Small fixes according to Spark on Vanilla supporting * [Ambari] fix Ubuntu deploy * Remove entry point of sahara tempest plugin * Updated from global requirements * Remove Tempest-like tests for clients (see sahara-tests) * Deprecate MapR 5.1.0.mvr2 * Add repo configs * standardize release note page ordering * reimplement oozie client as abstract * allow configuration of strategy for UI * [DOC] Add docs about pagination abilities * Add MapR core 5.2 
* [api-ref] Stop supporting os-api-ref 1.0.0 * Add new version pack for services * Add event log for HDP plugin * Update api-ref docs for Designate feature * Add Sentry service v1.6 to MapR plugin * Add custom health check for MapR plugin * Rename all ClusterContext variables to 'cluster\_context' * Replace mfs.exchange with g.copy\_file where it is possible * [DOC] Update user doc about Designate * [DOC] Fix misprints in api-ref * Spark on Vanilla Clusters * Added rack awareness in CDH plugin * [Doc] add description of "plugin update" to api ref 5.0.0.0b3 --------- * Updated from global requirements * Remove support for Spark standalone * Remove ssl config for Hue * Refactor service home dir owner setting * [Ambari] More flexible auto configuration * Fix invalid security repo * Added rack awareness in HDP plugin * Updated from global requirements * use \_LE() to wrap the error message * Added option to disable sahara db for storing job binaries * Config logABug feature for Sahara api-ref * Remove unused config.CONF * improve logging for job execution failure * Updating DOC on floating IPs change * Updated from global requirements * Fix wait conditions with SSL deployments * Enabling MapR on CentOS7 * Updated from global requirements * Fix wrong instance count in provision events * [doc] Fix some problems in docs * delete unused LOG in some files * TrivialFix: Remove logging import usused * Fix mapr cluster deployment * Remove MAPR\_USER variable * Delete useless 'pass' * Updated from global requirements * replace assertListEqual() to assertEqual() * Updated from global requirements * Error handling during hosts file generation * Replace 'lsb\_release -is' with the method 'get\_os\_distrib' * Add auto configs to HDP plugin * Correct reraising of exception * Fix wrong epel version for CentOS 7 * Clean imports in code * Adding release note to floating ips change * Updated from global requirements * Remove hardcoded password from db schema * Get ready for os-api-ref sphinx theme change * Replace old CLI calls in grenade * Updated from global requirements * Add Kafka to CDH 5.5 and CDH 5.7 * Updated from global requirements * plugins:patch is now admin only operation * Fix small bugs in pagination * Fix wrong hue-livy process name and move installation * Fix wrong owner setting for config files * copying oozie.warden to prevent failure * Updated from global requirements * Image argument validation and declaration * [ambari] support kerberos deployment * [cdh] kerberos support implementation * kerberos infra deployment impl * Fixed the error with updating the job via command line * Add sorting ability to Sahara-API * Health check for Designate * Fix configs for repos and swift urls in CDH 5.7 * Added documentation for Designate feature * Documentation for image gen CLI and framework * Updated from global requirements * Updated from global requirements * Updated from global requirements * Designate integration * Updated from global requirements * Correct reraising of exception * Updated from global requirements * Updated from global requirements * Updated from global requirements * labels for CDH plugin * Changing zookeeper path while updating conf * labels for MapR plugin * Remove hardcoded password for Oozie service * Refactor the logic around use of floating ips * Adding argument-related validators for image configurability * Configuration engine for image generation CLI * Use assertEqual() instead of assertDictEqual() * improve error message for execution with retries * remove infrastructure 
engine option * Add pagination ability to Sahara-API * [DOC] Added docs for sahara+ironic * [DOC] Inform operators about limited quotas * delete two unused LOG * Updated from global requirements * Remove unused LOG * Updated from global requirements * Fixing unit tests for image create * improved scaling for cdh plugin * Adding Pyleus configs to Storm plugin * Add Python 3.5 classifier and venv * Docs should use "--plugin-version" instead of "--version" * CLI for Plugin-Declared Image Declaration * make ability to return real plugins in list ops * Failed to download ext-2.2.zip from dev.sencha.com * Adding Python Jobs using Pyleus * Simplify tox hacking rule to match other projects * [DOC] Cleanup time for incomplete clusters * improvements on api for plugins 5.0.0.0b2 --------- * Resolves issue where to allow custom repo URLS * Updated from global requirements * Updated from global requirements * don't serialize auto security group if not needed * Fix typo in ambari\_plugin.rst * replace import of future to the top * fix building api ref docs * The addition of the parentheses for py3 * [DOC] Update installation guide * use sessions for creating heatclient * Fixed spelling error * forbid cluster creation without secondarynamenode * Fix subdirectory typo in sahara db template Readme file * Updated from global requirements * Upgrade Storm plugin to version 1.0.1 * Updated from global requirements * Add Impala 2.2 to MapR plugin * Support of CDH 5.7 * fixing sahara-engine setup in devstack * Fix typo in configs\_recommendations.rst * Remove outdated tools * [DOC] improve docs * Fix typo in cdh\_plugin.rst * Fix glanceclient.v2.images * Remove unecessary decorators from private interface * Ignore Nova config drive in devices list * plugins api impl * sleep before waiting requests * allow to specify notifications transport url * ability to configure endpoint type for services * Updated from global requirements * novaclient.v2.images to glanceclient migration * Updated from global requirements * Update documentation for hadoop-swift * Updated from global requirements * Updated from global requirements * [DOC] updated docs about keystone cli * Trivial: Fix wrong button name in dashboard user guide * Updated from global requirements * implement db ops for plugin's api * replace seriailization of plugin to PluginManager * Moving WADL docs to Sahara repository * Remove convert to cluster template feature * Trivial: Remove useless words in CDN image builder doc * Updated from global requirements * remove ability to create barbicanclient * Fix the ca certificate handling in the client sessions * fix grenade from mitaka upgrade * remove config groups associated with removed hosts * Updated from global requirements * workaround to fix ambari start on centos7 * Updated from global requirements * Fix provision events for installing services * New version of HDP plugin 2.4 * Display credentials info in cluster general info * Updated from global requirements * Improve timeout message when cluster create fails * Updated from global requirements * Modify HDP plugin doc for Ambari plugin 5.0.0.0b1 --------- * Fix retrieve auth\_url and python 3 jobs * Readable logging for Heat templates * Use split\_path from oslo.utils * Added "\" In quickstart guide * Corrects MapR distro selection for RHEL * Fix cluster creation with another tenant * Updated from global requirements * Added unit tests for CDH 5.5.0 deploy file * Updated from global requirements * [Trivial] Remove unnecessary executable privilege * Updated 
from global requirements * Code refactoring of ambari deploy processes * Fix down scaling of ambari cluster * HDP hive HDFS support * improve description of ambari plugin * Remove hdp 2.0.6 plugin * Updated from global requirements * Fix grenade * Updated from global requirements * Minimise number of auto security groups * remove verbose option in devstack plugin * use the only method to initialize client * Updated from global requirements * Resolve bug with long value to RANDSTR function * Change 'Hbase' to 'HBase' string in spark service * Updated from global requirements * Remove openstack/common related stuff * Added unit tests for ha\_helper file * Updated from global requirements * Updated from global requirements * Fix typo in Spark service * Renamed job execution and templates endpoints * Fix doc about scenario and Tempest tests * keystoneclient to keystoneauth migration * Helper method to use dnf instead of yum on fedora >=22 * PrettyTable and rfc3986 are no longer used in tests * Update the links to the RDO project * Focus the documentation on distributed mode * Updated from global requirements * cdh plugin yum install option "-y" missing * update options mentioned in tempest readme * Update hadoop swift docs * Updated from global requirements * Fix doc build if git is absent * Added new unittest to oozie module * Updated from global requirements * SPI Method to Validate Images * Added tests for sahara cli * Fix unavailable MCS link * Define context.roles with base class * Update the Administrator Guide link * Updated from global requirements * Updated from global requirements * Change property for auto creating schema * Remove unsupported services from 5.1.0 * Updated from global requirements * Updated from global requirements * Bandit password tests * Workaround for temporary Oozie bug * Fixing the bandit config * Pkg installation to ssh\_remote * fix syntax error in ui dev docs 4.0.0 ----- * Set libext path for Oozie 4.0.1, 4.1.0 * rename service api modules * Fixing grenade job * Add hadoop openstack swift jar to ambari cluster * Fix Hue integration with Spark and Hive * Move bandit to pep8 * Revert "Remove PyMySQL and psycopg2 from test-requirements.txt" * Do not build config example for readthedocs.org * Remove PyMySQL and psycopg2 from test-requirements.txt * Correctly configure Spark with Hive, HBase * Set libext path for Oozie 4.0.1, 4.1.0 * Add hive property for Hue < 0.9.0 * Updated Sahara arch diagram * Fix incorrect visualization of MapR versions * Updated volumes section in docs * Update reno for stable/mitaka * Update .gitreview for stable/mitaka 4.0.0.0rc1 ---------- * Updated UI docs * Fix staled configs for ha deployments * Use auth admin for get\_router when building proxy commands * Don't use precreated ports in heat templates * get\_admin\_context overwriting context * Inject drivers to jars in Ambari Spark engine * Deprecate HDP 2.0.6 plugin * Fix updating datasource without changing a name * register the config generator default hook with the right name * Fix a mess in config helpers * rewrite wait condition script * Run cluster verification after cluster / creation scaling * Fix HA for Resourcemanager * Add an extra copy of neutron info after run\_job * Remove cinder v1 api support * Updated from global requirements * Updating quickstart guide with openstackclient usage * Fix MapR 500 tempest test fails * Moved CORS middleware configuration into oslo-config-generator * Add MapR 5.1.0 * Fix blueprints configuration for HA 4.0.0.0b3 --------- * Do not use 
explict keyword arguments in image resource * Improve exception message for wait\_ambari\_requests * Added #nosec to sahara.service.coordinator package * Added #nosec to sahara.utils.hacking package * add nosec to subprocess usage in launch\_command * add nosec to remote ssh pickle usages * Refine the code for CDH PluginUtils class * Remove UI configuring for Oozie * Updated from global requirements * HA for NameNode and ResourceManager in HDP 2.2 * move heat template version to common module * No longer necessary to specify jackson-core-asl in spark classpath * Improve config description in CDH config\_helper * Remove unneeded version string check in CDH plugin * Remove unused pngmath Sphinx extension * Add Flume 1.6.0 to MapR plugin * Remove vanilla 2.6.0 in doc * Remove unsupported MapR plugin versions * Updating get\_auth\_token to use keystonemiddleware * remove hdp from the default plugin list * enable ambari plugin by default * Updating dashboard user guide post-reorg * Use the integrated tempest.lib module * Update CDH user doc for CDH 5.5.0 * Add CDH 5.5 support * CDH plugin edp engine code refactoring * CDH plugin config helper refactoring * Updated from global requirements * Use ostestr instead of the custom pretty\_tox.sh * split cloudera health checks * ambari health check implementation * Making health verification periodics distributed * Fixed typo of precendence to precedence * Fix typo in api\_validator.py * Updated from global requirements * Added #nosec for bandit check * Missing ignore\_prot\_on\_def flag * Updated from global requirements * Remove support for spark 1.0.0 * [EDP] Add suspend\_job() for sahara edp engine(oozie implementation) * Updated from global requirements * Remove vanilla 2.6.0 code * Add Spark 1.5.2 to MapR plugin * Fix in wrong substitution * Adding data source update validation * Adding more information to validation errors * Revert "Fix gate pep8" * Add default templates for spark plugin, version 1.6.0 * Updated from global requirements * Add Hue 3.9.0 to MapR plugin * Add property 'MapR-FS heap size percent' to cluster template * implement sending health notifications * cloudera health checks implementation * Added scaling support for HDP 2.2 / 2.3 * base cluster verifications implementation * Check that main-class value is not null in job execution validator * Fixes to make bandit integration tests work with sahara * honor api\_insecure parameters * Replace assertNotEqual(None,) with assertIsNotNone * Start RPC service before waiting * Add support running Sahara as wsgi app * Add test cases for CDH plugin config\_helper * CDH plugin versionhandler refactoring * Add test cases for versionhandler * Remove support of HDP 2.2 * Use the oslo.utils.reflection to extract class name * Don't use Mock.called\_once\_with that does not exist * Add regex matching for job\_executions\_list() * Add regex matching for job\_binary\_internal\_list() * Python3: Fix using dictionary keys() * Await start datanodes in Spark plugin * Updated from global requirements * Add regex matching for job\_list() * Add regex matching for job\_binary\_list() * Add regex matching for node\_group\_templates\_list() * Add regex matching for clusters\_list() * Add regex matching for data\_sources\_list() * Add regex matching for cluster\_templates\_list() * add initial v2 api * add orphan to configs recommendations * add vanilla image builder docs to index * Enabling distributed periodics in devstack * Adding doc about distributed periodics * Fix gate pep8 * Added support of Spark 
1.6.0 * Distributed periodic tasks implementation * Parse properties with custom key/value separator * Updated from global requirements * Revert "Enable sahara-dashboard devstack plugin in sahara plugin" * Update bandit version * Update the devstack.rst document * Enabling cluster termination via OPS in periodics * use uppercase 'S' in word "OpenStack" * Fix spell typos * Add creation of mapr user * Fix missing configuration for mapreduce * Fix problem with zombie processes in engine * Add Hive 1.2 to MapR plugin * Add Oozie 4.2.0 to MapR plugin * Add Pig 0.15 to MapR plugin * Add Drill 1.4 to MapR plugin * Add ability for setting file mode * CDH plugin validation mudule refactoring * Add CDH plugin validation test cases * Add install priority to each service * Remove redundant tabs when add MapR repos * Remove outdated pot files * Add unit test cases for cdh plugin utils * Move notifications options into oslo\_messaging\_notifications * Updated from global requirements * Allow 'is\_public' to be set on protected resources * Add 'is\_protected' field to all default templates * Change 'ignore\_default' to 'ignore\_prot\_on\_def' * Remove overlap of 'is\_default' and 'is\_protected' for templates * correct spelling mistake * Update the link to sahara.py * Updated from global requirements * Add release notes for external key manager usage * Fix anti-affinity handling in heat engine * Where filter is not done correctly on programmatic selection * Remove scenario tests and related files * Use internal auth url to communicate with swift * Updated from global requirements * notification\_driver from group DEFAULT is deprecated 4.0.0.0b2 --------- * Migrate to new repository in gate checks * Fix python 2,3 compatibility issue with six * Fixing kwarg name for centos repository * Updated from global requirements * Fix using regions in all OS clients * Add release notes for scheduling EDP jobs * remove openstack-common.conf * Updated from global requirements * Enable sahara-dashboard devstack plugin in sahara plugin * Add a common Hive and Pig config in workflow\_factory * add cdh plugin passwords to key manager * add debug testenv in tox * add developer documentation about the key manager * Updated from global requirements * add helper functions for key manager * Setting auth\_url for token auth plugin object * Replace deprecated library function os.popen() with subprocess * Enable passwordless ssh beetween vanilla nodes * Added Keystone and RequestID headers to CORS middleware * Removed redundant list declaration * Updated from global requirements * Change assertTrue(isinstance()) by optimal assert * Fix wrong file path in scenario test README.rst * Updated from global requirements * Use run\_as\_root instead of sudo to execute\_command * Ensure default arguments are not mutable * Compare node groups in CDH plugin IMPALA validation * Add translation for log messages * Fixing cinder check with is\_proxy\_gateway * Update HA scenario for CDH * Use cfg.PortOpt for port option * Clean the code in vanilla's utils * [EDP] Add scheduling EDP jobs in sahara(oozie engine implementation) * Adding doc about data source placeholders * Remove she-bang from sahara CLI modules * Stop using unicode builtin * Initial key manager implementation * Move c\_helper, db\_helper into \_\_init\_\_ for CDH plugin\_utils * Updated from global requirements * Added check for images tags * Updated from global requirements * Replace assertEqual(None, \*) with assertIsNone in tests * Updates DevStack git repo link in Sahara Dev 
* Implement custom check for Kafka Service
* Don't configure hadoop.tmp.dir in Spark plugin
* Updated from global requirements
* Deprecated tox -downloadcache option removed
* Updated from global requirements
* Scenario templates: make is\_proxy\_gateway configurable
* Added several parametrs to priority-one-confs file
* Add CDH plugin edp engine unit tests
* Add missing i18n module into CDH plugin edp\_engine
* Add ability to get auth token from auth plugin
* Trust usage improvements in sahara
* Replacing all hard coded cluster status using cluster\_utils
* Always enable heat service in devstack plugin
* Remove unused code from volumes module
* Updated from global requirements
* Now updating cluster templates on update
* Add log when directly return from cancel\_job
* Updated from global requirements
* remove the qpid message driver from the configuration file
* Adds nosec to system call in NetcatSocket.\_terminate
* rewrite heat client calls
* Remove MANIFEST.in
* Updated from global requirements
* refine the development environment document
* test: make enforce\_type=True in CONF.set\_override
* Explicitly calling start for sahara-api in sahara-all
* Adding ability disable anti\_affinty check in plugin
* Remove version from setup.cfg
* Force releasenotes warnings to be treated as errors

4.0.0.0b1
---------

* Override verify argument of generic session
* Add missed checks for testing update method
* Updated from global requirements
* Optimize "open" method with context manager
* Launching 1 instance in grenade instead of 2
* Updated from global requirements
* Fix bashate warnings
* Support of Spark EDP in Ambari plugin
* Check cluster if it is None before run job
* Enable heat\_enable\_wait\_condition by default
* Update scenario test readme file
* Add more useful information to the Heat stack description
* Replacing hard coded cluster status using cluster\_utils
* cleanup sahara commands
* Support unmounting shares on cluster update
* Updated from global requirements
* Mounting changed shares on cluster update
* Remove unneeded 'self' in plugins.cdh.v5\_4\_0.plugin\_utils
* Drop direct engine support
* Remove old integration tests for sahara codebase
* Option for disabling wait condition feature
* Remove unneeded volume serialization
* Updated from global requirements
* Doc fix: use\_floating\_ip to use\_floating\_ips
* change port option from Opt to IntOpt
* implement is\_sahara\_enabled
* Add test cases for CDH plugin versionfactory
* Adding tests for checking updating of templates
* Updated from global requirements
* Add "unreleased" release notes page
* Support reno for release notes management
* Update Sahara Dev Quickstart Guide
* Updated from global requirements
* Fix doc8 check failures
* Rename get\_job\_status to get\_job\_info in oozie.py
* Updated from global requirements
* Run py34 first in default tox run
* Updated from global requirements
* Use oslo.service for launching sahara
* Disable base repos by the option
* Publish sample conf to docs
* refine the sahara installation guide
* Move doc8 dependency to test-requirements.txt
* Fix E005 bashate error
* Plugin version error in scenario test for vanilla2.6.0
* Add unit test to cover cancel job operation in oozie engine
* Make ssh timeout configurable
* Missing stuff for Kafka in Ambari plugin
* Add default templates for MapR plugin 5.0.0 mrv1 & mrv2
* Support overriding of driver classpath in Spark jobs
* Add ability validate yaml files before run tests
* Remove TODO line while bug 1413602 is fixed
* Add CDH test enabling HDFS HA
* Add CDH 5.4.0 contents in doc
* Allowing shares to be edited on cluster update
* Remove TODO in the feature.rst
* Fix a couple typo in EDP doc
* Refine the overview.rst for sahara
* Fix Spark installation fails when parsing spark-env.sh
* Disable security for Oozie in Ambari
* Remove verbose code for hive metastore schema creation in MapR plugin
* Providing more information about fail job
* Refine the doc for sahara
* Fix magic method name in plugin.cdh.clent.type
* Add additional filter to volume\_type check
* Remove known issue from the doc
* Use assertTrue/False instead of assertEqual(T/F)
* Updated from global requirements
* Fixing job execution creation with is\_protected field
* Fixing cluster creation with is\_protected field
* Adding ability to register image without description
* Get Open Ports for Storm
* Simplify the method \_count\_instances\_to\_attach
* Add source url into README.rst
* Updated from global requirements
* Fix Mapr on ci
* Fixing problem with validation of job binaries update
* Fixing search of devices that need to be mount
* Cleanup config-generator.sahara.conf
* Updated from global requirements
* Switched CORS configuration to use oslo\_config
* Add batching for EDP jobs in scenario tests
* Fixing event log handling during volumes mount
* Fixing grenade job
* Add support of Drill 1.2 to MapR plugin
* Add -f option to formatting volumes for xfs
* Hive job type support on CI
* Add unit tests for AuthValidator middleware
* Remove old sahara endpoint
* Updated from global requirements
* Add -f option to formatting volumes
* Fix issue with job types in Ambari plugin
* Fix tempest tests
* Modify service-role view in creating node group template
* Updated from global requirements
* Add ability running tests on existing cluster
* Reformat job flows
* Bringing the Sahara Bandit config current
* Add testresources used by oslo.db fixture
* Use api-paste.ini for loading middleware
* Add event logs for MapR plugin
* code cleanup
* Fixing grenade job for upgrades from liberty
* Fix typos in developer documentation
* Updated from global requirements
* Fixing cluster creation without auto\_security\_group
* Use distributed mode by default in devstack
* Updated from global requirements
* Adding ability run several edp jobs flows
* Updated from global requirements
* Add /usr/share/sahara/rootwrap to filters\_path
* Fixing grenade\_job
* replace multiple if stmts with dict and for loop
* Fix the bug of "Error spelling of a word"
* Fix the bug of "Error spelling of 'occured'"
* Removed redundant metaclass declarations in MapR plugin
* Fix of client tests in tempest
* Added support for Spark 1.3.1 in MapR plugin
* use list comprehensions
* Cleanup databases during execution of hive example
* Open Mitaka development

3.0.0
-----

* Use explicit version of image client in gates
* Use xfs for formatting
* Configurable timeouts for disk preparing
* Generate random heat stack name for cluster
* Resolve issue with operating heat stack outputs
* Updating vanilla imagebuilder docs
* Add more information about configuring NTP service
* Fix problem with loading Ambari configs
* Update indexes after adding security repo in MapR plugin
* Add put data in HDFS for EDP testcase
* Add wait condition for heat templates
* Updated from global requirements
* Change ignore-errors to ignore\_errors
* Fix wrong init of ThreadGrop
* Fix missed service message in MapR plugin
* Heat stack creation with tags
* Enable ceilometer services using new plugin model
* Add spaces around function
params for browser to linewrap on * Convert manila api version to string * [doc-day] Updated development guidelines * Adapt python client tests to use Tempest plugin interface * Python client tests: access to credentials/net clients * Formatting and mounting methods changed for ironic * HDP plugin should ignore untagged configs when creating cluster\_spec * Fixed service restart in MapR plugin * Adding check of indirect access * Fix problem with create cluster w/o internet * Improving node group templates validation * Fix incorrect function name in swift client * Adding fake plugin usage in validation ut * Update hdp plugin docs * Use get\_resource instead of Ref defenition * Create ResourceGroup with volumes only if it is required * Selects IPv4 preferentially for internal\_ip * Fix working scenario tests with swiftclient * Improving cluster templates validation * Report stack\_status\_reason on heat failure * Change nova client util to use proper client * New doc about autoconfiguration policy * Increasing time for cluster creation/deletion in grenade * cleanup spark plugin documentation * Remove mountpoint from heat stack because it always null * Fix capitalization on sahara * Updating Ubuntu Server version in devstack doc * Fixed RM HA in MapR plugin 5.0.0 MRv2 * Only add current directory to classpath for client deploy mode * Include YARN 2.7.0 to service install priority list in MapR plugin * Updated from global requirements * Documenting interface map * Set the flavor to large for the cdh 5.4.0 name node in template.conf * Use custom flavor in gate * Add SPARK\_YARN\_HISTORY\_SERVER to default templates for cdh * Register SSL cert in Java keystore to access to swift via SSL * Adding doc about shared and protected resources * Convert True to string for image registry * Fixed Hive 1.0 failure on MapR plugin * [sahara doc fix] log guidelines doc * Removed duplicated definition of support Impala in MapR plugin * Add keystone and swift url to /etc/hosts * Update plugin spi docs with new method * [sahara doc fix] guest requirements doc * [sahara doc fix] registering image doc * Enable anti\_affinity feature in scenario test * Fix mocks in scenario\_unit tests * [CDH] Fix problem with launching Spark jobs * Updating architecture doc * [sahara doc fix] update the statuses.rst in userdoc * Drop HDP 1.3.2 plugin * Drop Vanilla Hadoop 1 * Adds IPv6 support to auto security group * Updating overview document * Updating the userdoc configuration * Correcting userdoc installation guide * Minor updates to edp documentation * Modify recommend\_configs arguments in vanilla 1 * Updated from global requirements 3.0.0.0b3 --------- * Minor updates and fixes to features doc * updating index doc * updating plugins doc * Added CORS middleware to Sahara * Documentation for Manila integration * Updating userdoc overview * Add missing ssl\_verify for swift in scenario tests * [doc-day] Updated development environment guide * Updating the dashboard guide for Sahara * Updating the rest api documentation * Updating the dev environment guide for the Sahara UI * Update documentation for Vanilla plugin * Add port type on port option * Updated from global requirements * Print Heat stack before create/update to debug logs * Remove useless test dependency 'discover' * Use internalURL endpoint by default in Sahara * Use demo user and tenant for testing * Explicitly set infra engine based on job type * Use less resources in sceanrio gate job * Removed installation of Oozie sharelibs in MapR plugin * Fix problem with 
using auto security groups in Heat * adding developer docs guidelines about clients * Added HBase REST node process to MapR plugin * Disable autotune configs for scaling old clusters * Add sample spark wordcount job * Deprecate Vanilla 2.6.0 * Add additional HDP services * Add EDP services to new HDP plugin * Add base services support for HDP 2.2 / 2.3 * Adding support for the Spark Shell job * project\_name is changed to optional parameter * Change version package imports to correct in MapR plugin * Added support of Hue 3.8.1 to MapR plugin * Job execution cancel timeout * Rename oozie\_job\_id * adding neutron to sessions module * adding cinder to sessions module * Removing token information from debug log * Fix bash condition for enabling heat engine in devstack * Updated from global requirements * Enable YARN ResourceManager HA in CDH plugin * Changing scenario runner to use subprocess * Add CDH HDFS HA part in the user doc * Updated from global requirements * Fail if FAILED in the scenario tests run log * Actually install Heat for the Hest-based jobs * Ensure working dir is on driver class path for Spark/Swift * Add validation rules about IMPALAD * Remove unneeded starting ntp * Expose cloudera manager information * Updated from global requirements * adding nova to session cache * Adding Grenade support for Sahara * Update plugin version for transient tests to vanilla 2.7.1 * Updated from global requirements * New version of HDP plugin * Adding shared and protected resources support * Adding is\_public and is\_protected fields support * Use "get\_instances" method from sahara.plugins.utils * Doc, scenario tests: variables config file * Adding clusters\_update api call * Implement ability of creating flavor for scenario tests * Add support of SSL in scenario tests * Remove libevent installation from MapR plugin * Updated from global requirements * Add manila nfs data sources * Added support for MapR v5.0.0 * Run scenario tests for the fake plugin in gate * Add separated dir with fake plugin scenario for gate testing * Set missed admin user parameters used for trusts creation in devstack * Make tools/pretty\_tox.sh more informative and reliable * Make infra engine configurable in devstack plugin * Added support of Hadoop 2.7.0 to MapR plugin * Remove never executable code from devstack plugin * Scenario tests: store ssh key if resources are retained * doc, sahara-templates: fix typo * Add scenario gate testing placeholders * Adding job\_update api call * Adding job\_execution\_update api call * Adding sessions module and keystone client upgrade * Adding job\_binary\_internal\_update api call * Fix HBase config name when using HA with HDP 2.0.6 * Removed confusing typos in utils/openstack/base.py file * Remove README in sahara/locale * Update stackforge to openstack * Updated from global requirements * Fix wrong compute nodes name in doc * Adding HTTP PATCH method to modify existing resources * Allow Sahara native urls and runtime urls to differ for datasources * Support manila shares as binary store * Add script to report uncovered new lines * Increase coverage report precision * Add recommendation support to Cloudera plugin * Support placeholders in args of job for i/o * add unit test for test\_hdfs\_helper * Updated from global requirements * Update vanilla plugin to the latest version * Remove quotes from subshell call in install\_scala.sh * Fixed WebServer validation in MapR plugin * Update cluster UI info in MapR plugin * Prevent writing security repos twice in MapR plugin * Check 
ACLs before adding access for Manila share * Make starting scripts module for vanilla 2 plugin * Small refactoring for vanilla 2 * Fix MapR plugin versions loading * Put missing fields to validation schema * Remove test for job type in get\_data\_sources * add unit test cover oozie upload workflow file function * Updated from global requirements * Remove spaces from Sahara key comment * Increase internal\_ip and management\_ip column size * Drop support of deprecated 2.4.1 Vanilla plugin * Added support of Drill 1.1 to MapR plugin * Added support of HBase 0.98.12 to MapR plugin * Added support of Mahout 0.10 to MapR plugin * Added support of Hive 1.0 to MapR plugin * Add CLUSTER\_STATUS * Remove cluster status change in HDP plugin * Removed support of Hive 0.12 and Impala 1.2.3 from MapR plugin * Changed misleading function name in Heat engine * Mount share API * EDP Spark jobs work with Swift * Fix six typos on sahara documentation 3.0.0.0b2 --------- * Configure NTP service on cluster instances * Updated from global requirements * Changing log level inside execute\_with\_retries method * Updated from global requirements * Remove extra merge methods in plugins * Add configs unit test case * Change zk\_instance to zk\_instances in storm plugin * Add recommendation support for Spark plugin * Migrate to flavor field in spark 1.3.1 * Cleanup .gitignore * Ignore .eggs directory in git * Use keystone service catalog for getting auth urls * Storm job type not found * Implement recommendations for vanilla 2.6.0 * Add missing mako template for Spark 1.3.1 * Add unit test for external hdfs missed for URLs * Migrate "flavor\_id" to "flavor" in scenario tests * Remove openstack.common package * updating documentation on devstack usage * Added the ability to specify the name of the flavor\_id * [EDP]upgrade oozie Web Service API version of oozie engine * Enable HDFS HA in Cloudera plugin * Made 'files' dict as member field of ClusterStack * Changed all stacks retrieval with filtered search * Removed useless ClusterStack class from heat engine * Removed useless 'Launcher' class from heat engine * Cluster creation with trust * Add default templates for Spark 1.3.1 * Add Zookeeper and Sentry in CDH540 scenario tests * Fix README.rst in scenario dir * Fix installing python-saharaclient * Deprecate Spark 1.0.0 * Switch to the oslo\_utils.fileutils * Added failed thread group stacktrace to logs * Updated from global requirements * [CDH] Provide ability to configure gateway configs * Remove the old scenario YAML files * Derive Mako scenario templates from the current YAMLs * Improvement check scale in scenario tests * Allow multiple clusters creation * Modify launch\_command to support global variables * Allow Mako templates as input for scenario test runner * Updated from global requirements * Allowing job binary objects to be updated * Resolve 500 error during simultaneously deletion * Fix retrieve\_auth\_url in case Keystone URL does not contain port * Spark job for Cloudera 5.3.0 and 5.4.0 added * Fix problem with using volumes for HDFS data in vanilla plugin * Fix failed unit tests * Added support of Drill 0.9 to MapR plugin * Added support of Drill 0.8 to MapR plugin * Added support of HBase 0.98.9 to MapR plugin * Added support of Hue 3.7.0 to MapR plugin * [EDP] Delete edp job if raise exception * add unit test covering cancel\_job in job\_manager * Remove un-used "completed" filed when do cluster\_provision\_step\_add * Allow to specify auto\_security\_group in default templates * Add check for 
cinder in scenario tests * [HDP] Nameservice awareness for NNHA case * Return back devstack exercise to in-tree plugin * Fix devstack plugin - sahara repo already cloned * Add py34 to envlist * Remove bin/ scripts support from in-tree devstack plugin * Enable all plugins in devstack code * Add CM API support for enable hdfs HA * Add bashate check for devstack scripts * Updated from global requirements * Add in-tree Devstack plugin * Support Spark 1.3.1 * Updated from global requirements * Minor - move definition to avoid AttributeError * [EDP] Unified Map to Define Job Interface * Enable Java Keystore KMS service in CDH5.4 * [EDP][Oozie] external hdfs missed for URLs in job\_configs * Fix compatible issues in unit tests for python 3 * Use right oslo.service entry points * Updated from global requirements * pass environment variables of proxy to tox * Switch to oslo.service 3.0.0b1 ------- * Updated from global requirements * Add sentry check for CDH 5.3 * Allowing data souce objects to be updated * Updated from global requirements * Add method for geting instances with process * Add CDH5.4 support in sahara * Add support of custom scenario to scenario tests * Added method for connect to node and run command * Update version for Liberty 3.0.0a0 ------- * Removed dependency on Spark plugin in edp code * Removed unused filtering in get\_plugins * Refactor exception is Sahara * Removed HashableDict * Updated from global requirements * Spark doc references vanilla diskimagebuilder page * Also install alembic\_migration folder * Remove duplicate 'an' and 'the' in docs * Add policy namespace change to the upgrade guide * Transform configuration values into int or float when needed * Updated from global requirements * Add cinder volumes to mapr scenario template * Modifying Swift Paths for EDP Examples * Updated from global requirements * Fix problem with removing PID from list * Remove deprecated group name of option * Remove WritableLogger wrapper * Updated from global requirements * [CDH] Load missed configs from yarn-gateway.json * Switched from all stacks polling to filtered list * Disable neutron DEBUG logs * Fixed typo in the Oozie CL documentation * Move cluster deletion to common engine module * Fix Typo Error for "Cloudera" * Don't use reduce for python3 compatibility * Making policy namespaces more unique * Switched Heat engine to ResourceGroup use * Add "null" for fields in cluster and node group template JSON schemas * Minor - Fixed wrong log formatting * Hiding volumes from cluster output * Minor improvement of validation rules * [CDH] Add validation check about dfs\_replication * Update list of supported API versions * Fix issue with configuring HDP cluster * Add updating jobs statuses before cluster deletion * Added missing retries of clients calls * Adding retry ability to cinderclient calls * Fix typo in Sahara doc * Added checking of event-log in scenario tests * Print traceback in logs for cluster operations * Update the docs about how to build images for Sahara usage * Fix logging\_context\_format\_string input for sahara * Added validation of template names in scenario tests * Adding retry ability to heatclient calls * Enabling Swift client retries * Adding retry ability to keystoneclient calls * Adding retry ability to novaclient calls * Adding retry ability to neutronclient calls * Remove the custom duplicate check on cluster template update * Fix cluster templates update * Fix MapR Oozie dependency resolution * Fix usage volume type in Heat * Adding ability to retry 
clients calls * Remove resetting self.flavor\_id in CDH test * Remove custom duplication check for node group template update * Fixed bug with volume type validation * Improve unit tests of general utils * Implementation of Storm scaling * Adding yaml scenario file for Mapr 4.0.2 plugin * Added support of Oozie 4.1.0 to MapR plugin * Use PyMySQL as MySQL DB driver for unit tests * Extra tests for quotas * Deprecate the Direct Engine * Updated from global requirements * Improve compatible with python3 * [HDP] java64\_home not pointing at default-installed JDK for plugin * Fixed logging issues with missing ids * Add support of Mapr FS to scenario tests * Updated from global requirements * Use keystone session in new integration tests * Fix logging\_context\_format\_string input for sahara * Implemented support of placeholders in datasource URLs * Drop use of 'oslo' namespace package * Added support of Pig 0.14 to MapR plugin * Remove sqlalchemy-migrate from test-requirements * Session usage improved in sqlalchemy api * Increase edp module test coverage * Added unit tests for service/api module * Improved unit test coverage of poll\_utils * Updated from global requirements * Fix delete volume and improved conductor coverage * Improved coverage for workflow\_creator * Test coverage improvement for cluster\_progress\_ops * Test coverage improvement for sahara.service.networks * Make configurable timeouts in scenario tests * Storm EDP implementation * Fix InvalidRequestError being skipped * Remove unused code from sqlalchemy api module * Add unit tests for exceptions module * Improved unit test coverage of periodic module * Fix management IPs usage * Test coverage improvement for sahara.service.engine * Improve unit test for HashableDict * Improved test coverage for utils/resources * Change ext-2.2.zip url * Adding basic bandit config * Cleanup sqla custom types * Finally drop XML REST API related code * Improve unit test for utils/edp.py * Event log supported in new integration tests * Use ThreadGroup instead of separate threads * made a change to upgrade guide * Use correct config\_helper in Vanilla 2.6 * Add sahara\_service\_type support for auth for sahara * Updated from global requirements 2015.1.0 -------- * Add links to the public place with prepared images * Add links to the public place with prepared images * Fixing log messages to avoid information duplication * Adding cluster, instance, job\_execution ids to logs * Support x-openstack-request-id * Removing unused methods from utils.openstack.\* * Release Import of Translations from Transifex * [CDH] swift lib support * Fix slow unit test * Adding .to\_wrapped\_dict to node\_group\_template update * update .gitreview for stable/kilo * Remove duplicated codes in CDH plugin * Add scenario yaml file for fake plugin * Add handler for configuration w/o sec groups * Updated from global requirements * Minor refactor of the integration service test * Added check of scaling for Spark plugin * Install Oozie UI on MapR clusters * Adding config hints for CDH plugin * Add a brief description of the default template mechanism * Put in Sahara repo actual scenario files * Use jsonutils from oslo.serialization * Add CDH template for the scenario integration test * Restrict cluster to have at most one secondary namenode * Adding config hints for vanilla plugin * Adding config hints for HDP plugin * Add hacking checks related to logging guideliness * Date format set to be correct utc date * Fix strange check in code * Rename templates in scenario yaml 
files 2015.1.0rc1 ----------- * Updating edp json examples * Updating the developer quickstart guide * Updating sahara-ci readme * Updating edp-java readme * Updating wordcount readme * Updates to the EDP doc * Updating installation guide * Updating features documentation * Add Sahara log guideliness * Updated from global requirements * Adding documentation for guide pages in horizon * Fix libevent and epel install on MapR * Update EDP doc * How to build Oozie docs updated * Update Cloudera plugin docs * Update statuses docs * Update vanilla plugin doc * Update jenkins doc * Open Liberty development * Update Sahara 'How to Participate' doc * Update overview.rst * Update Plugin SPI doc * Update doc for adding database migrations * Add docs for event log usage * Implement cluster creation with 'quotas = unlimited' * Update testing page in developer docs * Update development.environment.rst * Update launchpad.rst * Updating advanced configuration guide * Updating EDP SPI doc * Replace current API docs with new Sahara API docs * Migrate to oslo.policy lib instead of copy-pasted oslo-incubator * Validate node groups without volumes * Updating upgrade guide documentation * Updating EDP doc * Updating configuration guide documentation * Fix mailing list in feature requests info * Add unit-tests for new integration tests * Leverage dict comprehension in PEP-0274 * Fixed issue with waiting for ssh of deleted cluster * Default templates for MapR * Default templates for CDH * Default templates for Vanilla * Default templates for Spark * Add unit tests for default templates update functionality * Add unit tests for default templates delete functionality * Add unit tests for default templates utils * Default templates for HDP * Add a CLI tool for managing default templates * Add validation in new integration tests * Adding run time of tests * Add missed configs for ThriftJobTrackerPlugin * Minor - allow changing status description of deleting cluster * Updating horizon user guide to use new terminology * Docs updated with instance locality feature * Fix common misspellings * Add usages of poll util for service modules * Switched heat engine from JSON to HOT * Set cluster mode on every node * Adding plugin version information to scenario test report * Documentation for scenario tests * Set up network client for tempest client tests * Implement job-types endpoint support methods for MapR plugin * Drop support database downgrades * Add information about cluster state in test report * Fix topology awareness configuration * Add new log messages where it's needed * Add integration tests for scaling in Spark * Updated from global requirements * Generate random password for CM admin user * Add get and update user APIs * Add scenario files for new integration tests * Fix order of arguments in assertEqual - Part1 * Notify Kerberos and Sentry do not take effect * Raise the default max header to accommodate large tokens * Sync with latest oslo-incubator * Replace direct http requests by sahara client in Quick start guide * Add usages of plugin poll - part 1 2015.1.0b3 ---------- * Fix log import error in tempest tests for Sahara * Remove the sahara.conf.sample file * Add usages of plugin poll - part 2 * Apply event-log feature for HDP plugin * Implement job-types endpoint support methods for Fake plugin * Update MapR plugin docs * MapR validation rules fixed * Fix order of arguments in assertEqual - Part3 * Fix order of arguments in assertEqual - Part2 * Implement job-types endpoint support methods for CDH plugin 
* Implement job-types endpoint support methods for Spark plugin * Implement job-types endpoint support methods for Vanilla plugin * Add Spark support for MapR plugin * Install MySQL JDBC driver along with client * Default version update for vanilla integration test * Implement poll util and plugin poll util * Minor - misprint corrected * Imported Translations from Transifex * Move updating provision progress to conductor * Add usages for step\_type field * HDP plugin: Fix Beeswax error when starting Hue * Replace empty list with scalable process in scaling * Add missed translation for exceptions in versionhandler * Switch to v2 version of novaclient * Add support for MapR v4.0.2 * Changing method for verifying existence of cinder * [HDP] Add validation check for dfs.replication * Take back upstream checks for import order * Rewrite malformed imports order * Node Groups now have id field * Update the docs for CDH plugin userdoc and image-builder doc * Add an is\_default field to cluster templates and node group templates * Move cluster template schema definition to is own file * Added support of instance locality to engines * Rewrite log levels and messages * Move node group template schema definition to its own file * Add Sentry service test in cdh plugin integration test * Add transient checks support in scenario tests * Change imports after moving tempest common code * Add Hue support for MapR plugin * Skip job\_execution tempest client test * Add a common HBase lib in hdfs on cluster start * Take back upstream checks for commit message * Imported Translations from Transifex * HDP plugin: Fix Bash error when starting Hue * Adding barbican client and keymgr module * Fix tempest tests for Sahara * Updated from global requirements * Adding CDH to the list of default plugins * Added volume\_local\_to\_instance field support * [EDP][Spark] Configure cluster for external hdfs * Add validation for cluster templates update * Implement job-types endpoint support methods for HDP plugin * Add job-types endpoint * Changed heat engine to work with objects * Implemented multi-worker solution for Sahara API * Changed wrong value for total during step creation * Adding additional validation to node group template edit * [EDP] Add Oozie Shell Job Type * check solr availability integration testing without add skip\_test * Add validation for node group templates update * Add Impala service test in cdh plugin integration test * Applying event log feature for CDH - part 3 * Imported Translations from Transifex * Updated from global requirements * Refactoring methods for terminating * Apply event-log feature for Vanilla plugins * Add Impala support for MapR plugin * Add Solr service test in cdh plugin integration test * Add Sqoop support for MapR plugin * Add CM API lib into CDH plugin codes * Fix some translator mistakes * Adding ability to edit cluster templates * Removing alpha warning on distributed mode * Add missed files for migrations in MANIFEST.in * Fix indent miss caused by f4138a30c972fce334e5e2a0fc78570b0ddb288b * Applying event log feature for CDH - part 2 * Applying event log feature for CDH - part 1 * Add support of several scenario files in integration tests * Provide ability to get events directly from cluster * Add Key Value Store service test in cdh plugin integration test * Fix tempest client tests in Sahara * Remove unused field in job\_execution table * Collect errors in new integration tests * Add Drill support for MapR plugin * Minor - changed name of argument in mirgation tests * 
Minor - Added missing check for 'Deleting' state * Add support for oslo\_debug\_helper to tox.ini * Remove unused code (timed decorator) * Updated from global requirements * Add bare images support for MapR plugin * Add concurrency support in new integration tests * Add provisioning steps to Storm plugin * Adding ability to edit node group templates * Updated from global requirements * Fix transient cluster gating * Add Flume support for MapR plugin * Fixed format mapping in MalformedRequestBody * Reorganized heat template generation code * Add check to integration tests to check event-log * New integration tests - EDP * Add provision steps to Spark Plugin * New integration tests - scaling * New integration tests - base functional * Make status description field more useful * Imported Translations from Transifex * Updated from global requirements * Added periodic clean up of old inactive clusters * Refactor MapR plugin for Sahara * Add missing database updates for cluster events * Add option to disable event log * Fix problems with provisioning steps * Removed error log for failure inside individual thread * Add Sqoop service test in cdh plugin integration test * Add Flume service test in cdh plugin integration test * Updated from global requirements * Add ability to get cluster\_id directly from instance * Changing zookeeper to not use version number * Adding validation check for Spark plugin * [Vanilla2] Open ports for hive * Improve messages for validation * Add impala shell solr package in the cdh plugin * Add efficient method for detecting installed packages * Adding hacking check to prevent old oslo namespace usage * Refactor event-log code * Imported Translations from Transifex * Updated from global requirements * Config parameters beginning with "oozie." 
should be in job properties file * Add resource quota checks for clusters * Fixed bug with spark scaling * Remove obsolete oslo modules * Remove obsolete exceptions module * Adding missed oslo import change * Separate the codes of CDH5 and CDH5.3.0 * Initialize MQ transport only once * Removing service.engine.\_log\_operation\_exception 2015.1.0b2 ---------- * Using oslo\_\* instead of oslo.\* * Added documentation for indirect VM access feature * Updated from global requirements * Fixed unit tests failures caused by missing patch stops * Updated sample config after oslo messaging update * Add Swift integration with Spark * Using oslo context as context-storage for logs * Waiting should depends on cluster state * Open port 8088 for HDP 2.0.6 * Add indirect VMs access implementation * Remove log module from common modules * Specify the package name when executing Java type edp jobs * Fixed minor errors in Sahara DB comments * Drop cli/sahara-rootwrap * Add provision step to Heat engine * Make vanilla 2.4.1 plugin deprecated * Add CDH configuration in itest.conf.sample-full * Add swift and mapreduce test after scaling in cdh integration test * Add ability to search images by name * Fix getting not registered images * Add HBase service test in cdh plugin integration test * Spark Temporary Job Data Retention and Cleanup * Updated from global requirements * Update threadgroup oslo-incubator module * Update log oslo-incubator module * Fix incorrect s/oslo/sahara/ in \_i18n * Migrate to oslo.log * Refactoring datasource, job and job\_binary name validations * Updated from global requirements * Removed EXTRA\_OPTS tuning from devstack configuration * Follow the argument order specified in spark-submit help * Change CDH plugin Processes Show\_names * Updated from global requirements * Add edp.java.adapt\_for\_oozie config for Java Action * Fix getting heat stack in Sahara * Add cleanup in the integration test gating file * fix Direct engine moves cluster to "Scaling" twice * Updated from global requirements * Refactoring swift binary retrievers to allow context auth * Add integration test for Hive on vanilla2 * Add context manager to assign events * Drop uuidutils * Add refactor to Vanilla 1.2.1 * Removed unused variable from tests * Removed sad line * Imported Translations from Transifex * Fixed context injection for RPC server * Remove useless packages from requirements * Add provisioning steps to Direct Engine * Added endpoint and utils to work with events * Enable auto security group when Bug 1392738 is fixed * Fixed issues in docs * Adding hive support for vanilla 2.6 * Use pretty-tox for better test output * Adding usage of "openstack.common.log" instead of "logging" * Updated from global requirements * Add options supporting DataSource identifiers in job\_configs * Removing warnings in the MAPR doc plugin * Hide oslo.messaging DEBUG logs by default * Add integration tests for transient clusters * Move to hacking 0.10 * Use HDFS parameter to inject swift info * Added ability to listen HTTPS port * Added ability to use other services via HTTPS * Updated from global requirements * Enable 5.3 version choice in cdh plugin * Updated from global requirements * Updated from global requirements * fix the edp and hive test issue for CDH5.3 * Refactor db migration tests * Imported Translations from Transifex * Fixes a job\_configs update by wrong value when deleting proxy-user * Adding Storm entry point to setup.cfg * Cleaned up config generator settings * Extracted config check from pep8 to separate 
env * Fixed topology parameters help in config * Fixed pep8 after oslo update (01/06/2015) * Renamed InvalidException to InvalidReferenceException * Mount volumes with options for HDFS performance * Fixed vanilla1/2 cluster not launched problem * Increase RAM for CDH master processes in CDH IT * Minor refactoring integration tests * Migrate to oslo.concurrency * Adding ability to access context from openstack.common.log * Fixed hdfs mkdir problem in vanilla1 * Add Java type edp test in integration test of CDH plugin * Enable more services in CDH plugin * Adding database detection to migration tests * Fixed pep8 after keystoneclient upgrade * Added validation on proxy domain for 'hiveserver' process * Fix oslo.db import due to move out of the namespace package * Updated from global requirements * Add one more sample for pig job examples 2015.1.0b1 ---------- * Imported Translations from Transifex * Updated from global requirements * Use xml.dom.minidom and xmlutils in unit tests * Saharaclient tests for tempest * Enable HDFS NameNode High Availability with HDP 2.0.6 plugin * All user preserve EDP objects after test * Migrate to oslo.context * Use first\_run to Start Services * Removing unecessary check * Adding Hadoop 2.6.0 support to Vanilla plugin * Fixed configs generation for vanilla2 * Fixed auto security group for nova network * Updated from global requirements * Fixed subprocess error reporting * Fixed scaling with new node group with auto sg * Update oslo-incubator periodic\_task * Update oslo-incubator threadgroup * Update oslo-incubator policy * Update oslo-incubator log * Update oslo-incubator lockutils * Removed \_i18n module, it is not used directly * Updated from global requirements * Update conf sample after oslo.messaging release * Workflow documentation is now in infra-manual * Disabled requiretty in cloud-init script * Storm integration * Fixed Fake plugin for Fedora image * Update plugin descriptions * Add integration test for Hive EDP job * [CDH] Add validation for spark * Support searching job executions by job status * Don't provide CONF to the AuthProtocol middleware * Inherit Context from oslo * Sync latest context module from oslo-incubator * Specify CDH version * Add CDH plugin documents * Added get\_open\_ports description to plugin SPI * Add list of open ports for Spark plugin * Open all ports for private network for auto SG * [CDH] Convert node group config dict * Add test for DB schema comparison * Adding uuids to exceptions * Add db/conductor ops to work with new events objects * Add new events objects to Sahara * Fix broken unit tests * changes to quickstart * Remove py26 from tox * Fixed error on attempt to delete job execution several times * Added hive support for vanilla2 * Support searching job executions by cluster name and job name * Sample JSON files for Sahara EDP APIs * Updated from global requirements * Added checks on deleting cluster * small change to edp\_spi * small change to diskimagebuilder file * Support query filtering for cluster objects * Support query filtering for templates and EDP objects * Enable auto security group for vanilla integration tests * Updated from global requirements * Format volumes filesystems in parallel * Correcting small grammatical errors in logs * Imported Translations from Transifex * Replacing data\_processing with data-processing * Updated from global requirements * Pylint check was broken after pylint update * Refactoring integration tests for Vanilla 1 plugin * Fix for getting auth url for hadoop-swift * Fixed 
bug with Hive jobs fail * Fixed pep8 after oslo.db config update * Add HBase support to CDH plugin * Add ZooKeeper support to CDH plugin * Fixed auto security group cleanup in case of creation error * Adds doc to devref quickstart document * Add list of open ports for HDP plugin * Fixed trunk pep8 errors * Disable all set of tests (every plugin) by default * Print Cloudera manager logs if integration test failed * Added ability to access a swift from vanilla-1 hive * change to devstack.rst * corrected error in dashboard\_user\_guide * corrected error in overview.rst * corrected error in vanilla\_plugin.html * Add list of open ports for Cloudera plugin * Imported Translations from Transifex * Remove unused class and arguments * Updated from global requirements * Remove oslo-incubator's gettextutils * Drop obsolete oslo-confing-generator * Add link on Hue Dashboard for CDH plugin * Explicitly specifies cm\_api version in CDH plugin * Fixed job execution update in case of proxy command * Removing Swift container support for job binaries * Fixed cluster scaling in distributed mode * Auth policy support implementation * Fix working EDP jobs with non-string configs * Fix vanilla test\_get\_configs() for i386 * Added ability to launch jobs on fake plugin * Fix Cloudera plugin with CDH packages < 5.2.0 * typo found on Sahara Cluster Statuses Overview * Fix bugs on doc registering an image * Fix bugs on Sahara overview * Fix bug on features.rst doc * Fix bug on diskimagebuilder.rst * Make proxy command generic and user-definable * Add checks in fake plugin * Add scaling opportunity for fake plugin * Imported Translations from Transifex * Install ExtJS library for CDH plugin * Fix bug on Sahara UI Dev Environment Setup * Fix dict iteration usage * Fixing validation exception for valid security group * Remove explicit set of CONF.os\_region\_name in mapr plugin tests * Correcting error in NeutronClientRemoteWrapper.\_get\_adapters * Drop some obsolete oslo-incubator modules * Fix 'Clock Offset' error in Cloudera Manager * Add Spark support to CDH * Add missed translations * Added cancel before deleting job execution * Grouped EDP endpoints by type * changes to features.rst * change to edp.rst * Flush netcat socket buffer when proxying HTTP connections * Add Hue support to Cloudera plugin * Add hash to auto security group name for uniqueness * Invalid JSON in quickstart guide * Fix argument list in NeutronClientRemoteWrapper * Fix security groups * MapR plugin implementation * Fix old style class declaration * Imported Translations from Transifex * Fix quickstart guide * Drop obsolete wsgi and xmlutils modules * Add Hive support to CDH plugin * Fix parallel testing EDP jobs for Fedora and CentOS images * Small refactoring of get\_by\_id methods * Use oslo.middleware instead of copy-pasted * Sync with oslo-incubator and removing excutils * Updated from global requirements * Adds openSUSE support for developer documentation * MapR FS datasource * Add volume type support to sahara * Correct parameter name in integration tests * Updated from global requirements * Updated from global requirements * [DOC] Add notes on disabling permissions for Data Processing * Fixed problem with canceling during pending * Remove Vanilla 2.3 Hadoop * Support Cinder availability zones * Add bashate checks * [DOC] Added multi region deployment to features list * Use new style classes everywhere * [DOC] Fixed link from upgrade guide to installation guide * [DOC] Fixed broken list in edp.spi doc * [DOC] Minor change - replaced 
external link with internal * [IT] Fix deleting transient cluster when cluster in error state * Fix bashate errors * Imported Translations from Transifex * Updated from global requirements * Moved exceptions.py and utils.py up to plugins dir * Adding support for oslo.rootwrap to namespace access 2014.2 ------ * Fix HDFS url description, and other various edits * Remove line saying that scaling and EDP are not supported for Spark * Description of job config hints in new doc page is wrong * Removing extraneous Swift information from Features * Update the Elastic Data Processing (EDP) documentation page * Add documentation on the EDP job engine SPI * Imported Translations from Transifex * Fix working Spark with cinder volumes * Fix scaling with Heat and Neutron * Fixed volumes configuration in spark plugin * Fixed cinder check for non-admin user * Make versions list sorted for Vanilla and HDP * Imported Translations from Transifex * Fix working Spark with cinder volumes * Fix scaling with Heat and Neutron * Support Cinder API version 2 * Parallel testing EDP jobs * Fix HDFS url description, and other various edits * Fixed cinder check for non-admin user * Support Nova availability zones * Remove line saying that scaling and EDP are not supported for Spark * Description of job config hints in new doc page is wrong * Removing extraneous Swift information from Features * Update the Elastic Data Processing (EDP) documentation page * Add documentation on the EDP job engine SPI * Fixed volumes configuration in spark plugin 2014.2.rc1 ---------- * Add links for Spark images * Use $((EXPRESSION)) instead of $[EXPRESSION] * Open Kilo development * Sahara UI panels configuration docs updated * Updating RDO installation documentation * Update custom hacking checks * Update CONTRIBUTING.rst * Added docs for running Sahara in distributed mode * Removed mentions of Sahara Dashboard * Adding Spark to the list of default plugins * [DOC] Changed feature matrix for Spark * Fixed broken pep8 after keystone update * Adding job execution examples to UI user guide * Updating Hadoop-Swift documentation * Add CDH plugin in plugin availability matrix (userdoc) * Updated from global requirements * Add devref/devstack to docs index * Adding links for Juno Fedora images * [DOC] Removed feature matrix for heat engine * Image building docs updated * Updated REST API documentation * Update links for plugin images * [DOC] Made disk image builder docs more accurate * [DOC] Made EDP requirements plugin specific * [DOC] Switched docs from answers.launchpad.net to ask.o.o * [DOC] Fixed deprecated config style in devstack instruction * Adding missing CDH resources to MANIFEST.in * [Vanilla] Increased security of temporary files for db * Changed hardcoded 'hadoop' hdfs user name to template * Use 'auth\_uri' parameter from config * Changing Hadoop to "Data Processing" * Updating documentation for overview/details * Imported Translations from Transifex * Add pip-missing-reqs tox env * Add genconfig tox env * Fix typo in CDH description * Updated from global requirements * [DOC] Minor change - added missing period * Add entry for Yevgen Runts to avoid dup author * Add entry for Sofiia to avoid dup author * Add entry for Andrey Pavlov to fix author name * Add entry for Kazuki Oikawa to avoid dup authors * [DOC] Removed note about SAHARA\_USE\_NEUTRON in sahara-dashboard * Imported Translations from Transifex * Imported Translations from Transifex * Fixed descriptions for db migrations * Fixed example of hadoop versions return in 
plugin SPI * Removed remaining 'swift-internal' prefix * Add missed translations at service/validations/edp * Remove direct dep on oslo-incubator jsonutils * Sahara-Dashboard docs updated * Imported Translations from Transifex * Refactoring HDP plugins to allow multiple Zookeeper servers * Updated from global requirements * Added information about sahara settings to cluster * Fixed the localrc file for enabling swift services * Fixed terminate\_unneeded\_clusters fail in distributed mode * Default value of 'global/namenode\_opt\_maxnewsize' should be 200m * Adding documentation for proxy domain usage * Removed attempt to ignore tests in pylint * Remove direct dep on oslo-incubator timeutils * Update oslo processutils module * Update oslo lockutils module * Update oslo log module * Update oslo jsonutils module * Sync oslo strutils module * CDH manager-node flavor change * Add use of nova\_kwargs for nova servers create to improve readability * Imported Translations from Transifex * Renamed pylintrc to be found by pylint * Made link to devstack installation internal (instead of external) * Moved validate\_edp from plugin SPI to edp\_engine * Install packages for CDH plugin without their starting * Install non deprecated DB for Cloudera Manager * Added missed translation for service.edp.spark * Adding a periodic task to remove zombie proxy users * Refactoring DataSources to use proxy user * Updating JobBinaries to use proxy for Swift access * Adding trust delegation and removal for proxy users * Adding proxy user creation per job execution * Adding configuration and check for proxy domain * Migrate to oslo.serialization * Renamed missing 'savanna' tags to 'sahara' * Fix cluster creation with heat engine * Update sahara.conf.sample * Imported Translations from Transifex 2014.2.b3 --------- * Imported Translations from Transifex * Fixed typo in integration tests error handling * Add warn re sorting requirements * Add spark to toctree on doc index page * Fix doc issues * Add doc8 tox env * Replaced range with six.moves.range for significant ranges * Removed comment about hashseed reset in unit tests * Allowed to specify IDs for security groups * Switched anti-affinity feature to server groups * Moved get\_oozie\_server from plugin SPI to edp\_engine * Moved URI getters from plugin SPI to edp\_engine * Updated docs with security group management feature * Minor change - removed unnessary parentheses * Added translation for CDH plugin description * [HEAT] Fixed rollback error on failure during scale down * Implemented get\_open\_ports method for vanilla hadoop2 * Added ability to create security group automatically * Catching all connection errors in waiting HDP server * Make starting services in Vanilla 2.4.1 parallel * Add notifications to Sahara * Fix help strings * Updated from global requirements * Waiting connect cloudera agents to cloudera manager * [HDP1.3.2] Fixed bug with decommissioning cluster * Imported Translations from Transifex * Remove host from CDH cluster after decommissioning * Enable swift in IT for CDH by default * Documented heat engine backward compatibility break * Use Vanilla 2 plugin for transient checks * Use auth\_token from keystonemiddleware * Updated from global requirements * Fix updating include files after scaling for vanilla 2 plugin * Add EDP IT after scaling for vanilla 1 plugin * Make Vanilla 2.3.0 plugin deprecated * Imported Translations from Transifex * Adjust RESTAPIs convert-config w/suggests from SL * Removed sqlite from docs * Removed support of 
swift-internal prefix * Removed one round trip to server for HDFS put * Added create\_hdfs\_dir method to oozie edp engine * Made EDP engine plugin specific * Do not rely on hash ordering in tests * Fix some of tests that rely on hash ordering * Fix jsonschema>=2.4.0 message assertion * Fixed wrong use of testtools.ExpectedException * Fix using cinder volumes with nodemanager in HDP2 * Correction of words decoMMiSSion-decoMMiSSioning * Add tests for ops.py * Add Spark integration test * Fix starting instances after scaling for CDH * Improved error handling for provisioning operations * Fix parsing dfsreport for CDH in integration tests * Unit tests for CDH plugin * Imported Translations from Transifex * Updated from global requirements * Create etc/edp-examples directory * Fixed Exception failures caused by i18n * Add translation support to plugin modules * Imported Translations from Transifex * Remove unused parameter from CDH IT * Fix scale up cluster on CDH plugin with vanilla image * Fixed DecommissionError bug * Imported Translations from Transifex * Fixed bug with NotFoundException * Migration to oslo.utils * Imported Translations from Transifex * Fixed concurrent job execution with external hdfs * Update oslo.messaging to alpha/juno version * Update oslo.config to the alpha/juno version * Updated from global requirements * Move middleware package to api package * Imported Translations from Transifex * Removed a duplicate directive * Added ability to specify security group for node group * Fixed cluster rollback on scaling with heat engine * Fix closing HTTP session in Ambari plugin * Add test for storing data in DB for 007 migration * Group tests by class * Imported Translations from Transifex * Fixed a ValueError on provisioning cluster * Adding job execution status constants * Add a Spark job type for EDP * Fix put\_file\_to\_hdfs method in hdfs\_helper * Set python hash seed to 0 in tox.ini * Adding generic trust creation and destruction methods * Add oslo.messaging confs to sample config * Fixed logging about changes of cluster status * Add translation support to service and missed modules * Imported Translations from Transifex * Implement EDP for a Spark standalone cluster * Imported Translations from Transifex * Waiting deleting Heat stack * Integration tests for CDH plugin * Add CDH plugin to Sahara * Add rm from docs env to whitelist to avoid warn * Add translation support to service and utils modules * Migration to oslo.db * Imported Translations from Transifex * Removed extra work in case of no volumes * Add translation support to upper level modules * Adding sanitization for trusts in JobExecution model * Removed code duplication on cluster state change * Mark floating-IP auto-assignment as disabled also with Neutron * Updated from global requirements * Use with\_variant method for dialects db types 2014.2.b2 --------- * Delete migration tests for placeholders * Fixed bug with empty "volumes" when heat engine is used * Add support testing mr job without log checking * Migrate integration tests to oslotest * Append to a remote existing file * Fixed diction: VMWare should be VMware * Imported Translations from Transifex * Fix a auth\_uri cannot get in sahara-engine * Create an option for Spark path * Bump Hadoop to 2.4.1 version * Wrap eventlet's Timeout exception * Imported Translations from Transifex * Add support skipping EDP tests for vanilla 2 plugin * Update oslo-incubator db.sqlalchemy module * Update oslo-incubator threadgroup modules * Update oslo-incubator 
processutils module * Update oslo-incubator periodic\_task module * Update oslo-incubator network\_utils module * Fix creating cluster with Vanilla 2.4.0 plugin * Fixes failure to scale cluster adding new Hive or WebHCat service * Revert "Fix use of novaclient.exceptions.NotFound" * Renamed Pending to PENDING fixes bug 1329526 * Update oslo-incubator loopingcall module * Update oslo-incubator context module * Update oslo-incubator config.generator module * Update oslo-incubator lockutils module * Update oslo-incubator fileutils module * Update oslo-incubator log module * Fix scaling cluster Vanilla for Hadoop 2.3 * Updated from global requirements * Add vanilla plugin with Hadoop 2.4.0 * Fixed configuring instances for Vanilla 2.0 * Fix hardcoded username(ec2-user) for heat-engine * Fixed EDP job execution failure * Fix use of novaclient.exceptions.NotFound * Update oslo-incubator excutils module * Update oslo-incubator jsonutils module * Update oslo-incubator importutils module * Update oslo-incubator strutils module * Update oslo-incubator gettextutils module * Update oslo-incubator timeutils module * Allow plugins to choose the EDP implementation * Refactor the job manager to allow multiple execution engines * Use oslo.i18n * Add oslo.i18n lib to requirements * Update image registry docs to use cli * Imported Translations from Transifex * Remove docutils pin * Fixed hadoop keys generation in case of existing extra * Switched Sahara unit tests base class to oslotest * Update doc for REST endpoint convert-config * Extend status\_description column in Clusters tables * Updated from global requirements * Update docs to reflect the changes in security group section in horizon * Fix formatting in readme for vanilla configs * Added validation check for number of datanodes * Imported Translations from Transifex * Fix tools/get\_auth\_token * Corrected a number of pep8 errors * Changed HDP unit tests base class * Updated from global requirements * Fixed volumes mount in case of existing volumes * Adds DataNode decommissioning support to HDP Plugin * Refactoring vanilla 2 plugin * Fix docs to use sahara-all instead of sahara-api * Use immutable arg rather mutable arg * Upgrades the HDP plugin to use Ambari 1.6.0 * Fix detaching cinder volumes * Updated from global requirements * Upgrades the HDP plug-in to install Hue * Fixed number of hacking errors * Updated from global requirements * Small fixes in README migration file * Imported Translations from Transifex * Implement scaling for Spark clusters * Installation guide updated * Fix Sahara CI links * Fixed H405 pep8 style check * Updated from global requirements * Make deleting transient clusters safe * Fix docs for configuring authentication * Handle remote driver not loaded situation * Migrated integration tests to testtools * Remove vim editor configuration from comments * Fixed indent in testing docs * Updated from global requirements * Imported Translations from Transifex * Fixed E265 pep8 * Removed cluster retrieving in provisioning engine * Added new hacking version to requirements * Updated from global requirements * Hided not found logger messages in unit tests * Migrated unit tests to testtools * Sync up oslo log module * Fixed /etc/hosts update for external hdfs * Fixed status update for job execution * Update job execution status on cluster deletion * Fixed remote call in external HDFS configuration method * Remove usage of remote from HDP Instance constructor 2014.2.b1 --------- * Added jobhistory address config to vanilla 2 * Added 
secondary name node heap size param to vanilla plugin * Minor EDP refactoring * Update documentation for Spark 1.0.0 * Use in-memory sqlite DB for unit tests * Imported Translations from Transifex * Added several checks on deleted cluster to prevent error logs * Changing job excecution status to 'FAILED' in case of exception * Add Spark 1.0.0 to the version list * Rework keystone auth\_token middleware configs * [HDP] Integration tests for HDP 2.0.6 * Add Spark to overview and feature matrix * Documentation for the Spark plugin * Adding disconnected mode fixes to hdp plugin * [HDP] Changed test tag for HDP1 plugin * Made Swift topology optional for data locality * Add warn re alpha readiness of distrib mode * Updated from global requirements * Sync the latest DB code from oslo-incubator * Added ability to run HDFS service only with Hadoop 2 * Removed versions from Vanilla plugin description * Fixed oozie component name in HDP exception * Added validate\_edp method to Plugin SPI doc * Added validation for long hostnames * Add upgrade notes for sahara-api to sahara-all * Updated from global requirements * Replaced RuntimeErrors with specific errors * remove default=None for config options * Removed unused global var and unnessary param * Add Spark plugin to Sahara * Fix intermittent transient cluster tests failure * Synced jsonutils from oslo-incubator * Added validation check that network provided for neutron * Remove unused parameters in integration tests * Remove unused function from xmlutils * Fix typo: Plaform -> Platform * Fix working sahara with heat and nova-network * Removed unneeded check on job type during job execution * Add ".sahara" suffix automatically to swift URLs in workflows * Removed migration-time config folders lookup * Remove all mostly untranslated PO files * Made processes names case sensitive * replaced e.message * Remove monkey\_patch from test\_context * Fix hardcoded tenant name for job binaries * Imported Translations from Transifex * Run periodics in sahara-engine instead of sahara-api * Create trusts for admin user with correct tenant name * Imported Translations from Transifex * Updated from global requirements * Clean up openstack-common.conf * correcting the MANIFEST.in paths * correcting the MANIFEST.in paths * Extended plugin SPI with methods to communicate with EDP * Allow HDFS data source paths without the hdfs:// scheme * Improve validation for swift data source URLs * Imported Translations from Transifex * Updated from global requirements * Replaced the word components with component(s) * Updated from global requirements * Synced jsonutils from oslo-incubator * Split sahara into sahara-api and sahara-engine * [IT] More coverage of EDP in tests * Add sahara-all binary * Imported Translations from Transifex * Fix eventlet monkey patch and threadlocal usage * Change the package name of the example to org.openstack.sahara.examples * Imported Translations from Transifex * Fix running EDP job on transient cluster * Add simple fake plugin for testing * Imported Translations from Transifex * Moved information about processes names to plugins * Updated architecture diagram in docs * Forced lowercase for instance names * Improved validation for data-sources creation * Add upgrade doc stub page * Updated from global requirements * Add secondarynamenode support to vanilla 2 plugin * [IT] More coverage of EDP in tests * Add tenant\_id getting in integration tests * Added support of multi-region environment * [IT] Fixed error when skipping scaling test * Fixed 
validation of novanetwork w/o autoassignment * Avoid deleting transient cluster before job is started * Fixed wrong exceptions use for decommission errors * Implementing constants for the job types used by EDP * Change IRC channel name to #openstack-sahara * Imported Translations from Transifex * Remove IDH plugin from sahara * Fix storing binaries in Swift * Updated hdp\_plugin features to align with current capabilties * Saharaclient must be installed for UI to work in dev environment * Change links to images in Quick Start guide * REST API 1.1 corresponds to Icehouse as well * Updated validation section for Vanilla Plugin * Add \*.log files to gitignore * Fix up DevStack guide * Imported Translations from Transifex * Cleanup of docs for integration tests * Fix up Sahara UI installation guide * Updated from global requirements * Fixed wrong use of SaharaException * Update links for vanilla images in doc * Minor fixes to Sahara UI Installation Guide * Fix big job binary objects in mysql * Doc's update for integration tests * Removed possibility to run job w/o Oozie * Removed impossible branch of 'if' statement * Fix up installation guide * Add a custom filter method to scan wrapped dict results * Check that all po/pot files are valid 2014.1.rc1 ---------- * Add examples of upstream files that we should not change * Updating the setup development environment docs for icehouse * Update EDP requirements for hadoop v2 * Added rackawareness to Hadoop 2 in vanilla plugin * Do not document use\_identity\_api\_v3 in the sample-basic file * Add short info re testing * Reserve 5 migrations for backports * Compact all Icehouse migrations into single one * Added parameters to configure a list of node group processes * Add description to use IDH plugin with requests * Fixed tests failures when SKIP\_ALL\_TESTS\_FOR\_PLUGIN=True * Fix db management: don't autocreate db on start * Updating the vanilla image building docs * Add a page to the developer guide on Alembic migrations * Add a paragraph discouraging modification of upstream files * Open Juno dev * Update REST api docs * Updating dashboard user guide doc for icehouse * [IDH] Integration tests for IDH 3.0.2 * [IDH302] Restoring cluster parameters after scaling * Fix check active nodemanagers for vanilla 2 plugin * Heat docs update * Fix default repo links and tarball links for IDH * Add EDP integration tests for vanilla 2 plugin * Filter 'fields' from JobExecutions returned from REST api * Renamed 'idh' integration tests to 'idh2' * Standardize README header * Fixed wrong attached volume's names via Heat * Some configs updates for vanilla 2 plugin * Remove Mirantis copyright from README * Add EDP support for Vanilla 2 plugin * Add fixed and floating IPs discovery via neutron * Updated from global requirements * Change tag for vanilla integration test to 'vanilla1' * Remove agent remote * Fix parallel running integration tests with vanilla plugins * Fix transient clusters termination * Add note about OS\_TENANT\_\* to integration tests * Add integration tests for vanilla 2 plugin * Validate data sources reference different resources * Add transient tag to transient cluster test * Fix running integration tests by tag * [IDH] Fixed cluster scale down * Filter credentials in jobs returned from REST api * Fixed incorrect use of RuntimeError * Rename missed env variables in oslo code * Move swift configs to core-site.xml * Prepare integration tests for use for hadoop 2 * Imported Translations from Transifex * Updated from global requirements * Added 
missing lib to dev UI installation guide * Added python-pip installation to dev environment instruction * Rename strings in plugins dir * Missed renames in code base * Missed renaming in docs * Integration test for a transient cluster was added * Add Job History Server process to vanilla 2 plugin * Fixup 'savanna' references in run\_tests.sh * Override 'savanna' strings in openstack/common * Miscellaneous renaming string fixes * Change remaining references in the doc subdir * Change savanna references in top level docs * Completely remove etc/savanna dir * Move integration tests to python-saharaclient 0.6.0 * Imported Translations from Transifex * Change remaining savanna namespaces in setup.cfg * Change 'savanna' references in tools * Renaming files with savanna words in its names * Change remaining 'savanna' references in sahara/tests * Change "\_savanna\_" image properties to "\_sahara\_" * Keep python 3.X compatibility for xrange * Rename 'self.savanna' to 'self.sahara' in integration tests * Change the 'savanna-db' scheme to 'internal-db' * Changed Savanna to Sahara in documentation images * Move the savanna subdir to sahara * Replaced or removed Savanna words in comments * Replaced all Savanna words in class names * Renames all doc references from Savanna to Sahara * Update i18n config due to the renaming * Renamed all swift-dependent configs to sahara * [IDH] Initial documentation for IDH plugin * We're now using nove client >= 2.17.0 * [IDH] Fixed history server assignment * Fixed reference errors in docs * Update .gitreview to point on updated repo * Updated from global requirements * Cleanup openstack-common.conf * Updated from global requirements * Update oslo-incubator config module * Update oslo-incubator service module * Fixed typo in rollback function description * Make savanna able to be executed as sahara * Removed log message duplication * Update oslo-incubator context module * Update oslo-incubator processutils module * Update oslo-incubator periodic\_task module * Update oslo-incubator loopingcall module * Update oslo-incubator log module * Update oslo-incubator jsonutils modules * Update oslo-incubator importutils module * Update oslo-incubator excutils module * Update oslo-incubator gettextutils module * Update oslo-incubator common module 2014.1.b3 --------- * Fixed bug with unxpected stack delete * Minimal "lifetime" of transient cluster * Add cluster validation to vanilla 2 plugin * Add scaling support to vanilla 2 plugin * Removed EDP dependency on hive server * Updated from global requirements * Add swift support to vanilla 2 plugin * Use keystone v3 api by default * Add alias 'direct' for savanna/direct engine * Expand cluster-template usage validation message * Make decommissioning timeout configurable * Intial Agent remote implementation * [IDH] Added IDH 3.0.2 support * Updated features comparision heat with direct engine * Add Hadoop 2 vanilla plugin * Added scaling parameters to HDP plugin config * Removed EDP dependency on job\_tracker instance * [IDH] Added ability to support several versions * Fix scale down cluster * Updated from global requirements * Updated from global requirements * Fixed itests to work with new savannaclient * Changed get\_node\_groups to receive only one node process * [IDH] Removed copy-pasted test utility file * Added IDH plugin to savanna config * Replace service-specific exceptions with general (continuation) * Throw exception if get\_instance found several candidates * Added EDP test for HDP plugin * Improve help strings * 
Updated from global requirements * Added networks validation * Updated from global requirements * Replace assertEqual(None, \*) with assertIsNone in tests * Make savanna-db-manage able to discover configs * Filter credentials field in data\_sources returned from REST api * Expand swift data source credential tests * Fix non-deterministic a-a test * Add ability to support several versions vanilla plugin * Cinder test to integration tests was added * Replace service-specific exceptions with general * Speed up of Heat provisioning via Neutron * Hiding neutron Client class * Move client docs to python-savannaclient * Fix running IT for IDH plugin * Expand node-group-template usage validation msg * Auto generate and check config sample * Move REST API docs to separated dir * Standardize config sample locations * Fix how migration's cli register db connection opt * Delete 'links' only if it is present * Shorten swift-internal:// to swift:// * Add run\_test.sh for running tests * Attach volumes in parallel * Keep py3.X compatibility for urllib * Use six.moves cStringIO instead of cStringIO * Fix swift data source credential validation * Don't raise MySQL 2013 'Lost connection' errors * Add integration tests to Intel plugin * Fix cluster scaling in IDH plugin * Enable HDP 2 deployment leveraging HDP plugin * Filter credentials when returning job binaries through REST api * Add support retrying rest call in IDH plugin * Add userdoc install instructions for Fuel * Switch over to oslosphinx * Sort modules in openstack-common.conf * Rename Openstack to OpenStack * Use six.moves.urllib.parse instead of urlparse * Remove extraneous vim configuration comments * Fixed hadoop dir creation during hadoop-swift lib download * [IDH] Fixed cluster start without jobtracker service * Remove all support for "Jar" as a job type (alias for "MapReduce") * Further preparation for transition to guest agent * Add support for dotted job types * Remove compatibility code allowing "args" as dict * Fixed a small typo * Fix imports ordering and separation * Sync with global requirements * Make remote pluggable * Fix typo in savanna/tests * [IDH] Fixed cluster start without jobtracker service * Add utilities for supporting dotted job types * Remove extra Java job type fields from JobExecutions * Modify the REST doc to show a Java job type execution * Update the edp user doc to discuss "edp." configs for Java jobs * Move 'main\_class' and 'java\_opts' into edp.java configs * Default OpenStack auth port was changed * Sync with global-requirements * Refactored unit tests structure * Add integration test for streaming mapreduce * Add validation check for streaming elements on MapReduce without libs * Generate streaming tag in mapreduce job * Extract configs beginning with "edp." 
from job\_configs['configs'] * [DOC] Fixed link to oozie in docs * Add tag generation to mapreduce workflow * Imported Translations from Transifex * Separated "tests for utils" and "utils for tests" in unit tests * Remove kombu from requirements * [Integration tests]Deleted unnecessary underscores * Fixed HDP plugin to support Heat engine * Validation of job execution data should raise InvalidDataException * Update oslo-incubator db.sqlalchemy module * Update oslo-incubator py3kcompat module * Update oslo-incubator middleware.base module * Update oslo-incubator processutils module * Update oslo-incubator service module * Update oslo-incubator threadgroup module * Update oslo-incubator log module * Update oslo-incubator timeutils module * Update oslo-incubator gettextutils module * Small fix in development install guide * Fix nova client initialization arguments * Setup logging for wsgi server * Bump stevedore to >=0.14 * Fixed potential problems with global CONF in unit tests * Enable EDP on private neutron networks * Allow boolean "streaming" in Job JSON * Added more strict check for heat stack statuses * Updated from global requirements * Require "libs" for MapReduce and Java jobs and disallow "mains" * Fixed reading topology file with newline at the end * Fixed potential problems in test\_periodic.py * Add a config flag to disable cluster deletion after integration test * Add an hdfs data source example to the rest doc * Update Ambari Repo location and services refactoring * Fixed HDP plugin to support Heat engine * Updated from global requirements * Fixed typo in unit tests utility method * Removed underscore from valid symbols for names used as hostname * Made general name validation less strict * Add support deprecated db param in savanna-db-manage * Disable autocreating database when start savanna * Update install guide * Make error logging more safe * Added short doc about new Heat engine 2014.1.b2 --------- * Add integration test for Oozie java action * Updated from global requirements * Read Swift credentials from input\_data OR output\_data * Add alembic migration tool to sqlalchemy * Update EDP doc * Imported Translations from Transifex * [IDH] Added config controlling hadoop-swift.jar URL * [Vanilla] Updated docs to point to icehouse images * Change configs["args"] to be a list for Pig jobs * Ignore key/value pairs with empty keys in workflow generation * Add code to configure cluster for external hdfs * Imported Translations from Transifex * Add support for HBase in HDP plugin * Imported Translations from Transifex * Add missed i18n configs to setup.cfg * Enable check of Heat engine for Vanilla and HDP * Enable heat engine to launch cluster without keypair * Fix installation intel plugin * [IDH] Fixed work with cluster configs * Added 'oozie' service support to IDH plugin * Fixed wrong instance name with Heat engine * Added anti-affinity feature to Heat engine * Changed Vanilla plugin to use ports from config * Changed HDP plugin to use ports from config * Add util method to get port from address * Update sample savanna config * [Vanilla] Added unit test on get\_hadoop\_ssh\_keys method * Added cache for image\_username * Fixed cluster template with no nodegroups creation * Extract common part of instances.py and instances\_heat.py * Remove unused node\_group parameter in get\_config\_value * Minor exception text changes * Update oslo-incubator db.sqlalchemy module * Update oslo-incubator db module * Update oslo-incubator py3kcompat module * Update oslo-incubator service 
module * Update oslo-incubator gettextutils module * Update oslo-incubator timeutils module * Add Oozie java action workflows * Eliminate extra newlines in generated workflow.xml * Fix typos in edp integration test utility method name * Fix typo in error message * Fix typo in error message * Update oslo-incubator db module * Update oslo-incubator service module * Fix deleting cinder volumes * Properly catch timeout exception raised in thread * Added unit-tests to Heat engine * Fix mounting cinder volumes * Adding IDH plugin basic implementation * Reset CONF for topology\_helper and services during unit tests * Delete determine\_cluster\_config method from vanilla plugin * Fixed issue with undeleted instances * Integration tests related changes * Do not check the status of a job execution if Oozie id is None * Moved tests for general utils out of vanilla package * Node group handling improved in the db module * Update oslo-incubator processutils module * Update oslo-incubator loopingcall module * Update oslo-incubator periodic\_task module * Update oslo-incubator log module * Update oslo-incubator excutils module * Update oslo-incubator db.sqlalchemy module * Update oslo-incubator timeutils module * Wait for HDFS readiness after datanode services start * Increase timeout for Ambari server setup * Minor refactoring of vanilla create cluster * Fixed reporting about new cluster state * Changing oozie libs setup to manual copy * Removal of AUTHORS file from repo * Change "Jar" job type to "MapReduce" * Template names in integration tests were changed * Add generating new keypair to hadoop user in vanilla plugin * Removed cloud user private key pushing to nodes * Enable data locality for HDP plugin * Integration tests related improvements * Added heat service retrieving from keystone catalog * Fix getting cinder devices in heat * Remove properties from Object classes * Added 'gcc' to requirements in dev instructions * Launch integration tests with testr * Provisioning via Heat * Migrating to testr * Python client docs added * Docs in integration tests were updated * Sync requirements: pin Sphinx to <1.2 * Fix some typos in configs/messages * Integration tests have image related changes * Sync minor updates in oslo * Sync minor updates in oslo.db module * Add py3kcompat utils module * Oslo sync: make wait/stop funs work on all threads * Bump savanna client used for tests to >= 0.4.0 * Make infrastructure engine pluggable * Fixed link to how\_to\_build\_oozie page from index * Added savanna component to devstack installation instruction * Use stevedore for plugins loading * Enable cluster deployment with pre-installed JDK * Remove plugin from service/instances.py * Drop os.common.exceptions * Fixed wrong flavor validation * Use @six.add\_metaclass instead off \_\_metaclass\_\_ * Use six for iter keys/values 2014.1.b1 --------- * Remove missed call get\_plugin\_opts * There is no sense to keep py33 in tox envs * Added Neutron support to integration tests * Added missing default message in InvalidCredentials exception * Improved error handling in vanilla plugin * Fix getting hidden vanilla plugin parameters * Remove unused oslo libs * Remove unused plugins opts support * Fix typo in node group property documentation * Removed usages of uuidutils.generate\_uuid() * Revert "Support building wheels (PEP-427)" * Fixed bug when Oozie heap size is not applied * Add support for sqoop service in HDP plugin * Bump version to 2014.1 * Support building wheels (PEP-427) * Enable EDP with HDP plugin * Hacking 
contains all needed requirements * Replace unicode() with six.text\_type() * Fix auth url in swift * Remove check already released in hacking 0.8.0 * Fix style errors and upgrade hacking * Replace copy-pasted HACKING.rst with link * Upgrade openstack common from oslo-incubator * Convert to modern form of openstack-common.conf * Fixed Integration tests * Added json REST samples for edp * Changed use of images for integration tests * A timing/profiling utility for savanna * Changed use of flavors for Integration tests * Add support for cinder to HDP plugin * update installation guide * update guide document * Added check to string validations to skip non-strings * Add a general requirements section for guest images * Add Oozie building instruction * Set iso8601 logging level to WARN * Enable network operations over neutron private nets * Add a requirements section to the EDP doc * Fix web UI ports bug in vanilla plugin * Sync with global-requirements * Add missing flag to UI docs * Add support for oozie in HDP plugin * Added a check for Oozie configs * Remove duplicate retrieve\_auth\_url * Add support for Hive related services * Integration test for Swift has changes * Make 'ls' check threaded * Include Vanilla Plugin \*.sql files * Changed Integration tests * Docs for integration tests was added * Revert "Add link to centos image" * Add link to centos image * Fixed some warnings during doc building: * Decreasing integration test for cluster configs 0.3 --- * Use release version of python-savannaclient * Added REST API v1.1 section * Include the EDP Technical Considerations page in the EDP page * Add content to userdoc/edp.rst * Fix bug with auth\_token in trusts * Refreshed sample config files * Add lower bound for the six dep * Remove the section label markups for EDP 0.3.rc4 ------- * Use python-savannaclient 0.3.rc4 * Minor docs restructurization * Remove KeypairManager.get monkey patch * Update end time in job execution after job complete * Unconditionally monkey patch nova.keypairs.get * Changing SAVANNA\_URL to use v1.1 of the savanna-api * Integration test improvements * Enhance logging * Remove extra agrument from call of run\_job after cluster start 0.3.rc3 ------- * Use savanna client 0.3-rc3 * Add validations for name fields in all EDP objects * Replace DBError with DeletionFailed for DataSource and Job * Remove the 2.0 related version code * Fix the \_assert\_types test to allow for fields that are enums * Add \_\_init\_\_.py file to enable edp validation tests * Add roles to trusts creating * Fixed issue with wrong validation of jobs creation * First cut at UI documentation * Fix auth url retrieval for identity * Fix lost anti\_affinity field from Cluster Template * Add a page for documentation of the Savanna Python client * Another change to parallelize Vanilla plugin provisioning * Added EDP testing * Remove unused EDP JSON validation schemes to prevent confusion * Need to empty /tmp/\*-env.sh, before appending * config\_helper.py doesn't handle negative int correctly * Added data-locality feature description * Sync openstack common with oslo stable/havana * Move swift client to runtime requirements * Hide savanna-subprocess endpoint from end users * Docs for Cluster statuses * Added rack topology configuration for hadoop cluster * Add new EDP sections to the documentation tree 0.3.rc2 ------- * Configuring state hanging fixed * Fix database model for Job Binary * Bump savanna client version to 0.3-rc2 * Right import of modules in itests was made * Excessive log was deleted 
* Docs updated for image usernames * Close FDs for subprocesses * Follow hacking about import * Adding Denny Zhang to AUTHORS * Delete constant 'GENERAL\_CONFS' from config\_helper.py * Fixed typos in docs * Add back copy-pasted theme for Read The Docs only * Starting Job Execution in separate thread * Update stackforge links to openstack * Fix typos in userdoc * Add missing package dependency for test\_requirements.txt * Fix docs layout for Read The Docs * Fix version generation (pep8) * Update .gitreview file following repository move * Increase timeout for decomission operation * Sync with global requirements * Trusts for longrunning tasks * Allow job binaries to be retrieved from internal swift 0.3.rc1 ------- * Impl multitenancy support * Replace copy-pasted sphinx theme with oslo.sphinx * Improvements of integration tests * Add support for multiple HDP versions * Implement threaded SSH for provisioning and Vanilla plugin * Print request body when log-exchange flag is true * Add admin context for non-request ops * Migration to new integration tests * Added missed default configs for Oozie-4.0.0 * Removing line breaks from default configs 0.3a1 ----- * Add /jobs/config-hints/ REST API call * Add running hdfs operations from plugin specific user * Revert bump of alembic version * Integration test refactoring * Fix submitting hive job * Oozie manager enhancement * Bump oslo.config version to use Havana release * Refactoring job execution flow * Fix Cinder volumes support with xenserver * Doc fix for replacement of Hadoop version in Vanilla plugin * Add default sqlite db to .gitignore * Impl context.to\_dict() * Set default log levels for some third-party libs * Temporarily fixes bug #1223934 * Sync requirements with global requirements * Remove version pbr pins from setup\_requires * Enable swift integration * Edit doc for diskimage-builder * Get ambari mirror with curl instead of wget * Floating ip assignement support * Added validation for 'default\_image\_id' field for cluster create * Fixed wrong usage of SavannaException * Fix exception handling in Savanna subprocessing * Add horizon install instructions for RDO * Add pointer to userdoc from horizon guide * Add userdoc install instructions for RDO * Refactor job manager to isolate explicit references to job type * Fixed rep\_factor calculation in cluster shrink validation * Modify job\_configs fields to hold configs, args, and params * Docs fix for Neutron and Floating IP supprot * Docs fix for scaling * Fix Cluster Template name * Add direct dependency on iso8601 * Fixed output of --version command * Partial implementation for bug 1217983 * Filter out some vendor based input from a template upload * Replacement of Vanilla Hadoop 1.1.2 to Hadoop 1.2.1 * Fix print non unicode symbols in remote exception * Add complete paths in MANIFEST.in * Added job status update and hook for transient cluster shutdown * Configuration token replacement is incorrect for some topologies * Don't use ModelBase.save() inside of transaction * Fix random fails of unit tests * Add "mains" and "libs" fields to JobOrigins * Wrapping ssh calls into subprocesses * Partial resolution to bug 121783 * Fix AUTHORS file * Sync oslo with os/oslo-incubator * Sync requirements with os/requirements * Use setup.py develop for tox install * Update ambari admin credentials for scaling * Fix typo * Update Ambari repo URL for 0.2.2 release * Fix job manager for hive action * Documentation about HDP plugin * Add an ability to configure a job * Fix developer install guide from 
horizon * Add Hive + MySQL configuration * Fix create cluster with cinder * Remove an unncecessary loop from validation code * Get rid of headers in context * Added corrections to the documentation * Use api v1.1 for integration tests * Add hive workflow creator * Move Babel from test to runtime requirements * Remove failing on sqla 0.7.X assert * Make model\_base work with sqla 0.7.X * Sync requirements with global-requirements * Docs update for Neutron support * Added Hive configuration, new nodeprocess - hiveserver * Get rid of pycrypto dep * Fix "Broken Cinder Volume" * Neutron support * Fixed typo in development quickstart guide * Extend JobBinary REST api to allow retrieval of raw data * Added job execution after cluster start and operation for job execution * Oozie + MySQL configuration * Enable the scaling up of nodes in a cluster * Install configs to share/savanna from etc/savanna * Migrate to pbr * First version of job manager * Ensure that translations packaged to tarballs * Add support of periodic tasks for edp needs * Add initial oslo-related strings to the pot * First steps for i18n support * Upgrade oslo and add periodic\_task module * Check for valid flavor on cluster create * Add an API for managing job binaries in the savanna db * Add database support for JobBinary objects * Hadoop-Swift integration jar moved to the CDN * Limit requests version * Added Heap Size provisioning for Oozie * JobOrigin REST and API integration * Integration REST and conductor API * Add comment about keypairs tweak removal * Test added for sqla MutableList * Test added for sqla MutableDict * Remove legacy filtering code from sqla model base * Add test for sqlalchemy JsonEncoded type decorator * Add \_\_author\_\_ attr check * Allow Ambari port to be specified in configuration * Sync OpenStack commons with oslo-incubator * Fix custom hacking check id * Refactoring cinder support * Revert "Refactoring cinder support" * Refactoring cinder support * Migrate to Conductor * Remove timeout for id\_rsa generation * Hadoop test can turn on and turn off * Added cluster deletion during failure * Add database support for the JobOrigin object * Resolved issue with wrong comparison * Sync with global requirements * Raise eventlet to 0.13.0 * Bump hacking to 0.7 * Improve exceptions handling in created threads * Added cluster states transition logging * Add a stub API method for updating a JobOrigin object * Fail tests if cluster in Error state * Oozie bug-fixing * Added conductor API for JobExecution Object * Conductor code fixed and extended tests added * Revert "Conductor objects are re-populated on update" * Several fixes and improvements for conductor * Integration test updating for "HDP" plugin * Add initial version of the REST api for the job origin component * Made Ambari RPM location configurable * Fix test files names * Fix retrieval of updated id * Updated how\_to\_participate doc * Add check for deprecated method assertEquals * Conductor API re-init only objects, not IDs * Allow Ambari users to be specified in configuration * Added conductor API for Job Object * Conductor objects are re-populated on update * Refactoring hdp plugin * Added conductor API for DataSource object * Added first version of model for EDP * Implement to\_dict() method for Resource * Added basic helper for map-reduce actions * Bump version to 0.3 * Conductor impr for tenants and templates * Create DB tables on Savanna start * Implement object classes for Conductor * Unit test for Conductor Manager improved * Refactoring 
remote utils * A Resource implementation for Conductor * Tests module refactoring * Fix docs build * Fix requests version * Unit Tests and fixes for Conductor Manager API * Add check S361 for imports of savanna.db module * Update requirements to the latest versions * Improve coverage calculation * Created savanna-db-manage script for new DB * Added validation checks to HDP plugin * Workflow creator * Conductor methods added * Docs build fixed * Fix foreign keys and table names in new model * Move path manipulations into function * Fix Ganglia service start failure * Fix processing cluster configs in HDP plugin * Fix to convert parsing failure * Port sqlalchemy db models to conductor * Initial part of conductor implementation * Resolves critical issue with oozie service * Ambari install screen after install fix * Fix to OpenStack utils * Add changing owner a private key * Enforce hacking >=0.6.0 * Fix using nova\_info in HDP plugin * Added a first version of REST client for Oozie * Fix bool default values in HDP plugin * Integrate Oozie deployment for Vanilla Plugin * Removed extra from Node Group * Docs fixed for horizon * Allow sqlalchemy 0.8.X * Now swift config is not passed if Swift disable * Fix contributing.rst file * Move requirements files to the common place * Now it is possible create a hadoop config without property filter * Added REST API for job and data source with simple validation * Validate image tags to contain required tags for plugin * Docs improvements * Refactoring db module * Added REST API skeleton for EDP component * Fixes issue with ng names duplicates * Instance remote usage refactoring 0.2.1.rc1 --------- * Image Registry tags validation * Fix delete templates that are in use * Added integration test for cluster scaling * Refactoring unit tests for validation * Fix a bug in integration tests * Use console\_scripts instead of bin * Fix HDP plugin should register service urls * Oslo has been updated * Licence header added to tools/get\_auth\_token.py * Add HDP plugin to default plugins * Allow hacking 0.6.0 and fix errors from new checks * Cluster scaling bug fixing: * Added \_\_init\_\_.py to migration directory * Cluster scaling improvement * Cluster scaling bug fixing * Documents typo fixes * Unit tests for scaling validation * Cluster scaling bug fixing * Skipping non-existing instances while deletion * Cluster scaling: deletion * Status description is set on errors * Fix sqlalchemy CompileError * Add cinder validation * Validation exceptions handling improved * REST API returns traceback fix * Added config tests * Minor addition to installation guide 0.2 --- * Remove autoindex * Fix several sphinx bugs and include autoindex * Added details on registering images * Some more last-minute changes to docs * Details on enabling Anti-Affinity * Add cinder features in documentation * README and docs has been updated * Some minor changes * Docs SPI header fixed * Initial implementation of HDP plugin 0.2.rc2 ------- * Fix author/homepage in setup.py * Fix install guide to fedora and centos * Reworked installation guides * Unit tests for savanna validation * Updated development guidelines * Plugin page is added * Added improvement to code for swift test * Minor changes in documentation * Docs feature page * Docs for Jenkins page updated * Docs fixed for horizon dev istallation * Docs fixed for horizon installation * Docs for Disk Imge Builder fixed * Docs feature page * Docs for Jenkins ci added * Refactoring and changing savanna documentation 0.2.rc1 ------- * 
Validation checks improvements 0.2a2 ----- * Cluster scaling validation added * User's Perspective updated on overview page * AUTHORS file generation fixed * Validation added for missed scale/convert impl * Stubs for plugin and scaling ops added * Added more info into Templates section of UserGuide * Added docs for DiskImageBuilder * Api validator is now passes all api args to validators * Change default port for savanna api to 8386 * Changing default for os\_auth\_host to 127.0.0.1 * Documentation update for REST API * Cosmetic changes in the docs * SPI documentation updated * Add doc about how to write docs * The starting page for User Guide is done * Fixes AA schema defenitions in clusters and cluster templates * Revert Ilya Tyaptin to AUTHORS * Support for 'Co-Authored-By' fixed * Added swift itest and improvements to test code * Req/resp exchange logging is now configurable * Help messages for savanna-api configs improved * Database schema for 0.2 release added * Internal error message fixed * Added plugins overview for Dev Guide * Revert "unit tests for "Implemention manual scaling"" * Support of different content types cleaned * Added plugin configuration checks to validate methods * Add attaching/detaching volume unit tests * Context helper improved, avoid 500 instead of 404 * Improve context.set\_ctx to not fail * Reset context before/after request handling * The 'model\_update' helper has been added * unit tests for "Implemention manual scaling" * Updated quickstart guide * Some logging added to cluster provisioning * REST API validation implementation * Add support attach cinder volume to scale cluster * Added improvements to test code * Python 3 print check added * Updated project docs design * Add description and template id during Cluster creation * Wrote installation guide for Savanna * Added improvements to test for image registry * Updated guide for dev environment * Next gen AA implemented and small cleanup * Savanna Dashboard installation guide updated * Cluster scaling bug fixing * Restructured project documentation * UI dev guide updated * Cluster scaling: validation 0.2a1 ----- * Add attaching and detaching cinder volumes * Anti affinitity group field name fixed in validation schema * Plugin version exists check has been added * Remove dynamic serialization switching * Cluster scaling implementation * Private key for user hadoop in vanilla plugin * Rollback sitepackages fix for tox.ini * Fix version of pyflakes: pyflakes==0.7.2 * Fix pep8 and pycrypto versions, fix tox.ini * Preserve order of plugins taken from config * Added small correction to test code * Make 'Enable Swift' config in plugin priority 1 * Renamed MAPREDUCE service in plugin to MapReduce * Improvements of test for image registry * All validation schemas and functions prepared * Multi-tenancy support has been implemented * Added integration tests for cluster creation * Fix issue w/ setting service urls * Make all cluster-wide configs priority 1 in Vanilla plugin * Posargs has been added to the flake8 command * Type fixed in cinder client * Unnecessary logging removed from nova and cinder clients * Add request/response logging when debug=True * Oslo has been updated to the latest version * Requirements has been updated * ApiValidator tests moved to the right place (utils) * Fix cluster delete when instances are already deleted * The special type implemented for flavors * Fixed min volumes\_size constraint * License hacking tests has been added * The tenant\_id should not be specified in requests * 
NodeGroup creation request body schema validation * Type 'configs' implemented for ApiValidator * More strict images validation * Threading utils implemented * Placeholders for future validators and schemas * Basic schema validation added to images calls * The 'check\_exists' applied to all API calls * Added hadoop testing * If validation is not pass, cluster status is set to Error * Little isue with storage\_path generation fixed * Simple tests for utils/crypto * Place patches test to the right place * Base validation framework implemented * Avoid internal error while quering plugin * NotFoundException implemented * Implement \_map\_to\_user\_inputs helper * Plugins could return required image tags now * Fix minor plugin issue * MANIFEST.in has been added * Added itest and improvements to code of tests * Add info property to the Cluster object * XML coverage report added (cobertura) * Fix storage helper * NodeGroupTemplate conversion method fixed * The plugin's 'convert' method improved * Move base.py w/ unit tests to the tests root * Upgrade migration script to the latest model * Model has been updated * Use userdata instead of files for VM key-pair * Enchancement for instance interop helper * Initial migration script has been upgraded * Added cinder volumes support to vanilla plugin * Replaced all 'General' configs to 'general' * Add cover report to .gitignore * Added integration test for image registry * Heap Size can be applied for Hadoop services now * Defined Priority 1 and cluster configurations for Hadoop services * Applied Swift Integration in Vanilla Plugin: * Vanilla plugin configuration helper fixing: * Vanilla plugin configs are more informative now * Add cinderclient * Some changes were added to savanna-dashboard installation * Added integration crud tests * Update object model to support cinder volumes * Replace dumb reraise with reraise util * Now instances are deleted after rollback cluster creation * Unregister image rest api call has been added * Fix savanna.conf.sample * Move swift helper tests to the right place * Reraise exception about error during the instance creation * User keypair is now optional for cluster creation * Added fast integration test * Cluster creation moved in separate thread * Added first Savanna Controller level validation * Impl bulk ops for instance interop helper * Helper for Swift integration was added * Savanna context now is local to greenthread, not just thread * Add fqdn property to instance object * Reduce number of ssh sessions while scp muliple files * InstanceInteropHelper improvements * Conf samples has been updated * Update database defaults * Implementation of Vanilla Plugin * Print stacktrace if error occured while cluster creation * Improve cluster creation from cluster template * Support cluster creation from cluster template * Cluster templates could be now created using node group templates * REST API / (versions) endpoint has been fixed * Id of the NodeGroup is now hidden * Oslo libs has been updated * Basic impl of 'convert' method * Impl file upload for Savanna REST API utils * pbr updated to the latest version * The use\_floating\_ip flag implemented * Description is now optional in ImageRegistry * Sync tools/\*-requires with openstack/requirements * Apply minidom patch to Python prior to 2.7.3 * Use internal IPs in /etc/hosts * Sample conf fix * Images REST API cleanup * Plugin resource name fixed for REST API calls * Adding Nadya Privalova to AUTHORS * Cleanup tools/\*-requires * ImageRegistry completed * Correct todo 
messages * Fix for Dummy plugin * REST API samples updated * Small code improvements * Core part improvements * Pin pbr to avoid sphinx autodocs issues * Adding lintstack to support pylint testing * Documentation for Hadoop-Swift integration was added * Simple REST API call samples has been added * TemplatesRelation is now NodeGroup-like object * Plugin stub updated to the latest version of configs vision * Improve REST API bindings * instruction for dev env for horizone plugin * Adjust Config class to the docs * Enable all code style tests * Add simple plugin calls and cluster status updates * Small cleanup of db model * The 'model\_save' helper added to the context * Helper for configuration in node group * Cluster security and node placement control * The 'ctx' arg removed from plaggable provisioning * Fix remote util * Fix crypto util * Improve database model * Placeholder for instance creation has been added * Keystone auth middleware configuration fixed * User keypair added to cluster object * Remove unused variable * Patch novaclient to support getting keypairs * Introduce py33 to tox.ini * AUTHORS added to the repo * The .mailmap file updated to fix AUTHORS * Fix nova helpers (remove unneeded headers) * Hostname/username are now available in Instance * Use six to improve python 3 compatibility * Basic instance interop helpers added * Private key has been added to the Cluster object * Initial version of Savanna v0.2 0.1.2 ----- * Pre-release 0.1.2 doc changes * Replaced path to start-all.sh script * New hadoop tests were added * Small docs improvements * Integration tests improvements and fixes * Integration tests for hadoop were added * Removed unused paramter '-force' when formatting NameNode * Updated project documentation * .gitignore updated * Requires updated due to the openstack/requirements * Some improvements to documentation were added * Add changes in horizon docs * Revert "Integration tests for hadoop were added." 
* Integration tests for hadoop were added * cscope.out has been added to .gitignore * Change allow-cluster-ops default from False to True * bump version to 0.1.2 0.1.1 ----- * Pre-release 0.1.1 docs fixes * Cluster status fix when error during vms starting * Unnecessary whitespace has been removed 0.1.1a2 ------- * Patch for minidom's writexml has been added * Positive test for validation has been readded * The is\_node\_template\_associated function added * Added default values for JT, NN, TT, DN processes * NodeTemplate usage check moved to validation 0.1.1a1 ------- * "Last updated" info has been added to generated Sphinx pages * Common version is now used in Sphinx docs * Keystone client creation moved to the setUp() function * oslo has been updated * Adds xml hadoop config generating * Keystone removed from the global variables and added it to the class * time.sleep replaced with eventlet.sleep * Deps cleaned by openstack/requirements * Tenants support implemented for clusters * Using clear\_override in tearDown * docs fixed, tool renamed * Implements integration tests * OpenStack Common has been updated to the latest version * Some large (and slow) validation tests has been splitted to several cases * tools/install\_venv fixed * Index page updated * Validation for required process props added * /etc/hosts generator implemented * Additional info files added to repo * Re-add setuptools-git to setup.py * quickstart has been updated * All tools modev to tox * Validation tests fixed (jsonschema update) * OS Summit session nnouncement has been added * Limit cluster name to 50 characters * bump version to 0.1.1 * info about pypi has been added * Remove an invalid trove classifier * Horizon howto page updated and published 0.1 --- * setup.py has been improved * Some useful links added to README * Note about use\_floating\_ips has been added * Simple quickstart fix 0.1a2 ----- * setuptools-get has been removed from deps * AUTHORS and ChangeLog has been added to .gitignore * VM Image link has been fixed * Small index page improvement * Links to bugs and blueprints has been added * simple tests for cluster validation has been added 0.1a1 ----- * Added instruction how to get Savanna from tarball * sample-conf has been removed from savanna-manage * Added error codes to REST API docs * Trailing whitespaces has been removed from the validation messages * Side effect in SavannaTestCase has been fixed * oslo has been updated * HowToParticipiate page updated * Fixed issue when json's responses contain null values * Introduced new networking option for cluster * Fixed validation errors and wrong response codes * get\_auth\_token is now uses default configs * Added Nova resource checking in cluster creation operation * Implemented Hadoop config provisioning * resources has been added to sdist tarball * Exec permissions added to the savanna-manage command * savanna-manage added to the scripts section of setup.py * sample-conf command added to savanna-manage * Several fixes in tools and docs * Quickstart updated * SavannaTestCase added * Some hacking.py fixes and fixes in validation and cluster\_ops * hacking.py added * Tools has been improved * Service layer validation added * Tenant id is now extracted from headers; eq function added to api Resource class * Added basic validation and error handling * small refactoring - service and storage (dao) layers has been created * savanna-manage has been added; reset-db/gen-templates moved to it * Author email has been fixed * dev-conf is now supported * some 
confs cleanup, pyflakes added to tox * versions added to api, small api improvements * small cleanup * quickstart has been updated * docs has been updated * simple tox.ini has been added * unused config item has been removed * oslo.config is now available in pypi * renaming rollbacked to prevent problems with the old image * conf files moved to the right place * Add .gitreview file * mailing list address has been fixed * Changed pictures in docs according to Savanna name and replaced Horizon pages * Changed docs with replacement of EHO to Savanna * eho -> savanna * .mailmap fixed * .pylintrc improved * oslo conf has been updated * Build docs is now implemented using setup.py * unused arg has been removed * oslo upgraded * sample confs has been improved * logging of defaults generator has been cleaned * plain py logging replaced with oslo log * conf-print has been removed * get\_auth\_token has been fixed * stollen files has been moved to openstack package * tests runner has been fixed * unused configs has been removed * refactoring: eho.server -> eho * unused option dev removed; analyze\_opts.py removed; eho.conf.sample updated * some cleanups, tests fixed * oslo context has been added * oslo-config has been upgraded to the latest version * EHO-Horizon Setup instruction is added * Switched from self-made config to oslo.config * htp site page fixed * Corrected link in how-to-participate * tenant\_id has been removed from tests * Added bullet point for base\_image\_id in Item 4 * Polished Quick Start guide a little * small fix * some mistakes has been fixed * Added 'How to Participate' page to the docs * sources and launchpad links has been added * quickstart link has been added * quickstart has been added * Enhanced get\_auth\_token: It can get credentials and tenant from console It could be launched from any directory, not just project root * Made note in docs that we use flavor name instead of flavor id temporarily * SQLAlchemy version has been specified (>=0.7,<0.8a0) * tenant\_id has been removed from objects * run command added to README * Corrected examples in API docs * custom horizon screenshots has been added to docs * roadmap has been updated * default node\_password has been changed * Corrected API docs * if content type is undefined json should be applied * xml requests deserialization has been disabled * xml requests and responses are now supported * some oslo modules has been added * copyright has been added * cleaned * setup utils is now from oslo-incubator * .mailmap has been added * test job has been disabled * objects has been wrapped and tenants are now passed in urls and validated before app * Inserted {tenant\_id} into urls in API docs * setup.py has been added * "stolen" comment has been added * tenant\_id is now taken from headers * docs has been fixed * restapi doc has been upgraded to fit new tenant\_id style * using specifed tenant\_id * comment about tenant check has been added * apidocs generation has been disabled * docs has been updated * auth token creation helper has been added * unnecessary lambda usage has been removed * tests has been fixed to fit added auth token middleware * wsgi middlewares are now added correctly * test has been improved * missing webob dep has been added * horizon token auth is now used * openstack interop helper has been added * Now we print exceptions with stacktraces into log * bug with eternally stoping cluster in case of stoped vms has been fixed * doc has been fixed * configs has been fixed * stop\_cluster clusterop has been 
mocked for tests * service\_urls has been fixed (dict instead of array of dicts) * using conf files instead of hardcoded values * using conf dicts instead of global statements * REST API has been updated to v0.2 * Fixed pep8 error * Fixed VMs networking * Now we use hostnames for addressing between VMs Fixed network discovery - now we correctly identify public interface Little renaming + spell fixes * Code has been reformatted * Some pylint warns has been fixed * All docs has been ported to sphinx * Fixed pep8 and tests * Working version without coroutines * api methods has been splitted and some warns has been fixed * some warnings has been fixed * vm termination implemented * tests has been fixed * allow cluster ops flag added * warnings has been fixed * nodes are now sorted before assertEquals while creating clusters * api test has been upgraded * logging added * todo added * some fixes, clusterops are now starting using eventlet * warnings has been fixed * Added jinja templates for startup scripts * todos reformatted * Update README.rst * api test has been updated to use new defaults * pep8 has been fixed * many pylint warns has been fixed * pyflakes warnings has been fixed * readme has been updated * pylint and pyflakes static analysis has been added * sample test has been removed * Extracted string constants in cluster\_ops * add tests for delete cluster and node template * clusterops now is pep8 compliant * traceback removed * Working version of cluster deployment * defaults has been updated * test\_api -> test\_api\_v01 * some tests has been added * Cluster statuses has been added * Minor changes * may be we should move configs to the 'configs' sub-object for templates get/list responses * deletions has been added into the rest api * service api improvements (termination, nodes creation, etc) * cascade options has been added * README has been updated * Initial implementation of cluster ops. 
Not working yet :-) * test\_api has been updated * python style names has been reverted * new defaults is now used * RESET\_DB flag is now supported * args has been updated * new args has been added * example routines has been added for cluster creation * patching all main components * only wsgi mode now used * defaults has been updated * some cli args has been added, logging is now configurable * background execution support has been added * default conf has been cleaned * --with-xunit added to run\_tests * Readme didn't mention that you need to install a couple of dependencies first * nosetests.xml added to .gitignore * simple api test has been added * conf improved * debug=True has been removed from bin/eho-api * \*.db added to .gitignore * Readme updated * tests, coverage added * note about hooks added * incorrect scheduler call has been removed * bin added * Some fixes * Initial implementation of REST API * install\_venv fixed * Initial commit sahara-12.0.0/sahara.egg-info/0000775000175000017500000000000013656752227016023 5ustar zuulzuul00000000000000sahara-12.0.0/sahara.egg-info/entry_points.txt0000664000175000017500000000332413656752226021322 0ustar zuulzuul00000000000000[console_scripts] _sahara-subprocess = sahara.cli.sahara_subprocess:main sahara-all = sahara.cli.sahara_all:main sahara-api = sahara.cli.sahara_api:main sahara-db-manage = sahara.db.migration.cli:main sahara-engine = sahara.cli.sahara_engine:main sahara-image-pack = sahara.cli.image_pack.cli:main sahara-rootwrap = oslo_rootwrap.cmd:main sahara-status = sahara.cli.sahara_status:main sahara-templates = sahara.db.templates.cli:main [oslo.config.opts] sahara.config = sahara.config:list_opts [oslo.config.opts.defaults] sahara.config = sahara.common.config:set_cors_middleware_defaults [oslo.policy.policies] sahara = sahara.common.policies:list_rules [sahara.cluster.plugins] fake = sahara.plugins.fake.plugin:FakePluginProvider [sahara.data_source.types] hdfs = sahara.service.edp.data_sources.hdfs.implementation:HDFSType manila = sahara.service.edp.data_sources.manila.implementation:ManilaType maprfs = sahara.service.edp.data_sources.maprfs.implementation:MapRFSType s3 = sahara.service.edp.data_sources.s3.implementation:S3Type swift = sahara.service.edp.data_sources.swift.implementation:SwiftType [sahara.infrastructure.engine] heat = sahara.service.heat.heat_engine:HeatEngine [sahara.job_binary.types] internal-db = sahara.service.edp.job_binaries.internal_db.implementation:InternalDBType manila = sahara.service.edp.job_binaries.manila.implementation:ManilaType s3 = sahara.service.edp.job_binaries.s3.implementation:S3Type swift = sahara.service.edp.job_binaries.swift.implementation:SwiftType [sahara.remote] ssh = sahara.utils.ssh_remote:SshRemoteDriver [sahara.run.mode] all-in-one = sahara.service.ops:LocalOps distributed = sahara.service.ops:RemoteOps [wsgi_scripts] sahara-wsgi-api = sahara.cli.sahara_api:setup_api sahara-12.0.0/sahara.egg-info/dependency_links.txt0000664000175000017500000000000113656752226022070 0ustar zuulzuul00000000000000 sahara-12.0.0/sahara.egg-info/requires.txt0000664000175000017500000000162413656752226020425 0ustar zuulzuul00000000000000pbr!=2.1.0,>=2.0.0 alembic>=0.8.10 botocore>=1.5.1 castellan>=0.16.0 eventlet!=0.18.3,!=0.20.1,>=0.18.2 Flask>=1.0.2 iso8601>=0.1.11 Jinja2>=2.10 jsonschema>=2.6.0 keystoneauth1>=3.4.0 keystonemiddleware>=4.17.0 microversion-parse>=0.2.1 oslo.config>=5.2.0 oslo.concurrency>=3.26.0 oslo.context>=2.19.2 oslo.db>=4.27.0 oslo.i18n>=3.15.3 oslo.log>=3.36.0 
oslo.messaging>=5.29.0 oslo.middleware>=3.31.0 oslo.policy>=1.30.0 oslo.rootwrap>=5.8.0 oslo.serialization!=2.19.1,>=2.18.0 oslo.service!=1.28.1,>=1.24.0 oslo.upgradecheck>=0.1.0 oslo.utils>=3.33.0 paramiko>=2.0.0 requests>=2.14.2 python-cinderclient!=4.0.0,>=3.3.0 python-keystoneclient>=3.8.0 python-manilaclient>=1.16.0 python-novaclient>=9.1.0 python-swiftclient>=3.2.0 python-neutronclient>=6.7.0 python-heatclient>=1.10.0 python-glanceclient>=2.8.0 six>=1.10.0 stevedore>=1.20.0 SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10 tooz>=1.58.0 WebOb>=1.7.1 sahara-12.0.0/sahara.egg-info/pbr.json0000664000175000017500000000005713656752226017502 0ustar zuulzuul00000000000000{"git_version": "a6ee5223", "is_release": true}sahara-12.0.0/sahara.egg-info/PKG-INFO0000664000175000017500000000402613656752226017121 0ustar zuulzuul00000000000000Metadata-Version: 1.2 Name: sahara Version: 12.0.0 Summary: Sahara project Home-page: https://docs.openstack.org/sahara/latest/ Author: OpenStack Author-email: openstack-discuss@lists.openstack.org License: Apache Software License Description: ======================== Team and repository tags ======================== .. image:: https://governance.openstack.org/tc/badges/sahara.svg :target: https://governance.openstack.org/tc/reference/tags/index.html .. Change things from this point on OpenStack Data Processing ("Sahara") project ============================================ Sahara at wiki.openstack.org: https://wiki.openstack.org/wiki/Sahara Storyboard project: https://storyboard.openstack.org/#!/project/935 Sahara docs site: https://docs.openstack.org/sahara/latest/ Roadmap: https://wiki.openstack.org/wiki/Sahara/Roadmap Quickstart guide: https://docs.openstack.org/sahara/latest/user/quickstart.html How to participate: https://docs.openstack.org/sahara/latest/contributor/how-to-participate.html Source: https://opendev.org/openstack/sahara Bugs and feature requests: https://storyboard.openstack.org/#!/project/935 Release notes: https://docs.openstack.org/releasenotes/sahara/ License ------- Apache License Version 2.0 http://www.apache.org/licenses/LICENSE-2.0 Platform: UNKNOWN Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.6 Classifier: Programming Language :: Python :: 3.7 Classifier: Environment :: OpenStack Classifier: Intended Audience :: Information Technology Classifier: Intended Audience :: System Administrators Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: POSIX :: Linux Requires-Python: >=3.6 sahara-12.0.0/sahara.egg-info/top_level.txt0000664000175000017500000000000713656752226020551 0ustar zuulzuul00000000000000sahara sahara-12.0.0/sahara.egg-info/not-zip-safe0000664000175000017500000000000113656752226020250 0ustar zuulzuul00000000000000 sahara-12.0.0/sahara.egg-info/SOURCES.txt0000664000175000017500000013345613656752226017722 0ustar zuulzuul00000000000000.coveragerc .stestr.conf .zuul.yaml AUTHORS CONTRIBUTING.rst ChangeLog HACKING.rst LICENSE README.rst babel.cfg bandit.yaml bindep.txt lower-constraints.txt pylintrc requirements.txt setup.cfg setup.py test-requirements.txt tox.ini api-ref/source/conf.py api-ref/source/index.rst api-ref/source/v1.1/cluster-templates.inc api-ref/source/v1.1/clusters.inc api-ref/source/v1.1/data-sources.inc api-ref/source/v1.1/event-log.inc api-ref/source/v1.1/image-registry.inc api-ref/source/v1.1/index.rst api-ref/source/v1.1/job-binaries.inc 
api-ref/source/v1.1/job-binary-internals.inc api-ref/source/v1.1/job-executions.inc api-ref/source/v1.1/job-types.inc api-ref/source/v1.1/jobs.inc api-ref/source/v1.1/node-group-templates.inc api-ref/source/v1.1/parameters.yaml api-ref/source/v1.1/plugins.inc api-ref/source/v1.1/samples/cluster-templates/cluster-template-create-request.json api-ref/source/v1.1/samples/cluster-templates/cluster-template-create-response.json api-ref/source/v1.1/samples/cluster-templates/cluster-template-show-response.json api-ref/source/v1.1/samples/cluster-templates/cluster-template-update-request.json api-ref/source/v1.1/samples/cluster-templates/cluster-template-update-response.json api-ref/source/v1.1/samples/cluster-templates/cluster-templates-list-response.json api-ref/source/v1.1/samples/clusters/cluster-create-request.json api-ref/source/v1.1/samples/clusters/cluster-create-response.json api-ref/source/v1.1/samples/clusters/cluster-scale-request.json api-ref/source/v1.1/samples/clusters/cluster-scale-response.json api-ref/source/v1.1/samples/clusters/cluster-show-response.json api-ref/source/v1.1/samples/clusters/cluster-update-request.json api-ref/source/v1.1/samples/clusters/cluster-update-response.json api-ref/source/v1.1/samples/clusters/clusters-list-response.json api-ref/source/v1.1/samples/clusters/multiple-clusters-create-request.json api-ref/source/v1.1/samples/clusters/multiple-clusters-create-response.json api-ref/source/v1.1/samples/data-sources/data-source-register-hdfs-request.json api-ref/source/v1.1/samples/data-sources/data-source-register-hdfs-response.json api-ref/source/v1.1/samples/data-sources/data-source-register-swift-request.json api-ref/source/v1.1/samples/data-sources/data-source-register-swift-response.json api-ref/source/v1.1/samples/data-sources/data-source-show-response.json api-ref/source/v1.1/samples/data-sources/data-source-update-request.json api-ref/source/v1.1/samples/data-sources/data-source-update-response.json api-ref/source/v1.1/samples/data-sources/data-sources-list-response.json api-ref/source/v1.1/samples/event-log/cluster-progress-response.json api-ref/source/v1.1/samples/image-registry/image-register-request.json api-ref/source/v1.1/samples/image-registry/image-register-response.json api-ref/source/v1.1/samples/image-registry/image-show-response.json api-ref/source/v1.1/samples/image-registry/image-tags-add-request.json api-ref/source/v1.1/samples/image-registry/image-tags-add-response.json api-ref/source/v1.1/samples/image-registry/image-tags-delete-request.json api-ref/source/v1.1/samples/image-registry/image-tags-delete-response.json api-ref/source/v1.1/samples/image-registry/images-list-response.json api-ref/source/v1.1/samples/job-binaries/create-request.json api-ref/source/v1.1/samples/job-binaries/create-response.json api-ref/source/v1.1/samples/job-binaries/list-response.json api-ref/source/v1.1/samples/job-binaries/show-data-response api-ref/source/v1.1/samples/job-binaries/show-response.json api-ref/source/v1.1/samples/job-binaries/update-request.json api-ref/source/v1.1/samples/job-binaries/update-response.json api-ref/source/v1.1/samples/job-binary-internals/create-response.json api-ref/source/v1.1/samples/job-binary-internals/list-response.json api-ref/source/v1.1/samples/job-binary-internals/show-data-response api-ref/source/v1.1/samples/job-binary-internals/show-response.json api-ref/source/v1.1/samples/job-binary-internals/update-request.json api-ref/source/v1.1/samples/job-binary-internals/update-response.json 
api-ref/source/v1.1/samples/job-executions/cancel-response.json api-ref/source/v1.1/samples/job-executions/job-ex-response.json api-ref/source/v1.1/samples/job-executions/job-ex-update-request.json api-ref/source/v1.1/samples/job-executions/job-ex-update-response.json api-ref/source/v1.1/samples/job-executions/list-response.json api-ref/source/v1.1/samples/job-types/job-types-list-response.json api-ref/source/v1.1/samples/jobs/job-create-request.json api-ref/source/v1.1/samples/jobs/job-create-response.json api-ref/source/v1.1/samples/jobs/job-execute-request.json api-ref/source/v1.1/samples/jobs/job-execute-response.json api-ref/source/v1.1/samples/jobs/job-show-response.json api-ref/source/v1.1/samples/jobs/job-update-request.json api-ref/source/v1.1/samples/jobs/job-update-response.json api-ref/source/v1.1/samples/jobs/jobs-list-response.json api-ref/source/v1.1/samples/node-group-templates/node-group-template-create-request.json api-ref/source/v1.1/samples/node-group-templates/node-group-template-create-response.json api-ref/source/v1.1/samples/node-group-templates/node-group-template-show-response.json api-ref/source/v1.1/samples/node-group-templates/node-group-template-update-request.json api-ref/source/v1.1/samples/node-group-templates/node-group-template-update-response.json api-ref/source/v1.1/samples/node-group-templates/node-group-templates-list-response.json api-ref/source/v1.1/samples/plugins/plugin-show-response.json api-ref/source/v1.1/samples/plugins/plugin-update-request.json api-ref/source/v1.1/samples/plugins/plugin-update-response.json api-ref/source/v1.1/samples/plugins/plugin-version-show-response.json api-ref/source/v1.1/samples/plugins/plugins-list-response.json api-ref/source/v2/cluster-templates.inc api-ref/source/v2/clusters.inc api-ref/source/v2/data-sources.inc api-ref/source/v2/event-log.inc api-ref/source/v2/image-registry.inc api-ref/source/v2/index.rst api-ref/source/v2/job-binaries.inc api-ref/source/v2/job-templates.inc api-ref/source/v2/job-types.inc api-ref/source/v2/jobs.inc api-ref/source/v2/node-group-templates.inc api-ref/source/v2/parameters.yaml api-ref/source/v2/plugins.inc api-ref/source/v2/samples/cluster-templates/cluster-template-create-request.json api-ref/source/v2/samples/cluster-templates/cluster-template-create-response.json api-ref/source/v2/samples/cluster-templates/cluster-template-show-response.json api-ref/source/v2/samples/cluster-templates/cluster-template-update-request.json api-ref/source/v2/samples/cluster-templates/cluster-template-update-response.json api-ref/source/v2/samples/cluster-templates/cluster-templates-list-response.json api-ref/source/v2/samples/clusters/cluster-create-request.json api-ref/source/v2/samples/clusters/cluster-create-response.json api-ref/source/v2/samples/clusters/cluster-scale-request.json api-ref/source/v2/samples/clusters/cluster-scale-response.json api-ref/source/v2/samples/clusters/cluster-show-response.json api-ref/source/v2/samples/clusters/cluster-update-request.json api-ref/source/v2/samples/clusters/cluster-update-response.json api-ref/source/v2/samples/clusters/clusters-list-response.json api-ref/source/v2/samples/clusters/multiple-clusters-create-request.json api-ref/source/v2/samples/clusters/multiple-clusters-create-response.json api-ref/source/v2/samples/data-sources/data-source-register-hdfs-request.json api-ref/source/v2/samples/data-sources/data-source-register-hdfs-response.json api-ref/source/v2/samples/data-sources/data-source-register-swift-request.json 
api-ref/source/v2/samples/data-sources/data-source-register-swift-response.json api-ref/source/v2/samples/data-sources/data-source-show-response.json api-ref/source/v2/samples/data-sources/data-source-update-request.json api-ref/source/v2/samples/data-sources/data-source-update-response.json api-ref/source/v2/samples/data-sources/data-sources-list-response.json api-ref/source/v2/samples/event-log/cluster-progress-response.json api-ref/source/v2/samples/image-registry/image-register-request.json api-ref/source/v2/samples/image-registry/image-register-response.json api-ref/source/v2/samples/image-registry/image-show-response.json api-ref/source/v2/samples/image-registry/image-tags-add-request.json api-ref/source/v2/samples/image-registry/image-tags-add-response.json api-ref/source/v2/samples/image-registry/image-tags-delete-request.json api-ref/source/v2/samples/image-registry/image-tags-delete-response.json api-ref/source/v2/samples/image-registry/images-list-response.json api-ref/source/v2/samples/job-binaries/create-request.json api-ref/source/v2/samples/job-binaries/create-response.json api-ref/source/v2/samples/job-binaries/list-response.json api-ref/source/v2/samples/job-binaries/show-data-response api-ref/source/v2/samples/job-binaries/show-response.json api-ref/source/v2/samples/job-binaries/update-request.json api-ref/source/v2/samples/job-binaries/update-response.json api-ref/source/v2/samples/job-templates/job-template-create-request.json api-ref/source/v2/samples/job-templates/job-template-create-response.json api-ref/source/v2/samples/job-templates/job-template-show-response.json api-ref/source/v2/samples/job-templates/job-template-update-request.json api-ref/source/v2/samples/job-templates/job-template-update-response.json api-ref/source/v2/samples/job-templates/job-templates-list-response.json api-ref/source/v2/samples/job-types/job-types-list-response.json api-ref/source/v2/samples/jobs/cancel-response.json api-ref/source/v2/samples/jobs/job-request.json api-ref/source/v2/samples/jobs/job-response.json api-ref/source/v2/samples/jobs/job-update-request.json api-ref/source/v2/samples/jobs/job-update-response.json api-ref/source/v2/samples/jobs/list-response.json api-ref/source/v2/samples/node-group-templates/node-group-template-create-request.json api-ref/source/v2/samples/node-group-templates/node-group-template-create-response.json api-ref/source/v2/samples/node-group-templates/node-group-template-show-response.json api-ref/source/v2/samples/node-group-templates/node-group-template-update-request.json api-ref/source/v2/samples/node-group-templates/node-group-template-update-response.json api-ref/source/v2/samples/node-group-templates/node-group-templates-list-response.json api-ref/source/v2/samples/plugins/plugin-show-response.json api-ref/source/v2/samples/plugins/plugin-update-request.json api-ref/source/v2/samples/plugins/plugin-update-response.json api-ref/source/v2/samples/plugins/plugin-version-show-response.json api-ref/source/v2/samples/plugins/plugins-list-response.json devstack/README.rst devstack/exercise.sh devstack/plugin.sh devstack/settings devstack/files/apache-sahara-api.template devstack/upgrade/resources.sh devstack/upgrade/settings devstack/upgrade/shutdown.sh devstack/upgrade/upgrade.sh devstack/upgrade/from-liberty/upgrade-sahara devstack/upgrade/from-mitaka/upgrade-sahara devstack/upgrade/from-rocky/upgrade-sahara doc/requirements.txt doc/source/conf.py doc/source/config-generator.conf doc/source/index.rst doc/source/_extra/.htaccess 
doc/source/_templates/sidebarlinks.html doc/source/_theme_rtd/layout.html doc/source/_theme_rtd/theme.conf doc/source/admin/advanced-configuration-guide.rst doc/source/admin/configs-recommendations.rst doc/source/admin/configuration-guide.rst doc/source/admin/index.rst doc/source/admin/upgrade-guide.rst doc/source/cli/index.rst doc/source/cli/sahara-status.rst doc/source/configuration/descriptionconfig.rst doc/source/configuration/index.rst doc/source/configuration/sampleconfig.rst doc/source/contributor/adding-database-migrations.rst doc/source/contributor/apiv2.rst doc/source/contributor/contributing.rst doc/source/contributor/dashboard-dev-environment-guide.rst doc/source/contributor/development-environment.rst doc/source/contributor/development-guidelines.rst doc/source/contributor/devstack.rst doc/source/contributor/gerrit.rst doc/source/contributor/how-to-build-oozie.rst doc/source/contributor/image-gen.rst doc/source/contributor/index.rst doc/source/contributor/jenkins.rst doc/source/contributor/log-guidelines.rst doc/source/contributor/testing.rst doc/source/images/hadoop-cluster-example.jpg doc/source/images/openstack-interop.png doc/source/images/sahara-architecture.svg doc/source/install/dashboard-guide.rst doc/source/install/index.rst doc/source/install/installation-guide.rst doc/source/intro/architecture.rst doc/source/intro/index.rst doc/source/intro/overview.rst doc/source/reference/edp-spi.rst doc/source/reference/index.rst doc/source/reference/plugin-spi.rst doc/source/reference/plugins.rst doc/source/reference/restapi.rst doc/source/user/building-guest-images.rst doc/source/user/dashboard-user-guide.rst doc/source/user/edp-s3.rst doc/source/user/edp.rst doc/source/user/features.rst doc/source/user/hadoop-swift.rst doc/source/user/index.rst doc/source/user/overview.rst doc/source/user/plugins.rst doc/source/user/quickstart.rst doc/source/user/registering-image.rst doc/source/user/sahara-on-ironic.rst doc/source/user/statuses.rst doc/source/user/building-guest-images/baremetal.rst doc/source/user/building-guest-images/sahara-image-create.rst doc/source/user/building-guest-images/sahara-image-pack.rst doc/test/redirect-tests.txt etc/edp-examples/README.rst etc/sahara/README-sahara.conf.txt etc/sahara/api-paste.ini etc/sahara/compute.topology.sample etc/sahara/rootwrap.conf etc/sahara/swift.topology.sample etc/sahara/rootwrap.d/sahara.filters etc/sudoers.d/sahara-rootwrap playbooks/buildimages/run.yaml playbooks/sahara-grenade/post.yaml playbooks/sahara-grenade/run.yaml releasenotes/notes/.placeholder releasenotes/notes/add-impala-2.2-c1649599649aff5c.yaml releasenotes/notes/add-mapr-520-3ed6cd0ae9688e17.yaml releasenotes/notes/add-mapr-kafka-3a808bbc1aa21055.yaml releasenotes/notes/add-mapr-sentry-6012c08b55d679de.yaml releasenotes/notes/add-scheduler-edp-job-9eda17dd174e53fa.yaml releasenotes/notes/add-storm-version-1_1_0-3e10b34824706a62.yaml releasenotes/notes/add-upgrade-check-framework-9cd18dbc47b0efbd.yaml releasenotes/notes/add-wsgi-server-support-c8fbc3d76d4e42f6.yaml releasenotes/notes/add_kafka_in_cdh-774c7c051480c892.yaml releasenotes/notes/add_mapr_repo_configs-04af1a67350bfd24.yaml releasenotes/notes/ambari-agent-pkg-install-timeout-param-d50e5c15e06fa51e.yaml releasenotes/notes/ambari-downscaling-b9ba759ce9c7325e.yaml releasenotes/notes/ambari-hive-92b911e0a759ee88.yaml releasenotes/notes/ambari-server-start-856403bc280dfba3.yaml releasenotes/notes/ambari26-image-pack-88c9aad59bf635b2.yaml 
releasenotes/notes/ambari_2_4_image_generation_validation-47eabb9fa90384c8.yaml releasenotes/notes/api-insecure-cbd4fd5da71b29a3.yaml releasenotes/notes/api-v2-return-payload-a84a609db410228a.yaml releasenotes/notes/apiv2-microversion-4c1a58ee8090e5a9.yaml releasenotes/notes/apiv2-payload-tweaks-b73c20a35263d958.yaml releasenotes/notes/apiv2-preview-release-b1ee8cc9b2fb01da.yaml releasenotes/notes/apiv2-stable-release-25ba9920c8e4632a.yaml releasenotes/notes/auto_configs_for_hdp-011d460d37dcdf02.yaml releasenotes/notes/boot-from-volume-e7078452fac1a4a0.yaml releasenotes/notes/ca-cert-fix-5c434a82f9347039.yaml releasenotes/notes/cdh-5-5-35e582e149a05632.yaml releasenotes/notes/cdh-513-bdce0d5d269d8f20.yaml releasenotes/notes/cdh-labels-5695d95bce226051.yaml releasenotes/notes/cdh_5_11_0_image_generation_validation-6334ef6d04950935.yaml releasenotes/notes/cdh_5_11_support-10d4abb91bc4475f.yaml releasenotes/notes/cdh_5_7_image_generation_validation-308e7529a9018663.yaml releasenotes/notes/cdh_5_7_support-9522cb9b4dce2378.yaml releasenotes/notes/cdh_5_9_0_image_generation_validation-19d10e6468e30b4f.yaml releasenotes/notes/cdh_5_9_support-b603a2648b2e7b32.yaml releasenotes/notes/config-groups-ambari-837de6d33eb0fa87.yaml releasenotes/notes/consolidate-cluster-creation-apiv2-5d5aceeb2e97c702.yaml releasenotes/notes/convert-to-cluster-template-43d502496d18625e.yaml releasenotes/notes/deprecate-cdh_5_5-0da56b562170566f.yaml releasenotes/notes/deprecate-hdp-a9ff0ecf6006da49.yaml releasenotes/notes/deprecate-mapr-51-090423438e3dda20.yaml releasenotes/notes/deprecate-plugin-vanilla260-46e4b8fe96e8fe68.yaml releasenotes/notes/deprecate-sahara-all-entry-point-1446a00dab643b7b.yaml releasenotes/notes/deprecate-spark-version-131-98eccc79b13b6b8f.yaml releasenotes/notes/deprecate-storm-version-092.yaml-b9ff2b9ebbb983fc.yaml releasenotes/notes/designate-integration-784c5f7f29546015.yaml releasenotes/notes/drop-py-2-7-bc282e43b26fbf17.yaml releasenotes/notes/enable-mutable-configuration-2dd6b7a0e0fe4437.yaml releasenotes/notes/engine-opt-258ff1ae9b04d628.yaml releasenotes/notes/enhance-bfv-12bac06c4438675f.yaml releasenotes/notes/event_log_for_hdp-a114511c477ef16d.yaml releasenotes/notes/fix-install-provision-events-c1bd2e05bf2be6bd.yaml releasenotes/notes/fixing-policy-inconsistencies-984020000cc3882a.yaml releasenotes/notes/force-delete-apiv2-e372392bbc8639f8.yaml releasenotes/notes/force-delete-changes-2e0881a99742c339.yaml releasenotes/notes/hadoop-swift-domain-fix-c1dfdf6c52b5aa25.yaml releasenotes/notes/hadoop-swift-jar-for-ambari-4439913b01d42468.yaml releasenotes/notes/hdfs-dfs-94a9c4f64cf8994f.yaml releasenotes/notes/hdp-removed-from-defaults-31d1e1f15973b682.yaml releasenotes/notes/hdp25-b35ef99c240fc127.yaml releasenotes/notes/hdp26-5a406d7066706bf1.yaml releasenotes/notes/honor-endpoint-type-neutron-4583128c383d9745.yaml releasenotes/notes/ironic-support-79e7ecad05f54029.yaml releasenotes/notes/kerberos-76dd297462b7337c.yaml releasenotes/notes/key_manager_integration-e32d141809c8cc46.yaml releasenotes/notes/keypair-replacement-0c0cc3db0551c112.yaml releasenotes/notes/keystoneclient-to-keystonauth-migration-c75988975ad1a506.yaml releasenotes/notes/mapr-health-check-2eba3d742a2b853f.yaml releasenotes/notes/mapr-labels-5cc318616db59403.yaml releasenotes/notes/mapr-remove-spark-standalone-293ca864de9a7848.yaml releasenotes/notes/mapr-services-new-versions-b32c2e8fe07d1600.yaml releasenotes/notes/mapr-services-new-versions-dc7652e33f26bbdc.yaml 
releasenotes/notes/mapr5.2.0-image-gen-c850e74977b00abe.yaml releasenotes/notes/neutron-default-a6baf93d857d86b3.yaml releasenotes/notes/nova-network-removal-debe306fd7c61268.yaml releasenotes/notes/novaclient_images_to_glanceclient-0266a2bd92b4be05.yaml releasenotes/notes/ntp-config-51ed9d612132e2fa.yaml releasenotes/notes/optional-project-id-apiv1-2e89756f6f16bd5e.yaml releasenotes/notes/options-to-oslo_messaging_notifications-cee206fc4f74c217.yaml releasenotes/notes/plugins-split-from-sahara-core-9ffc5e5d06c9239c.yaml releasenotes/notes/policy_in_code-5847902775ff9861.yaml releasenotes/notes/proxy-user-lowercase-f116f7b7e89274cb.yaml releasenotes/notes/rack_awareness_for_cdh-e0cd5d4ab46aa1b5.yaml releasenotes/notes/rack_awareness_for_hdp-6e3d44468cc141a5.yaml releasenotes/notes/refactor-floating-ips-logic-9d37d9297f3621b3.yaml releasenotes/notes/remove-cdh_5.0_5.3_5.4-b5f140e9b0233c07.yaml releasenotes/notes/remove-hard-coded-oozie-password-b97475c8772aa1bd.yaml releasenotes/notes/remove-hardcoded-password-from-hive-eb923b518974e853.yaml releasenotes/notes/remove-hdp-137d0ad3d2389b7a.yaml releasenotes/notes/remove-mapr-500-3df3041be99a864c.yaml releasenotes/notes/remove-spark-100-44f3d5efc3806410.yaml releasenotes/notes/remove-upload-oozie-sharelib-step-in-vanilla-2.8.2-546b2026e2f5d557.yaml releasenotes/notes/remove-use-neutron-2499b661dce041d4.yaml releasenotes/notes/remove_custom_auth_domainname-984fd2d931e306cc.yaml releasenotes/notes/remove_enable_notifications_opt-4c0d46e8e79eb06f.yaml releasenotes/notes/s3-datasource-protocol-d3abd0b22f653b3b.yaml releasenotes/notes/sahara-cfg-location-change-7b61454311b16ce8.yaml releasenotes/notes/sahara-endpoint-version-discovery-826e9f31093cb10f.yaml releasenotes/notes/some-polish-api-v2-2d2e390a74b088f9.yaml releasenotes/notes/spark-2.2-d7c3a84bd52f735a.yaml releasenotes/notes/spark-2.3-0277fe9feae6668a.yaml releasenotes/notes/storm-1.2-af75fedb413de56a.yaml releasenotes/notes/strict-validation-query-string-a6cadbf2f9c57d06.yaml releasenotes/notes/substring-matching-1d5981b8e5b1d919.yaml releasenotes/notes/support-s3-data-source-a912e2cdf4cd51fb.yaml releasenotes/notes/support-s3-job-binary-6d91267ae11d09d3.yaml releasenotes/notes/transport_url-5bbbf0bb54d81727.yaml releasenotes/notes/trustee-conf-section-5994dcd48a9744d7.yaml releasenotes/notes/updating-plugins-versions-b8d27764178c3cdd.yaml releasenotes/notes/vanilla-2.7.5-support-ffeeb88fc4be34b4.yaml releasenotes/notes/vanilla-2.8.2-support-84c89aad31105584.yaml releasenotes/notes/zookeeper-configuration-steps-48c3d9706c86f227.yaml releasenotes/source/conf.py releasenotes/source/index.rst releasenotes/source/liberty.rst releasenotes/source/mitaka.rst releasenotes/source/newton.rst releasenotes/source/ocata.rst releasenotes/source/pike.rst releasenotes/source/queens.rst releasenotes/source/rocky.rst releasenotes/source/stein.rst releasenotes/source/train.rst releasenotes/source/unreleased.rst releasenotes/source/_static/.placeholder releasenotes/source/_templates/.placeholder roles/build-sahara-images-cli/README.rst roles/build-sahara-images-cli/defaults/main.yaml roles/build-sahara-images-cli/tasks/main.yaml sahara/__init__.py sahara/config.py sahara/context.py sahara/exceptions.py sahara/i18n.py sahara/main.py sahara/version.py sahara.egg-info/PKG-INFO sahara.egg-info/SOURCES.txt sahara.egg-info/dependency_links.txt sahara.egg-info/entry_points.txt sahara.egg-info/not-zip-safe sahara.egg-info/pbr.json sahara.egg-info/requires.txt sahara.egg-info/top_level.txt sahara/api/__init__.py 
sahara/api/acl.py sahara/api/base.py sahara/api/microversion.py sahara/api/v10.py sahara/api/v11.py sahara/api/middleware/__init__.py sahara/api/middleware/auth_valid.py sahara/api/middleware/sahara_middleware.py sahara/api/middleware/version_discovery.py sahara/api/v2/__init__.py sahara/api/v2/cluster_templates.py sahara/api/v2/clusters.py sahara/api/v2/data_sources.py sahara/api/v2/images.py sahara/api/v2/job_binaries.py sahara/api/v2/job_templates.py sahara/api/v2/job_types.py sahara/api/v2/jobs.py sahara/api/v2/node_group_templates.py sahara/api/v2/plugins.py sahara/cli/__init__.py sahara/cli/sahara_all.py sahara/cli/sahara_api.py sahara/cli/sahara_engine.py sahara/cli/sahara_status.py sahara/cli/sahara_subprocess.py sahara/cli/image_pack/__init__.py sahara/cli/image_pack/api.py sahara/cli/image_pack/cli.py sahara/common/__init__.py sahara/common/config.py sahara/common/policies/__init__.py sahara/common/policies/base.py sahara/common/policies/cluster.py sahara/common/policies/cluster_template.py sahara/common/policies/cluster_templates.py sahara/common/policies/clusters.py sahara/common/policies/data_source.py sahara/common/policies/data_sources.py sahara/common/policies/image.py sahara/common/policies/images.py sahara/common/policies/job.py sahara/common/policies/job_binaries.py sahara/common/policies/job_binary.py sahara/common/policies/job_binary_internals.py sahara/common/policies/job_executions.py sahara/common/policies/job_template.py sahara/common/policies/job_type.py sahara/common/policies/job_types.py sahara/common/policies/jobs.py sahara/common/policies/node_group_template.py sahara/common/policies/node_group_templates.py sahara/common/policies/plugin.py sahara/common/policies/plugins.py sahara/conductor/__init__.py sahara/conductor/api.py sahara/conductor/manager.py sahara/conductor/objects.py sahara/conductor/resource.py sahara/db/__init__.py sahara/db/api.py sahara/db/base.py sahara/db/migration/__init__.py sahara/db/migration/alembic.ini sahara/db/migration/cli.py sahara/db/migration/alembic_migrations/README.md sahara/db/migration/alembic_migrations/env.py sahara/db/migration/alembic_migrations/script.py.mako sahara/db/migration/alembic_migrations/versions/001_icehouse.py sahara/db/migration/alembic_migrations/versions/002_placeholder.py sahara/db/migration/alembic_migrations/versions/003_placeholder.py sahara/db/migration/alembic_migrations/versions/004_placeholder.py sahara/db/migration/alembic_migrations/versions/005_placeholder.py sahara/db/migration/alembic_migrations/versions/006_placeholder.py sahara/db/migration/alembic_migrations/versions/007_increase_status_description_size.py sahara/db/migration/alembic_migrations/versions/008_security_groups.py sahara/db/migration/alembic_migrations/versions/009_rollback_info.py sahara/db/migration/alembic_migrations/versions/010_auto_security_groups.py sahara/db/migration/alembic_migrations/versions/011_sahara_info.py sahara/db/migration/alembic_migrations/versions/012_availability_zone.py sahara/db/migration/alembic_migrations/versions/013_volumes_availability_zone.py sahara/db/migration/alembic_migrations/versions/014_add_volume_type.py sahara/db/migration/alembic_migrations/versions/015_add_events_objects.py sahara/db/migration/alembic_migrations/versions/016_is_proxy_gateway.py sahara/db/migration/alembic_migrations/versions/017_drop_progress.py sahara/db/migration/alembic_migrations/versions/018_volume_local_to_instance.py sahara/db/migration/alembic_migrations/versions/019_is_default_for_templates.py 
sahara/db/migration/alembic_migrations/versions/020_remove_redandunt_progress_ops.py sahara/db/migration/alembic_migrations/versions/021_datasource_placeholders.py sahara/db/migration/alembic_migrations/versions/022_add_job_interface.py sahara/db/migration/alembic_migrations/versions/023_add_use_autoconfig.py sahara/db/migration/alembic_migrations/versions/024_manila_shares.py sahara/db/migration/alembic_migrations/versions/025_increase_ip_column_size.py sahara/db/migration/alembic_migrations/versions/026_add_is_public_is_protected.py sahara/db/migration/alembic_migrations/versions/027_rename_oozie_job_id.py sahara/db/migration/alembic_migrations/versions/028_storage_devices_number.py sahara/db/migration/alembic_migrations/versions/029_set_is_protected_on_is_default.py sahara/db/migration/alembic_migrations/versions/030-health-check.py sahara/db/migration/alembic_migrations/versions/031_added_plugins_table.py sahara/db/migration/alembic_migrations/versions/032_add_domain_name.py sahara/db/migration/alembic_migrations/versions/033_add_anti_affinity_ratio_field_to_cluster.py sahara/db/migration/alembic_migrations/versions/034_boot_from_volume.py sahara/db/migration/alembic_migrations/versions/035_boot_from_volume_enhancements.py sahara/db/sqlalchemy/__init__.py sahara/db/sqlalchemy/api.py sahara/db/sqlalchemy/model_base.py sahara/db/sqlalchemy/models.py sahara/db/sqlalchemy/types.py sahara/db/templates/README.rst sahara/db/templates/__init__.py sahara/db/templates/api.py sahara/db/templates/cli.py sahara/db/templates/utils.py sahara/locale/de/LC_MESSAGES/sahara.po sahara/plugins/__init__.py sahara/plugins/base.py sahara/plugins/castellan_utils.py sahara/plugins/conductor.py sahara/plugins/context.py sahara/plugins/db.py sahara/plugins/edp.py sahara/plugins/exceptions.py sahara/plugins/health_check_base.py sahara/plugins/images.py sahara/plugins/kerberos.py sahara/plugins/labels.py sahara/plugins/main.py sahara/plugins/objects.py sahara/plugins/opts.py sahara/plugins/provisioning.py sahara/plugins/recommendations_utils.py sahara/plugins/resource.py sahara/plugins/service_api.py sahara/plugins/swift_helper.py sahara/plugins/swift_utils.py sahara/plugins/testutils.py sahara/plugins/topology_helper.py sahara/plugins/utils.py sahara/plugins/default_templates/template.conf sahara/plugins/default_templates/ambari/v2_3/cluster.json sahara/plugins/default_templates/ambari/v2_3/master-edp.json sahara/plugins/default_templates/ambari/v2_3/master.json sahara/plugins/default_templates/ambari/v2_3/worker.json sahara/plugins/default_templates/ambari/v2_4/cluster.json sahara/plugins/default_templates/ambari/v2_4/master-edp.json sahara/plugins/default_templates/ambari/v2_4/master.json sahara/plugins/default_templates/ambari/v2_4/worker.json sahara/plugins/default_templates/ambari/v2_5/cluster.json sahara/plugins/default_templates/ambari/v2_5/master-edp.json sahara/plugins/default_templates/ambari/v2_5/master.json sahara/plugins/default_templates/ambari/v2_5/worker.json sahara/plugins/default_templates/cdh/v5_5_0/cluster.json sahara/plugins/default_templates/cdh/v5_5_0/manager.json sahara/plugins/default_templates/cdh/v5_5_0/master-additional.json sahara/plugins/default_templates/cdh/v5_5_0/master-core.json sahara/plugins/default_templates/cdh/v5_5_0/worker-nm-dn.json sahara/plugins/default_templates/cdh/v5_7_0/cluster.json sahara/plugins/default_templates/cdh/v5_7_0/manager.json sahara/plugins/default_templates/cdh/v5_7_0/master-additional.json sahara/plugins/default_templates/cdh/v5_7_0/master-core.json 
sahara/plugins/default_templates/cdh/v5_7_0/worker-nm-dn.json sahara/plugins/default_templates/cdh/v5_9_0/cluster.json sahara/plugins/default_templates/cdh/v5_9_0/manager.json sahara/plugins/default_templates/cdh/v5_9_0/master-additional.json sahara/plugins/default_templates/cdh/v5_9_0/master-core.json sahara/plugins/default_templates/cdh/v5_9_0/worker-nm-dn.json sahara/plugins/default_templates/mapr/5_0_0_mrv2/cluster.json sahara/plugins/default_templates/mapr/5_0_0_mrv2/master.json sahara/plugins/default_templates/mapr/5_0_0_mrv2/worker.json sahara/plugins/default_templates/mapr/v5_1_0_mrv2/cluster.json sahara/plugins/default_templates/mapr/v5_1_0_mrv2/master.json sahara/plugins/default_templates/mapr/v5_1_0_mrv2/worker.json sahara/plugins/default_templates/mapr/v5_2_0_mrv2/cluster.json sahara/plugins/default_templates/mapr/v5_2_0_mrv2/master.json sahara/plugins/default_templates/mapr/v5_2_0_mrv2/worker.json sahara/plugins/default_templates/spark/v1_3_1/cluster.json sahara/plugins/default_templates/spark/v1_3_1/master.json sahara/plugins/default_templates/spark/v1_3_1/slave.json sahara/plugins/default_templates/spark/v1_6_0/cluster.json sahara/plugins/default_templates/spark/v1_6_0/master.json sahara/plugins/default_templates/spark/v1_6_0/slave.json sahara/plugins/default_templates/spark/v2_1_0/cluster.json sahara/plugins/default_templates/spark/v2_1_0/master.json sahara/plugins/default_templates/spark/v2_1_0/slave.json sahara/plugins/default_templates/storm/v1_0_1/cluster.json sahara/plugins/default_templates/storm/v1_0_1/master.json sahara/plugins/default_templates/storm/v1_0_1/slave.json sahara/plugins/default_templates/storm/v1_1_0/cluster.json sahara/plugins/default_templates/storm/v1_1_0/master.json sahara/plugins/default_templates/storm/v1_1_0/slave.json sahara/plugins/default_templates/vanilla/v2_7_1/cluster.json sahara/plugins/default_templates/vanilla/v2_7_1/master.json sahara/plugins/default_templates/vanilla/v2_7_1/worker.json sahara/plugins/fake/__init__.py sahara/plugins/fake/edp_engine.py sahara/plugins/fake/plugin.py sahara/plugins/resources/create-principal-keytab sahara/plugins/resources/cron-file sahara/plugins/resources/cron-script sahara/plugins/resources/kdc_conf sahara/plugins/resources/kdc_conf_redhat sahara/plugins/resources/krb-client-init.sh.template sahara/plugins/resources/krb5_config sahara/plugins/resources/mit-kdc-server-init.sh.template sahara/service/__init__.py sahara/service/coordinator.py sahara/service/engine.py sahara/service/networks.py sahara/service/ntp_service.py sahara/service/ops.py sahara/service/periodic.py sahara/service/quotas.py sahara/service/sessions.py sahara/service/trusts.py sahara/service/validation.py sahara/service/volumes.py sahara/service/api/__init__.py sahara/service/api/v10.py sahara/service/api/v11.py sahara/service/api/v2/__init__.py sahara/service/api/v2/cluster_templates.py sahara/service/api/v2/clusters.py sahara/service/api/v2/data_sources.py sahara/service/api/v2/images.py sahara/service/api/v2/job_binaries.py sahara/service/api/v2/job_templates.py sahara/service/api/v2/job_types.py sahara/service/api/v2/jobs.py sahara/service/api/v2/node_group_templates.py sahara/service/api/v2/plugins.py sahara/service/castellan/__init__.py sahara/service/castellan/config.py sahara/service/castellan/sahara_key_manager.py sahara/service/castellan/utils.py sahara/service/edp/__init__.py sahara/service/edp/base_engine.py sahara/service/edp/hdfs_helper.py sahara/service/edp/job_manager.py sahara/service/edp/job_utils.py 
sahara/service/edp/s3_common.py sahara/service/edp/shares.py sahara/service/edp/binary_retrievers/__init__.py sahara/service/edp/binary_retrievers/dispatch.py sahara/service/edp/binary_retrievers/internal_swift.py sahara/service/edp/binary_retrievers/manila_share.py sahara/service/edp/binary_retrievers/s3_storage.py sahara/service/edp/binary_retrievers/sahara_db.py sahara/service/edp/data_sources/__init__.py sahara/service/edp/data_sources/base.py sahara/service/edp/data_sources/manager.py sahara/service/edp/data_sources/opts.py sahara/service/edp/data_sources/hdfs/__init__.py sahara/service/edp/data_sources/hdfs/implementation.py sahara/service/edp/data_sources/manila/__init__.py sahara/service/edp/data_sources/manila/implementation.py sahara/service/edp/data_sources/maprfs/__init__.py sahara/service/edp/data_sources/maprfs/implementation.py sahara/service/edp/data_sources/s3/__init__.py sahara/service/edp/data_sources/s3/implementation.py sahara/service/edp/data_sources/swift/__init__.py sahara/service/edp/data_sources/swift/implementation.py sahara/service/edp/job_binaries/__init__.py sahara/service/edp/job_binaries/base.py sahara/service/edp/job_binaries/manager.py sahara/service/edp/job_binaries/opts.py sahara/service/edp/job_binaries/internal_db/__init__.py sahara/service/edp/job_binaries/internal_db/implementation.py sahara/service/edp/job_binaries/manila/__init__.py sahara/service/edp/job_binaries/manila/implementation.py sahara/service/edp/job_binaries/s3/__init__.py sahara/service/edp/job_binaries/s3/implementation.py sahara/service/edp/job_binaries/swift/__init__.py sahara/service/edp/job_binaries/swift/implementation.py sahara/service/edp/oozie/__init__.py sahara/service/edp/oozie/engine.py sahara/service/edp/oozie/oozie.py sahara/service/edp/oozie/workflow_creator/__init__.py sahara/service/edp/oozie/workflow_creator/base_workflow.py sahara/service/edp/oozie/workflow_creator/hive_workflow.py sahara/service/edp/oozie/workflow_creator/java_workflow.py sahara/service/edp/oozie/workflow_creator/mapreduce_workflow.py sahara/service/edp/oozie/workflow_creator/pig_workflow.py sahara/service/edp/oozie/workflow_creator/shell_workflow.py sahara/service/edp/oozie/workflow_creator/workflow_factory.py sahara/service/edp/resources/edp-main-wrapper.jar sahara/service/edp/resources/edp-spark-wrapper.jar sahara/service/edp/resources/hive-default.xml sahara/service/edp/resources/launch_command.py sahara/service/edp/resources/mapred-default.xml sahara/service/edp/resources/mapred-job-config.xml sahara/service/edp/resources/workflow.xml sahara/service/edp/spark/__init__.py sahara/service/edp/spark/engine.py sahara/service/edp/storm/__init__.py sahara/service/edp/storm/engine.py sahara/service/edp/utils/__init__.py sahara/service/edp/utils/shares.py sahara/service/health/__init__.py sahara/service/health/common.py sahara/service/health/verification_base.py sahara/service/heat/__init__.py sahara/service/heat/commons.py sahara/service/heat/heat_engine.py sahara/service/heat/templates.py sahara/service/validations/__init__.py sahara/service/validations/acl.py sahara/service/validations/base.py sahara/service/validations/cluster_template_schema.py sahara/service/validations/cluster_templates.py sahara/service/validations/clusters.py sahara/service/validations/clusters_scaling.py sahara/service/validations/clusters_schema.py sahara/service/validations/images.py sahara/service/validations/node_group_template_schema.py sahara/service/validations/node_group_templates.py 
sahara/service/validations/plugins.py sahara/service/validations/shares.py sahara/service/validations/edp/__init__.py sahara/service/validations/edp/base.py sahara/service/validations/edp/data_source.py sahara/service/validations/edp/data_source_schema.py sahara/service/validations/edp/job.py sahara/service/validations/edp/job_binary.py sahara/service/validations/edp/job_binary_internal.py sahara/service/validations/edp/job_binary_internal_schema.py sahara/service/validations/edp/job_binary_schema.py sahara/service/validations/edp/job_execution.py sahara/service/validations/edp/job_execution_schema.py sahara/service/validations/edp/job_interface.py sahara/service/validations/edp/job_schema.py sahara/swift/__init__.py sahara/swift/swift_helper.py sahara/swift/utils.py sahara/swift/resources/conf-template.xml sahara/tests/README.rst sahara/tests/__init__.py sahara/tests/unit/__init__.py sahara/tests/unit/base.py sahara/tests/unit/test_context.py sahara/tests/unit/test_exceptions.py sahara/tests/unit/test_main.py sahara/tests/unit/testutils.py sahara/tests/unit/api/__init__.py sahara/tests/unit/api/test_acl.py sahara/tests/unit/api/middleware/__init__.py sahara/tests/unit/api/middleware/test_auth_valid.py sahara/tests/unit/cli/__init__.py sahara/tests/unit/cli/test_sahara_cli.py sahara/tests/unit/cli/test_sahara_status.py sahara/tests/unit/cli/image_pack/__init__.py sahara/tests/unit/cli/image_pack/test_image_pack_api.py sahara/tests/unit/conductor/__init__.py sahara/tests/unit/conductor/base.py sahara/tests/unit/conductor/test_api.py sahara/tests/unit/conductor/test_resource.py sahara/tests/unit/conductor/manager/__init__.py sahara/tests/unit/conductor/manager/test_clusters.py sahara/tests/unit/conductor/manager/test_defaults.py sahara/tests/unit/conductor/manager/test_edp.py sahara/tests/unit/conductor/manager/test_edp_interface.py sahara/tests/unit/conductor/manager/test_from_template.py sahara/tests/unit/conductor/manager/test_templates.py sahara/tests/unit/db/__init__.py sahara/tests/unit/db/test_utils.py sahara/tests/unit/db/migration/__init__.py sahara/tests/unit/db/migration/test_db_manage_cli.py sahara/tests/unit/db/migration/test_migrations.py sahara/tests/unit/db/migration/test_migrations_base.py sahara/tests/unit/db/sqlalchemy/__init__.py sahara/tests/unit/db/sqlalchemy/test_types.py sahara/tests/unit/db/templates/__init__.py sahara/tests/unit/db/templates/common.py sahara/tests/unit/db/templates/test_delete.py sahara/tests/unit/db/templates/test_update.py sahara/tests/unit/db/templates/test_utils.py sahara/tests/unit/plugins/__init__.py sahara/tests/unit/plugins/test_base_plugins_support.py sahara/tests/unit/plugins/test_images.py sahara/tests/unit/plugins/test_kerberos.py sahara/tests/unit/plugins/test_labels.py sahara/tests/unit/plugins/test_provide_recommendations.py sahara/tests/unit/plugins/test_provisioning.py sahara/tests/unit/plugins/test_utils.py sahara/tests/unit/resources/dfs_admin_0_nodes.txt sahara/tests/unit/resources/dfs_admin_1_nodes.txt sahara/tests/unit/resources/dfs_admin_3_nodes.txt sahara/tests/unit/resources/test-default.xml sahara/tests/unit/service/__init__.py sahara/tests/unit/service/test_coordinator.py sahara/tests/unit/service/test_engine.py sahara/tests/unit/service/test_networks.py sahara/tests/unit/service/test_ntp_service.py sahara/tests/unit/service/test_ops.py sahara/tests/unit/service/test_periodic.py sahara/tests/unit/service/test_quotas.py sahara/tests/unit/service/test_sessions.py sahara/tests/unit/service/test_trusts.py 
sahara/tests/unit/service/test_volumes.py sahara/tests/unit/service/api/__init__.py sahara/tests/unit/service/api/test_v10.py sahara/tests/unit/service/api/v2/__init__.py sahara/tests/unit/service/api/v2/base.py sahara/tests/unit/service/api/v2/test_clusters.py sahara/tests/unit/service/api/v2/test_images.py sahara/tests/unit/service/api/v2/test_plugins.py sahara/tests/unit/service/castellan/__init__.py sahara/tests/unit/service/castellan/test_sahara_key_manager.py sahara/tests/unit/service/edp/__init__.py sahara/tests/unit/service/edp/edp_test_utils.py sahara/tests/unit/service/edp/test_hdfs_helper.py sahara/tests/unit/service/edp/test_job_manager.py sahara/tests/unit/service/edp/test_job_possible_configs.py sahara/tests/unit/service/edp/test_job_utils.py sahara/tests/unit/service/edp/test_json_api_examples.py sahara/tests/unit/service/edp/test_s3_common.py sahara/tests/unit/service/edp/binary_retrievers/__init__.py sahara/tests/unit/service/edp/binary_retrievers/test_dispatch.py sahara/tests/unit/service/edp/binary_retrievers/test_internal_swift.py sahara/tests/unit/service/edp/binary_retrievers/test_manila.py sahara/tests/unit/service/edp/data_sources/__init__.py sahara/tests/unit/service/edp/data_sources/base_test.py sahara/tests/unit/service/edp/data_sources/data_source_manager_support_test.py sahara/tests/unit/service/edp/data_sources/hdfs/__init__.py sahara/tests/unit/service/edp/data_sources/hdfs/test_hdfs_type.py sahara/tests/unit/service/edp/data_sources/manila/__init__.py sahara/tests/unit/service/edp/data_sources/manila/test_manila_type.py sahara/tests/unit/service/edp/data_sources/maprfs/__init__.py sahara/tests/unit/service/edp/data_sources/maprfs/test_maprfs_type_validation.py sahara/tests/unit/service/edp/data_sources/s3/__init__.py sahara/tests/unit/service/edp/data_sources/s3/test_s3_type.py sahara/tests/unit/service/edp/data_sources/swift/__init__.py sahara/tests/unit/service/edp/data_sources/swift/test_swift_type.py sahara/tests/unit/service/edp/job_binaries/__init__.py sahara/tests/unit/service/edp/job_binaries/job_binary_manager_support.py sahara/tests/unit/service/edp/job_binaries/test_base.py sahara/tests/unit/service/edp/job_binaries/internal_db/__init__.py sahara/tests/unit/service/edp/job_binaries/internal_db/test_internal_db_type.py sahara/tests/unit/service/edp/job_binaries/manila/__init__.py sahara/tests/unit/service/edp/job_binaries/manila/test_manila_type.py sahara/tests/unit/service/edp/job_binaries/s3/__init__.py sahara/tests/unit/service/edp/job_binaries/s3/test_s3_type.py sahara/tests/unit/service/edp/job_binaries/swift/__init__.py sahara/tests/unit/service/edp/job_binaries/swift/test_swift_type.py sahara/tests/unit/service/edp/oozie/__init__.py sahara/tests/unit/service/edp/oozie/test_oozie.py sahara/tests/unit/service/edp/spark/__init__.py sahara/tests/unit/service/edp/spark/base.py sahara/tests/unit/service/edp/storm/__init__.py sahara/tests/unit/service/edp/storm/test_storm.py sahara/tests/unit/service/edp/utils/test_shares.py sahara/tests/unit/service/edp/workflow_creator/__init__.py sahara/tests/unit/service/edp/workflow_creator/test_create_workflow.py sahara/tests/unit/service/health/__init__.py sahara/tests/unit/service/health/test_verification_base.py sahara/tests/unit/service/heat/__init__.py sahara/tests/unit/service/heat/test_templates.py sahara/tests/unit/service/validation/__init__.py sahara/tests/unit/service/validation/test_add_tags_validation.py sahara/tests/unit/service/validation/test_cluster_create_validation.py 
sahara/tests/unit/service/validation/test_cluster_delete_validation.py sahara/tests/unit/service/validation/test_cluster_scaling_validation.py sahara/tests/unit/service/validation/test_cluster_template_create_validation.py sahara/tests/unit/service/validation/test_cluster_template_update_validation.py sahara/tests/unit/service/validation/test_cluster_update_validation.py sahara/tests/unit/service/validation/test_ng_template_validation_create.py sahara/tests/unit/service/validation/test_ng_template_validation_update.py sahara/tests/unit/service/validation/test_protected_validation.py sahara/tests/unit/service/validation/test_share_validations.py sahara/tests/unit/service/validation/test_validation.py sahara/tests/unit/service/validation/utils.py sahara/tests/unit/service/validation/edp/__init__.py sahara/tests/unit/service/validation/edp/test_data_source.py sahara/tests/unit/service/validation/edp/test_job.py sahara/tests/unit/service/validation/edp/test_job_binary.py sahara/tests/unit/service/validation/edp/test_job_binary_internal.py sahara/tests/unit/service/validation/edp/test_job_executor.py sahara/tests/unit/service/validation/edp/test_job_interface.py sahara/tests/unit/swift/__init__.py sahara/tests/unit/swift/test_swift_helper.py sahara/tests/unit/swift/test_utils.py sahara/tests/unit/topology/__init__.py sahara/tests/unit/topology/test_topology.py sahara/tests/unit/utils/__init__.py sahara/tests/unit/utils/test_api.py sahara/tests/unit/utils/test_api_validator.py sahara/tests/unit/utils/test_cinder.py sahara/tests/unit/utils/test_cluster.py sahara/tests/unit/utils/test_cluster_progress_ops.py sahara/tests/unit/utils/test_configs.py sahara/tests/unit/utils/test_crypto.py sahara/tests/unit/utils/test_edp.py sahara/tests/unit/utils/test_general.py sahara/tests/unit/utils/test_hacking.py sahara/tests/unit/utils/test_heat.py sahara/tests/unit/utils/test_neutron.py sahara/tests/unit/utils/test_patches.py sahara/tests/unit/utils/test_poll_utils.py sahara/tests/unit/utils/test_proxy.py sahara/tests/unit/utils/test_resources.py sahara/tests/unit/utils/test_rpc.py sahara/tests/unit/utils/test_ssh_remote.py sahara/tests/unit/utils/test_types.py sahara/tests/unit/utils/test_xml_utils.py sahara/tests/unit/utils/notification/__init__.py sahara/tests/unit/utils/notification/test_sender.py sahara/tests/unit/utils/openstack/__init__.py sahara/tests/unit/utils/openstack/test_base.py sahara/tests/unit/utils/openstack/test_heat.py sahara/tests/unit/utils/openstack/test_images.py sahara/tests/unit/utils/openstack/test_swift.py sahara/topology/__init__.py sahara/topology/topology_helper.py sahara/topology/resources/core-template.xml sahara/topology/resources/mapred-template.xml sahara/utils/__init__.py sahara/utils/api.py sahara/utils/api_validator.py sahara/utils/cluster.py sahara/utils/cluster_progress_ops.py sahara/utils/configs.py sahara/utils/crypto.py sahara/utils/edp.py sahara/utils/files.py sahara/utils/general.py sahara/utils/network.py sahara/utils/patches.py sahara/utils/poll_utils.py sahara/utils/procutils.py sahara/utils/proxy.py sahara/utils/remote.py sahara/utils/resources.py sahara/utils/rpc.py sahara/utils/ssh_remote.py sahara/utils/tempfiles.py sahara/utils/types.py sahara/utils/wsgi.py sahara/utils/xmlutils.py sahara/utils/hacking/__init__.py sahara/utils/hacking/checks.py sahara/utils/hacking/commit_message.py sahara/utils/hacking/logging_checks.py sahara/utils/notification/__init__.py sahara/utils/notification/sender.py sahara/utils/openstack/__init__.py 
sahara/utils/openstack/base.py sahara/utils/openstack/cinder.py sahara/utils/openstack/glance.py sahara/utils/openstack/heat.py sahara/utils/openstack/images.py sahara/utils/openstack/keystone.py sahara/utils/openstack/manila.py sahara/utils/openstack/neutron.py sahara/utils/openstack/nova.py sahara/utils/openstack/swift.py tools/cover.sh tools/lintstack.py tools/lintstack.sh tools/test-setup.sh tools/config/config-generator.sahara.conf tools/config/sahara-policy-generator.conf tools/gate/build-imagessahara-12.0.0/etc/0000775000175000017500000000000013656752227013645 5ustar zuulzuul00000000000000sahara-12.0.0/etc/sudoers.d/0000775000175000017500000000000013656752227015553 5ustar zuulzuul00000000000000sahara-12.0.0/etc/sudoers.d/sahara-rootwrap0000664000175000017500000000012113656752032020574 0ustar zuulzuul00000000000000sahara ALL=(root) NOPASSWD: /usr/bin/sahara-rootwrap /etc/sahara/rootwrap.conf * sahara-12.0.0/etc/edp-examples/0000775000175000017500000000000013656752227016231 5ustar zuulzuul00000000000000sahara-12.0.0/etc/edp-examples/README.rst0000664000175000017500000000027313656752032017714 0ustar zuulzuul00000000000000===================== Sahara files for EDP ===================== All files from this directory have been moved to new sahara-tests repository: https://opendev.org/openstack/sahara-tests sahara-12.0.0/etc/sahara/0000775000175000017500000000000013656752227015104 5ustar zuulzuul00000000000000sahara-12.0.0/etc/sahara/compute.topology.sample0000664000175000017500000000016613656752032021633 0ustar zuulzuul00000000000000edp-master-0001 /rack1 10.50.0.8 /rack1 edp-slave-0002 /rack1 10.50.0.5 /rack1 edp-slave-0001 /rack2 10.50.0.6 /rack2 sahara-12.0.0/etc/sahara/rootwrap.conf0000664000175000017500000000220613656752032017622 0ustar zuulzuul00000000000000# Configuration for sahara-rootwrap # This file should be owned by (and only-writable by) the root user [DEFAULT] # List of directories to load filter definitions from (separated by ','). # These directories MUST all be only writable by root ! filters_path=/etc/sahara/rootwrap.d,/usr/share/sahara/rootwrap # List of directories to search executables in, in case filters do not # explicitely specify a full path (separated by ',') # If not specified, defaults to system PATH environment variable. # These directories MUST all be only writable by root ! exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin # Enable logging to syslog # Default value is False use_syslog=False # Which syslog facility to use. # Valid values include auth, authpriv, syslog, local0, local1... # Default value is 'syslog' syslog_log_facility=syslog # Which messages to log. # INFO means log all usage # ERROR means only log unsuccessful attempts syslog_log_level=ERROR [xenapi] # XenAPI configuration is only required by the L2 agent if it is to # target a XenServer/XCP compute host's dom0. 
xenapi_connection_url= xenapi_connection_username=root xenapi_connection_password= sahara-12.0.0/etc/sahara/swift.topology.sample0000664000175000017500000000003713656752032021310 0ustar zuulzuul0000000000000010.10.1.86 /rack1 swift1 /rack1sahara-12.0.0/etc/sahara/README-sahara.conf.txt0000664000175000017500000000020113656752032020746 0ustar zuulzuul00000000000000To generate the sample sahara.conf file, run the following command from the top level of the sahara directory: tox -e genconfig sahara-12.0.0/etc/sahara/rootwrap.d/0000775000175000017500000000000013656752227017203 5ustar zuulzuul00000000000000sahara-12.0.0/etc/sahara/rootwrap.d/sahara.filters0000664000175000017500000000014613656752032022027 0ustar zuulzuul00000000000000[Filters] ip: IpNetnsExecFilter, ip, root nc: CommandFilter, nc, root kill: CommandFilter, kill, root sahara-12.0.0/etc/sahara/api-paste.ini0000664000175000017500000000236213656752032017465 0ustar zuulzuul00000000000000[pipeline:sahara] pipeline = cors http_proxy_to_wsgi request_id versions acl auth_validator sahara_api [composite:sahara_api] use = egg:Paste#urlmap /: sahara_apiv2 # this app is given as a reference for v1-only deployments # [app:sahara_apiv11] # paste.app_factory = sahara.api.middleware.sahara_middleware:Router.factory [app:sahara_apiv2] paste.app_factory = sahara.api.middleware.sahara_middleware:RouterV2.factory [filter:cors] paste.filter_factory = oslo_middleware.cors:filter_factory oslo_config_project = sahara [filter:request_id] paste.filter_factory = oslo_middleware.request_id:RequestId.factory [filter:acl] paste.filter_factory = keystonemiddleware.auth_token:filter_factory [filter:auth_validator] paste.filter_factory = sahara.api.middleware.auth_valid:AuthValidator.factory [filter:debug] paste.filter_factory = oslo_middleware.debug:Debug.factory [filter:http_proxy_to_wsgi] paste.filter_factory = oslo_middleware:HTTPProxyToWSGI.factory [filter:versions] paste.filter_factory = sahara.api.middleware.version_discovery:VersionResponseMiddlewareV2.factory # this filter is given as a reference for v1-only deployments #[filter:versions] #paste.filter_factory = sahara.api.middleware.version_discovery:VersionResponseMiddlewareV1.factory sahara-12.0.0/HACKING.rst0000664000175000017500000000241713656752032014666 0ustar zuulzuul00000000000000Sahara Style Commandments ========================= - Step 1: Read the OpenStack Style Commandments https://docs.openstack.org/hacking/latest/ - Step 2: Read on Sahara Specific Commandments ---------------------------- Commit Messages --------------- Using a common format for commit messages will help keep our git history readable. Follow these guidelines: - [S365] First, provide a brief summary of 50 characters or less. Summaries of greater than 72 characters will be rejected by the gate. - [S364] The first line of the commit message should provide an accurate description of the change, not just a reference to a bug or blueprint. Imports ------- - [S366, S367] Organize your imports according to the ``Import order`` Dictionaries/Lists ------------------ - [S360] Ensure default arguments are not mutable. - [S368] Must use a dict comprehension instead of a dict constructor with a sequence of key-value pairs. 
For more information, please refer to http://legacy.python.org/dev/peps/pep-0274/ Logs ---- - [S373] Don't translate logs - [S374] You used a deprecated log level Importing json -------------- - [S375] It's more preferable to use ``jsonutils`` from ``oslo_serialization`` instead of ``json`` for operating with ``json`` objects. sahara-12.0.0/releasenotes/0000775000175000017500000000000013656752227015563 5ustar zuulzuul00000000000000sahara-12.0.0/releasenotes/notes/0000775000175000017500000000000013656752227016713 5ustar zuulzuul00000000000000sahara-12.0.0/releasenotes/notes/add-impala-2.2-c1649599649aff5c.yaml0000664000175000017500000000006013656752032024240 0ustar zuulzuul00000000000000--- features: - Add impala 2.2 to MapR plugin sahara-12.0.0/releasenotes/notes/apiv2-microversion-4c1a58ee8090e5a9.yaml0000664000175000017500000000025413656752032025552 0ustar zuulzuul00000000000000--- features: - | Users of Sahara's APIv2 may request a microversion of that API, with "OpenStack-API-Version: data-processing [version]" in the request headers. sahara-12.0.0/releasenotes/notes/add-upgrade-check-framework-9cd18dbc47b0efbd.yaml0000664000175000017500000000072313656752032027533 0ustar zuulzuul00000000000000--- prelude: > Added new tool ``sahara-status upgrade check``. features: - | New framework for ``sahara-status upgrade check`` command is added. This framework allows adding various checks which can be run before a Sahara upgrade to ensure if the upgrade can be performed safely. upgrade: - | Operator can now use new CLI tool ``sahara-status upgrade check`` to check if Sahara deployment can be safely upgraded from N-1 to N release. sahara-12.0.0/releasenotes/notes/remove_enable_notifications_opt-4c0d46e8e79eb06f.yaml0000664000175000017500000000030713656752032030516 0ustar zuulzuul00000000000000--- deprecations: - The 'enable' option of the 'oslo_messaging_notifications' section has been removed. To enable notifications now please specify the 'driver' option in the same section. sahara-12.0.0/releasenotes/notes/.placeholder0000664000175000017500000000000013656752032021156 0ustar zuulzuul00000000000000sahara-12.0.0/releasenotes/notes/deprecate-storm-version-092.yaml-b9ff2b9ebbb983fc.yaml0000664000175000017500000000007113656752032030352 0ustar zuulzuul00000000000000--- deprecations: - Storm version 0.9.2 is deprecated. sahara-12.0.0/releasenotes/notes/api-v2-return-payload-a84a609db410228a.yaml0000664000175000017500000000020413656752032025751 0ustar zuulzuul00000000000000--- other: - As part of the APIv2 work we changed all tenant_id references to project_id on the return payload of REST calls. sahara-12.0.0/releasenotes/notes/updating-plugins-versions-b8d27764178c3cdd.yaml0000664000175000017500000000102213656752032027152 0ustar zuulzuul00000000000000--- prelude: > Every new release of Sahara we update our plugins list. Some new versions are added and some removed and other marked as deprecated. For Rocky we are deprecating CDH 5.7.0, Spark 1.6.0 and 2.1 as well as Storm 1.0.1. We are also removing CDH 5.5.0, MapR 5.1.0, Spark 1.3.1 and Storm 0.9.2. deprecations: - We are deprecating CDH 5.7.0, Spark 1.6.0 and 2.1 and Storm 1.0.1. upgrade: - We are removing some plugins versions. Those are CDH 5.5.0, MapR 5.1.0, Spark 1.3.1 and Storm 0.9.2. sahara-12.0.0/releasenotes/notes/spark-2.2-d7c3a84bd52f735a.yaml0000664000175000017500000000006613656752032023510 0ustar zuulzuul00000000000000--- features: - Adding Spark version 2.2 to Sahara. 
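A brief aside on the HACKING.rst guidelines embedded above: the dictionary and JSON rules (S360, S368 and S375) can be illustrated with a short, hypothetical sketch. The helper name and data below are invented for illustration only and are not part of the source tree.

.. sourcecode:: python

    from oslo_serialization import jsonutils


    def tag_node_groups(names, extra=None):
        # S360: avoid a mutable default argument; default to None instead.
        extra = extra or {}
        # S368: build the dict with a comprehension rather than dict() over
        # a sequence of key-value pairs.
        tags = {name: 'enabled' for name in names}
        tags.update(extra)
        # S375: serialize with oslo_serialization's jsonutils, not json.
        return jsonutils.dumps(tags)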
sahara-12.0.0/releasenotes/notes/hadoop-swift-domain-fix-c1dfdf6c52b5aa25.yaml0000664000175000017500000000026513656752032026656 0ustar zuulzuul00000000000000--- fixes: - Hadoop is now better configured to use the proper Keystone domain for interaction with Swift; previously the 'default' domain may have been incorrectly used. sahara-12.0.0/releasenotes/notes/drop-py-2-7-bc282e43b26fbf17.yaml0000664000175000017500000000031213656752032023754 0ustar zuulzuul00000000000000--- upgrade: - | Python 2.7 support has been dropped. Last release of sahara to support python 2.7 is OpenStack Train. The minimum version of Python now supported by sahara is Python 3.6. sahara-12.0.0/releasenotes/notes/deprecate-mapr-51-090423438e3dda20.yaml0000664000175000017500000000022513656752032024745 0ustar zuulzuul00000000000000--- deprecations: - MapR 5.1.0.mrv2 is now deprecated and will be removed in Ocata release. It is recommended to use MapR 5.2.0.mrv2 instead. sahara-12.0.0/releasenotes/notes/plugins-split-from-sahara-core-9ffc5e5d06c9239c.yaml0000664000175000017500000000024613656752032030044 0ustar zuulzuul00000000000000--- features: - | In an effort to improve Sahara's usuability and manutenability we are splitting the plugins from Sahara core into their own repositories. sahara-12.0.0/releasenotes/notes/options-to-oslo_messaging_notifications-cee206fc4f74c217.yaml0000664000175000017500000000011613656752032032137 0ustar zuulzuul00000000000000--- upgrade: - Move notifications options into oslo_messaging_notifications sahara-12.0.0/releasenotes/notes/kerberos-76dd297462b7337c.yaml0000664000175000017500000000026013656752032023474 0ustar zuulzuul00000000000000--- features: - Kerberos support implemented for Cloudera and Ambari plugins. New oozie client implemented to support authentication for oozie in kerberized cluster. sahara-12.0.0/releasenotes/notes/force-delete-changes-2e0881a99742c339.yaml0000664000175000017500000000037013656752032025544 0ustar zuulzuul00000000000000--- features: - The behavior of force deletion of clusters (APIv2) has changed. Stack-abandon is no longer used. The response from the force-delete API call now includes the name of the stack which had underlain that deleted cluster. sahara-12.0.0/releasenotes/notes/force-delete-apiv2-e372392bbc8639f8.yaml0000664000175000017500000000025213656752032025320 0ustar zuulzuul00000000000000--- features: - The ability to force delete clusters is exposed in Sahara APIv2. The Heat service must support Stack Abandon for force delete to function properly. sahara-12.0.0/releasenotes/notes/enable-mutable-configuration-2dd6b7a0e0fe4437.yaml0000664000175000017500000000031313656752032027601 0ustar zuulzuul00000000000000--- features: - | Operators can now update the running configuration of Sahara processes by sending the parent process a "HUP" signal. Note: The configuration option must support mutation. sahara-12.0.0/releasenotes/notes/some-polish-api-v2-2d2e390a74b088f9.yaml0000664000175000017500000000134413656752032025271 0ustar zuulzuul00000000000000--- other: - Some polishings to APIv2 have been made in an effort to bring it from experimental (and therefore, evolving and unpredictable) to stable. More instances of `tenant_id` have been changed to `project_id`, in the cluster and job template APIs. `job_id` was changed to `job_template_id` in the job API. 
The newly-minted query string validation feature has been fixed to allow `show_progress` as a parameter on cluster GET; on a similar note some APIv2 endpoints which previously could be filtered by `hadoop_version` are now filtered by `plugin_version` instead. Also, the schema for cluster PATCH in APIv1.1 now no longer includes the key `update_keypair`; its prior inclusion was a mistake. sahara-12.0.0/releasenotes/notes/remove-hardcoded-password-from-hive-eb923b518974e853.yaml0000664000175000017500000000014413656752032030625 0ustar zuulzuul00000000000000--- fixes: - Fixed issues with hardcoded password during starting hive process, bug 1498035. sahara-12.0.0/releasenotes/notes/ambari-hive-92b911e0a759ee88.yaml0000664000175000017500000000007313656752032024127 0ustar zuulzuul00000000000000--- fixes: - Fixed launching Hive jobs in Ambari plugin. sahara-12.0.0/releasenotes/notes/novaclient_images_to_glanceclient-0266a2bd92b4be05.yaml0000664000175000017500000000010513656752032030667 0ustar zuulzuul00000000000000--- upgrade: - Migration from novaclient.v2.images to glanceclient sahara-12.0.0/releasenotes/notes/cdh-labels-5695d95bce226051.yaml0000664000175000017500000000036113656752032023651 0ustar zuulzuul00000000000000--- features: - Versions 5.5.0 and 5.7.0 of Cloudera plugin are declared as stable. deprecations: - Versions 5, 5.3.0, 5.4.0 of Cloudera plugin are deprecated. It is no longer maintainted and supposed to be removed in P release. sahara-12.0.0/releasenotes/notes/storm-1.2-af75fedb413de56a.yaml0000664000175000017500000000015413656752032023671 0ustar zuulzuul00000000000000--- upgrade: - Adding new versions of Storm, 1.2.0 and 1.2.1. Both will exist under the same tag 1.2. sahara-12.0.0/releasenotes/notes/ambari-server-start-856403bc280dfba3.yaml0000664000175000017500000000007613656752032025674 0ustar zuulzuul00000000000000--- fixes: - Starting Ambari clusters on Centos 7 is fixed. sahara-12.0.0/releasenotes/notes/vanilla-2.8.2-support-84c89aad31105584.yaml0000664000175000017500000000011113656752032025445 0ustar zuulzuul00000000000000--- features: - | Support deploy hadoop 2.8.2 with vanilla plugin. sahara-12.0.0/releasenotes/notes/deprecate-spark-version-131-98eccc79b13b6b8f.yaml0000664000175000017500000000007113656752032027220 0ustar zuulzuul00000000000000--- deprecations: - Spark version 1.3.1 is deprecated. sahara-12.0.0/releasenotes/notes/ambari-downscaling-b9ba759ce9c7325e.yaml0000664000175000017500000000007613656752032025647 0ustar zuulzuul00000000000000--- fixes: - Fixed incorrect down scaling of ambari cluster sahara-12.0.0/releasenotes/notes/cdh_5_11_0_image_generation_validation-6334ef6d04950935.yaml0000664000175000017500000000024013656752032031063 0ustar zuulzuul00000000000000--- features: - Enables the creation and validation of CDH 5.11.0 images using the new image generation process where libguestfs replaces the use of DIB. sahara-12.0.0/releasenotes/notes/boot-from-volume-e7078452fac1a4a0.yaml0000664000175000017500000000013613656752032025206 0ustar zuulzuul00000000000000--- features: - Adding the ability to boot a Sahara cluster from volumes instead of images. sahara-12.0.0/releasenotes/notes/remove_custom_auth_domainname-984fd2d931e306cc.yaml0000664000175000017500000000035513656752032030117 0ustar zuulzuul00000000000000--- deprecations: - The custom admin_user_domain_name and admin_project_domain_name configuration options have been removed; they are provided by keystone_authtoken as user_domain_name and project_domain_name respectively. 
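To illustrate the removal of the custom domain options described in the release note just above, a deployment would rely on the standard [keystone_authtoken] section of sahara.conf instead; only the option names come from the note and keystonemiddleware, and the values shown are placeholders.

.. sourcecode:: ini

    [keystone_authtoken]
    # Formerly admin_user_domain_name / admin_project_domain_name; the
    # standard keystonemiddleware options are used instead.
    user_domain_name = Default
    project_domain_name = Default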
sahara-12.0.0/releasenotes/notes/mapr5.2.0-image-gen-c850e74977b00abe.yaml0000664000175000017500000000015413656752032025160 0ustar zuulzuul00000000000000--- features: - Adding the ability to create and validate MapR 5.2.0 images using the new image gen tool. sahara-12.0.0/releasenotes/notes/enhance-bfv-12bac06c4438675f.yaml0000664000175000017500000000036613656752032024104 0ustar zuulzuul00000000000000--- features: - In Sahara APIv2, the type, availability zone, and locality of boot volumes may be expressed explicitly through the `boot_volume_type`, `boot_volume_availability_zone`, and `boot_volume_local_to_instance` parameters. sahara-12.0.0/releasenotes/notes/strict-validation-query-string-a6cadbf2f9c57d06.yaml0000664000175000017500000000035613656752032030342 0ustar zuulzuul00000000000000--- other: - In APIv2 there is now strict checking of parameters in the query string. This means that unexpected values in the query string will give a 400 error (as opposed to previously being ignored, or causing a 500 error). sahara-12.0.0/releasenotes/notes/add-wsgi-server-support-c8fbc3d76d4e42f6.yaml0000664000175000017500000000020313656752032026667 0ustar zuulzuul00000000000000--- features: - Added support for running sahara-api as a WSGI application. Use the 'sahara-wsgi-api' command to use this feature. sahara-12.0.0/releasenotes/notes/add-mapr-sentry-6012c08b55d679de.yaml0000664000175000017500000000005413656752032024737 0ustar zuulzuul00000000000000--- features: - Add Sentry to MapR plugin sahara-12.0.0/releasenotes/notes/nova-network-removal-debe306fd7c61268.yaml0000664000175000017500000000055713656752032026176 0ustar zuulzuul00000000000000--- issues: - Ironic integration might be broken if floating IPs are used, due to the use of pre-created ports by the Sahara engine. The status of Ironic support was untested for this release. deprecations: - Support for nova-network is removed, reflective of its removal from nova itself and from python-novaclient. use_neutron=False is unsupported. sahara-12.0.0/releasenotes/notes/cdh-5-5-35e582e149a05632.yaml0000664000175000017500000000007013656752032022625 0ustar zuulzuul00000000000000--- features: - CDH 5.5.0 is supported in the CDH plugin. sahara-12.0.0/releasenotes/notes/cdh_5_9_0_image_generation_validation-19d10e6468e30b4f.yaml0000664000175000017500000000024013656752032031142 0ustar zuulzuul00000000000000--- features: - Enables the creation and validation of CDH 5.9.0 images using the new image generation process where libguestfs replaces the use of DIB. sahara-12.0.0/releasenotes/notes/event_log_for_hdp-a114511c477ef16d.yaml0000664000175000017500000000006113656752032025377 0ustar zuulzuul00000000000000--- features: - Added event log for HDP plugin sahara-12.0.0/releasenotes/notes/ntp-config-51ed9d612132e2fa.yaml0000664000175000017500000000036013656752032024042 0ustar zuulzuul00000000000000--- fixes: - | This fixes the issue with NTP configuration where a preferred server provided by the user was added to the end of the file and the defaults were not deleted. The preferred server is now added to the top of the file. sahara-12.0.0/releasenotes/notes/apiv2-preview-release-b1ee8cc9b2fb01da.yaml0000664000175000017500000000063013656752032026411 0ustar zuulzuul00000000000000--- features: - | Sahara's APIv2 is now exposed by default (although its state is still experimental). It has feature parity with Sahara's APIv1.1, but APIv2 brings better REST semantics, tweaks to some response payloads, and some other improvements.
APIv2 will remain labeled experimental until it is stabilized following the addition of new features to it in the coming cycle(s). sahara-12.0.0/releasenotes/notes/remove-spark-100-44f3d5efc3806410.yaml0000664000175000017500000000010213656752032024630 0ustar zuulzuul00000000000000--- deprecations: - Removed support for the Spark 1.0.0 plugin. sahara-12.0.0/releasenotes/notes/neutron-default-a6baf93d857d86b3.yaml0000664000175000017500000000022413656752032025213 0ustar zuulzuul00000000000000--- upgrade: - Neutron is used by default now (use_neutron=True). Nova-network is not functional for most use cases starting from Ocata. A configuration sketch follows below. sahara-12.0.0/releasenotes/notes/apiv2-stable-release-25ba9920c8e4632a.yaml0000664000175000017500000000013013656752032025625 0ustar zuulzuul00000000000000--- prelude: > - Sahara's APIv2 is now considered stable, and no longer experimental. sahara-12.0.0/releasenotes/notes/cdh_5_9_support-b603a2648b2e7b32.yaml0000664000175000017500000000007013656752032024720 0ustar zuulzuul00000000000000--- features: - CDH 5.9.0 is supported in the CDH plugin. sahara-12.0.0/releasenotes/notes/ironic-support-79e7ecad05f54029.yaml0000664000175000017500000000020613656752032025011 0ustar zuulzuul00000000000000--- other: - Ironic support has been tested after the latest updates to nova and sahara and is fully functional. sahara-12.0.0/releasenotes/notes/support-s3-job-binary-6d91267ae11d09d3.yaml0000664000175000017500000000013313656752032026013 0ustar zuulzuul00000000000000--- features: - An EDP job binary may reference a file stored in an S3-like object store. sahara-12.0.0/releasenotes/notes/substring-matching-1d5981b8e5b1d919.yaml0000664000175000017500000000041013656752032025543 0ustar zuulzuul00000000000000--- fixes: - Add regular expression matching on search values for certain string fields of sahara objects. This applies to list operations through the REST API and therefore applies to the dashboard and sahara client as well. Closes bug 1503345. sahara-12.0.0/releasenotes/notes/vanilla-2.7.5-support-ffeeb88fc4be34b4.yaml0000664000175000017500000000011113656752032026117 0ustar zuulzuul00000000000000--- features: - | Support deploying Hadoop 2.7.5 with the vanilla plugin. sahara-12.0.0/releasenotes/notes/mapr-services-new-versions-dc7652e33f26bbdc.yaml0000664000175000017500000000022613656752032027362 0ustar zuulzuul00000000000000--- features: - The following service versions were added to MapR 5.2.0 plugin - Pig 0.16 - Spark 2.0.1 - Hue 3.10 - Drill 1.8, 1.9 sahara-12.0.0/releasenotes/notes/add_mapr_repo_configs-04af1a67350bfd24.yaml0000664000175000017500000000015413656752032026276 0ustar zuulzuul00000000000000--- features: - MapR repositories can now be configured in the general section of cluster template configs sahara-12.0.0/releasenotes/notes/keypair-replacement-0c0cc3db0551c112.yaml0000664000175000017500000000016613656752032025716 0ustar zuulzuul00000000000000--- features: - | Use a new keypair to access the running cluster when the cluster's keypair is deleted. sahara-12.0.0/releasenotes/notes/cdh-513-bdce0d5d269d8f20.yaml0000664000175000017500000000007613656752032023215 0ustar zuulzuul00000000000000--- features: - Adding support for CDH 5.13.0 in the CDH plugin. sahara-12.0.0/releasenotes/notes/hdfs-dfs-94a9c4f64cf8994f.yaml0000664000175000017500000000020613656752032023535 0ustar zuulzuul00000000000000--- fixes: - | The command hadoop dfs has been deprecated in favor of hdfs dfs. This fix allows the use of the HBase service.
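A rough sahara.conf sketch for the neutron-default note above, which made ``use_neutron=True`` the default; this is only illustrative of the older behaviour, since the option was removed entirely in a later release (see the remove-use-neutron note further below).

.. sourcecode:: ini

    [DEFAULT]
    # default since Ocata; the option was later dropped altogether
    use_neutron = true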
sahara-12.0.0/releasenotes/notes/engine-opt-258ff1ae9b04d628.yaml0000664000175000017500000000012513656752032024056 0ustar zuulzuul00000000000000--- deprecations: - Option 'infrastructure engine' is removed from sahara configs. sahara-12.0.0/releasenotes/notes/transport_url-5bbbf0bb54d81727.yaml0000664000175000017500000000030213656752032024772 0ustar zuulzuul00000000000000--- features: - Separate transport url can be used for notifications purposes now, to enable this feature 'transport_url' should be provided in 'oslo_messaging_notifications' section. sahara-12.0.0/releasenotes/notes/api-insecure-cbd4fd5da71b29a3.yaml0000664000175000017500000000011413656752032024600 0ustar zuulzuul00000000000000--- fixes: - Fixed api_insecure handling in sessions. Closed bug 1539498. sahara-12.0.0/releasenotes/notes/zookeeper-configuration-steps-48c3d9706c86f227.yaml0000664000175000017500000000036213656752032027671 0ustar zuulzuul00000000000000--- prelude: > Documentation about distributed periodics are extended with steps about installation additional libs required for correct work of coordination backend. Please refer Advanced Configuration Guide for details. sahara-12.0.0/releasenotes/notes/refactor-floating-ips-logic-9d37d9297f3621b3.yaml0000664000175000017500000000026713656752032027162 0ustar zuulzuul00000000000000--- features: - Refactoring the logic on how floating ips are used by Sahara. This change will allow the coexistence of cluster using floating ips with cluster that do not. sahara-12.0.0/releasenotes/notes/policy_in_code-5847902775ff9861.yaml0000664000175000017500000000061213656752032024516 0ustar zuulzuul00000000000000--- features: - This feature allows the policy enforcement to be done in code thus facilitating better maintenance of the policy file. In code the default policies are set and the operator only needs to change the policy file if they wish to override the rule or role for a specific policy or operation. Also, a complete policy file can be generated using genconfig tool. sahara-12.0.0/releasenotes/notes/deprecate-cdh_5_5-0da56b562170566f.yaml0000664000175000017500000000010213656752032024775 0ustar zuulzuul00000000000000--- features: - Version 5.5.0 of Cloudera plugin is deprecated. sahara-12.0.0/releasenotes/notes/remove-cdh_5.0_5.3_5.4-b5f140e9b0233c07.yaml0000664000175000017500000000012113656752032025272 0ustar zuulzuul00000000000000--- features: - Versions 5.0.0 5.3.0 and 5.4.0 of Cloudera plugin are removed. sahara-12.0.0/releasenotes/notes/remove-hard-coded-oozie-password-b97475c8772aa1bd.yaml0000664000175000017500000000015713656752032030267 0ustar zuulzuul00000000000000--- fixes: - Fixed issues with hardcoded password during creation MySQL database for Oozie, bug 1541122. sahara-12.0.0/releasenotes/notes/honor-endpoint-type-neutron-4583128c383d9745.yaml0000664000175000017500000000015613656752032027137 0ustar zuulzuul00000000000000--- fixes: - Fixed issue with handling endpoint_type during creation neutron client, closed bug 1564805 sahara-12.0.0/releasenotes/notes/rack_awareness_for_cdh-e0cd5d4ab46aa1b5.yaml0000664000175000017500000000010713656752032026660 0ustar zuulzuul00000000000000--- features: - Added rack awareness feature for CDH 5.5 and CDH 5.7 sahara-12.0.0/releasenotes/notes/sahara-endpoint-version-discovery-826e9f31093cb10f.yaml0000664000175000017500000000064713656752032030502 0ustar zuulzuul00000000000000--- prelude: > - Sahara APIv2 is reaching a point of maturity. 
Therefore, new deployments should include an **unversioned** endpoint in the service catalog for the "data-processing" service, for the purposes of more intuitive version discovery. Eventually existing deployments should switch to an unversioned endpoint, too, but only after enough time has passed that the use of older clients is unlikely. sahara-12.0.0/releasenotes/notes/mapr-labels-5cc318616db59403.yaml0000664000175000017500000000036613656752032024051 0ustar zuulzuul00000000000000--- features: - MapR 5.1.0.mrv2 is now enabled. deprecations: - MapR 5.0.0.mrv2 is now deprecated. It is not recommended for use. It is better to use MapR 5.1.0.mrv2 instead. This version of the plugin will be removed in the Ocata release. sahara-12.0.0/releasenotes/notes/optional-project-id-apiv1-2e89756f6f16bd5e.yaml0000664000175000017500000000012113656752032026724 0ustar zuulzuul00000000000000--- other: - The presence of project ID in Sahara APIv1 paths is now optional. sahara-12.0.0/releasenotes/notes/auto_configs_for_hdp-011d460d37dcdf02.yaml0000664000175000017500000000020613656752032026146 0ustar zuulzuul00000000000000--- features: - Add the ability to automatically generate better configurations for Ambari clusters by using the 'ALWAYS_APPLY' strategy sahara-12.0.0/releasenotes/notes/mapr-health-check-2eba3d742a2b853f.yaml0000664000175000017500000000007513656752032025336 0ustar zuulzuul00000000000000--- features: - Custom health check is added to MapR pluginsahara-12.0.0/releasenotes/notes/trustee-conf-section-5994dcd48a9744d7.yaml0000664000175000017500000000053013656752032026032 0ustar zuulzuul00000000000000--- deprecations: - | The use of [keystone_authtoken] credentials for trust creation is now deprecated. Please use the new [trustee] config section (a sketch follows below). The options ``username``, ``password``, ``project_name``, ``user_domain_name``, ``project_domain_name``, and ``auth_url`` (with version) are obligatory within that section. sahara-12.0.0/releasenotes/notes/sahara-cfg-location-change-7b61454311b16ce8.yaml0000664000175000017500000000021613656752032026665 0ustar zuulzuul00000000000000--- upgrade: - | Sample configuration files previously installed in share/sahara will now be installed into etc/sahara instead. sahara-12.0.0/releasenotes/notes/spark-2.3-0277fe9feae6668a.yaml0000664000175000017500000000007513656752032023530 0ustar zuulzuul00000000000000--- upgrade: - Adding Spark 2.3 to the supported plugins list. sahara-12.0.0/releasenotes/notes/add-mapr-520-3ed6cd0ae9688e17.yaml0000664000175000017500000000007113656752032024070 0ustar zuulzuul00000000000000--- features: - MapR 5.2.0 is supported in the MapR plugin. sahara-12.0.0/releasenotes/notes/hdp-removed-from-defaults-31d1e1f15973b682.yaml0000664000175000017500000000025613656752032026631 0ustar zuulzuul00000000000000--- upgrade: - The HDP plugin has been removed from the default configuration list. End users who are using HDP should ensure that their configuration files continue to list "hdp". sahara-12.0.0/releasenotes/notes/hdp26-5a406d7066706bf1.yaml0000664000175000017500000000010713656752032022564 0ustar zuulzuul00000000000000--- features: - Implemented support of HDP 2.6 in the Ambari plugin.
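A minimal sketch of the new ``[trustee]`` section described in the trustee-conf-section note above; every value below is a hypothetical placeholder, and the ``auth_url`` must include the Identity API version as the note states.

.. sourcecode:: ini

    [trustee]
    # hypothetical placeholder values
    username = sahara
    password = SECRET
    project_name = service
    user_domain_name = Default
    project_domain_name = Default
    auth_url = http://controller:5000/v3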
sahara-12.0.0/releasenotes/notes/designate-integration-784c5f7f29546015.yaml0000664000175000017500000000014113656752032026064 0ustar zuulzuul00000000000000--- features: - Added integration of Designate for hostname resolution through dns servers sahara-12.0.0/releasenotes/notes/ca-cert-fix-5c434a82f9347039.yaml0000664000175000017500000000015513656752032023700 0ustar zuulzuul00000000000000--- fixes: - CA certificate handling in keystone, nova, neutron and cinder clients are fixed (#330635) sahara-12.0.0/releasenotes/notes/ambari-agent-pkg-install-timeout-param-d50e5c15e06fa51e.yaml0000664000175000017500000000016313656752032031413 0ustar zuulzuul00000000000000--- features: - Adding the ability to change default timeout parameter for ambari agent package installation sahara-12.0.0/releasenotes/notes/add-mapr-kafka-3a808bbc1aa21055.yaml0000664000175000017500000000005313656752032024517 0ustar zuulzuul00000000000000--- features: - Add Kafka to MapR plugin sahara-12.0.0/releasenotes/notes/hadoop-swift-jar-for-ambari-4439913b01d42468.yaml0000664000175000017500000000012713656752032026701 0ustar zuulzuul00000000000000--- fixes: - This patch adds ability to work with swift by using Keystone API v3 sahara-12.0.0/releasenotes/notes/support-s3-data-source-a912e2cdf4cd51fb.yaml0000664000175000017500000000013413656752032026462 0ustar zuulzuul00000000000000--- features: - An EDP data source may reference a file stored in a S3-like object store. sahara-12.0.0/releasenotes/notes/mapr-remove-spark-standalone-293ca864de9a7848.yaml0000664000175000017500000000010413656752032027441 0ustar zuulzuul00000000000000--- features: - Remove support for Spark standalone in MapR pluginsahara-12.0.0/releasenotes/notes/add-scheduler-edp-job-9eda17dd174e53fa.yaml0000664000175000017500000000010013656752032026162 0ustar zuulzuul00000000000000--- features: - Add ability of scheduling EDP jobs for sahara sahara-12.0.0/releasenotes/notes/consolidate-cluster-creation-apiv2-5d5aceeb2e97c702.yaml0000664000175000017500000000033513656752032030754 0ustar zuulzuul00000000000000--- features: - The experimental APIv2 supports simultaneous creation of multiple clusters only through POST /v2/clusters (using the `count` parameter). The POST /v2/clusters/multiple endpoint has been removed. ././@LongLink0000000000000000000000000000015200000000000011213 Lustar 00000000000000sahara-12.0.0/releasenotes/notes/remove-upload-oozie-sharelib-step-in-vanilla-2.8.2-546b2026e2f5d557.yamlsahara-12.0.0/releasenotes/notes/remove-upload-oozie-sharelib-step-in-vanilla-2.8.2-546b2026e2f5d5570000664000175000017500000000033513656752032032047 0ustar zuulzuul00000000000000--- issues: - | Remove the step "upload httpclient to oozie/sharelib" in sahara code. User should use latest vanilla-2.8.2 image which is built on SIE "Change-ID: I3a25ee8c282849911089adf6c3593b1bb50fd067". sahara-12.0.0/releasenotes/notes/s3-datasource-protocol-d3abd0b22f653b3b.yaml0000664000175000017500000000013613656752032026441 0ustar zuulzuul00000000000000--- other: - | The URL of an S3 data source may have `s3://` or `s3a://`, equivalently. sahara-12.0.0/releasenotes/notes/cdh_5_11_support-10d4abb91bc4475f.yaml0000664000175000017500000000007113656752032025132 0ustar zuulzuul00000000000000--- features: - CDH 5.11.0 is supported in CDH plugin. sahara-12.0.0/releasenotes/notes/deprecate-sahara-all-entry-point-1446a00dab643b7b.yaml0000664000175000017500000000021413656752032030207 0ustar zuulzuul00000000000000--- deprecations: - The `sahara-all` entry point is now deprecated. 
Please use the sahara-api and sahara-engine entry points instead. sahara-12.0.0/releasenotes/notes/ambari_2_4_image_generation_validation-47eabb9fa90384c8.yaml0000664000175000017500000000024013656752032031560 0ustar zuulzuul00000000000000--- features: - Enables the creation and validation of Ambari 2.4 images using the new image generation process where libguestfs replaces the use of DIB. sahara-12.0.0/releasenotes/notes/add_kafka_in_cdh-774c7c051480c892.yaml0000664000175000017500000000007213656752032024765 0ustar zuulzuul00000000000000--- features: - Kafka was added in CDH 5.5 and CDH 5.7 sahara-12.0.0/releasenotes/notes/cdh_5_7_image_generation_validation-308e7529a9018663.yaml0000664000175000017500000000023713656752032030520 0ustar zuulzuul00000000000000--- features: - Enables the creation and validation of CDH 5.7.0 images using the new image generation process where libguestfs replaces the use of DIB. sahara-12.0.0/releasenotes/notes/remove-hdp-137d0ad3d2389b7a.yaml0000664000175000017500000000013613656752032024047 0ustar zuulzuul00000000000000--- deprecations: - Support of HDP 2.0.6 plugin was removed. Use Ambari plugin instead. sahara-12.0.0/releasenotes/notes/proxy-user-lowercase-f116f7b7e89274cb.yaml0000664000175000017500000000032013656752032026134 0ustar zuulzuul00000000000000--- upgrade: - | The default proxy role for Swift is now member instead of Member. Keystone now creates the former by default, even if the latter is recognized to be the same (case preserving). sahara-12.0.0/releasenotes/notes/apiv2-payload-tweaks-b73c20a35263d958.yaml0000664000175000017500000000073613656752032025617 0ustar zuulzuul00000000000000--- other: - A few responses in the experimental (but nearly-stable) APIv2 have been tweaked. To be specific, the key `hadoop_version` has been replaced with `plugin_version`, the key `job` has been replaced with `job_template`, the key `job_execution` has been replaced with `job`, and the key `oozie_job_id` has been replaced with `engine_job_id`. In fact, these changes were all previously partially implemented, and are now completely implemented. sahara-12.0.0/releasenotes/notes/cdh_5_7_support-9522cb9b4dce2378.yaml0000664000175000017500000000006613656752032025022 0ustar zuulzuul00000000000000--- features: - CDH 5.7 is supported in CDH plugin. sahara-12.0.0/releasenotes/notes/key_manager_integration-e32d141809c8cc46.yaml0000664000175000017500000000024613656752032026615 0ustar zuulzuul00000000000000--- features: - OpenStack Key Manager service can now be used by sahara to enable storage of sensitive information in an external service such as barbican. sahara-12.0.0/releasenotes/notes/hdp25-b35ef99c240fc127.yaml0000664000175000017500000000010713656752032022731 0ustar zuulzuul00000000000000--- features: - Implemented support of HDP 2.5 in the Ambari plugin. sahara-12.0.0/releasenotes/notes/deprecate-plugin-vanilla260-46e4b8fe96e8fe68.yaml0000664000175000017500000000007713656752032027233 0ustar zuulzuul00000000000000--- deprecations: - Removed support of Vanilla 2.6.0 plugin. sahara-12.0.0/releasenotes/notes/deprecate-hdp-a9ff0ecf6006da49.yaml0000664000175000017500000000023613656752032024656 0ustar zuulzuul00000000000000--- deprecations: - The HDP 2.0.6 plugin is deprecated in Mitaka release and will be removed in Newton release. Please, use the Ambari 2.3 instead. 
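Relating to the proxy-user-lowercase note above: deployments that still depend on the capitalized role can pin the old behaviour by overriding the proxy role option. This is only a sketch, assuming the option name ``proxy_user_role_names`` in the ``[DEFAULT]`` section.

.. sourcecode:: ini

    [DEFAULT]
    # assumption: option name/section; keeps the pre-change role name
    proxy_user_role_names = Member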
sahara-12.0.0/releasenotes/notes/remove-mapr-500-3df3041be99a864c.yaml0000664000175000017500000000010113656752032024546 0ustar zuulzuul00000000000000--- deprecations: - Removed support for the MapR 5.0.0 plugin. sahara-12.0.0/releasenotes/notes/fixing-policy-inconsistencies-984020000cc3882a.yaml0000664000175000017500000000106613656752032027533 0ustar zuulzuul00000000000000--- fixes: - | With APIv2 we detected some inconsistencies in the policies. This patch updates the policies to fix those inconsistencies. other: - | All APIv2 policy names have been changed to the recommended format: specifically, changes to resource names (now _singular_, whereas previously they may have been _plural_, or otherwise inconsistent), action verbs (now fully independent of HTTP semantics) and overall formatting (hyphens replace underscores). Eventually, the remaining non-conforming policy names will be deprecated too. sahara-12.0.0/releasenotes/notes/fix-install-provision-events-c1bd2e05bf2be6bd.yaml0000664000175000017500000000011513656752032030050 0ustar zuulzuul00000000000000--- fixes: - Fix incomplete event logs for Oozie and Drill in the MapR plugin.sahara-12.0.0/releasenotes/notes/add-storm-version-1_1_0-3e10b34824706a62.yaml0000664000175000017500000000007413656752032025727 0ustar zuulzuul00000000000000--- features: - Storm 1.1.0 is supported in the Storm plugin. sahara-12.0.0/releasenotes/notes/rack_awareness_for_hdp-6e3d44468cc141a5.yaml0000664000175000017500000000007613656752032026420 0ustar zuulzuul00000000000000--- features: - Added rack awareness feature for HDP plugin sahara-12.0.0/releasenotes/notes/config-groups-ambari-837de6d33eb0fa87.yaml0000664000175000017500000000015713656752032026114 0ustar zuulzuul00000000000000--- fixes: - After decommissioning hosts, all associated config groups will be removed in the Ambari plugin. sahara-12.0.0/releasenotes/notes/mapr-services-new-versions-b32c2e8fe07d1600.yaml0000664000175000017500000000025313656752032027211 0ustar zuulzuul00000000000000--- features: - The following service versions were added to MapR 5.2.0 plugin - HBase 1.1 - Drill 1.6 - Mahout 0.11 0.12 - Spark 1.6.1 - Impala 2.5 sahara-12.0.0/releasenotes/notes/remove-use-neutron-2499b661dce041d4.yaml0000664000175000017500000000025513656752032025507 0ustar zuulzuul00000000000000--- upgrade: - | Nova network has been fully removed from the OpenStack codebase; all switches on use_neutron and the configuration value itself have been removed. sahara-12.0.0/releasenotes/notes/keystoneclient-to-keystonauth-migration-c75988975ad1a506.yaml0000664000175000017500000000016013656752032031705 0ustar zuulzuul00000000000000--- upgrade: - Migration from keystoneclient to keystoneauth is done in order to use the auth features of keystone. sahara-12.0.0/releasenotes/notes/ambari26-image-pack-88c9aad59bf635b2.yaml0000664000175000017500000000012613656752032025476 0ustar zuulzuul00000000000000--- features: - Adding the ability to create Ambari 2.6 images using sahara-image-pack sahara-12.0.0/releasenotes/notes/convert-to-cluster-template-43d502496d18625e.yaml0000664000175000017500000000014513656752032027162 0ustar zuulzuul00000000000000--- deprecations: - The convert-to-cluster-template feature is no longer supported by all plugins. sahara-12.0.0/releasenotes/source/0000775000175000017500000000000013656752227017063 5ustar zuulzuul00000000000000sahara-12.0.0/releasenotes/source/train.rst0000664000175000017500000000017613656752032020730 0ustar zuulzuul00000000000000========================== Train Series Release Notes ========================== .. 
release-notes:: :branch: stable/train sahara-12.0.0/releasenotes/source/_static/0000775000175000017500000000000013656752227020511 5ustar zuulzuul00000000000000sahara-12.0.0/releasenotes/source/_static/.placeholder0000664000175000017500000000000013656752032022754 0ustar zuulzuul00000000000000sahara-12.0.0/releasenotes/source/conf.py0000664000175000017500000001473413656752032020365 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. # Sahara Release Notes documentation build configuration file extensions = [ 'reno.sphinxext', 'openstackdocstheme' ] # openstackdocstheme options repository_name = 'openstack/sahara' use_storyboard = True # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The master toctree document. master_doc = 'index' # General information about the project. copyright = u'2015, Sahara Developers' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = [] # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # -- Options for HTML output ---------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'openstackdocs' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. # html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. # html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". # html_title = None # A shorter title for the navigation bar. Default is the same as html_title. # html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. # html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. # html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # Add any extra paths that contain custom files (such as robots.txt or # .htaccess) here, relative to this directory. These files are copied # directly to the root of the documentation. # html_extra_path = [] # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. # html_use_smartypants = True # Custom sidebar templates, maps document names to template names. # html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. 
# html_additional_pages = {} # If false, no module index is generated. # html_domain_indices = True # If false, no index is generated. # html_use_index = True # If true, the index is split into individual pages for each letter. # html_split_index = False # If true, links to the reST sources are added to the pages. # html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. # html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. # html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. # html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). # html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'SaharaReleaseNotesdoc' # -- Options for LaTeX output --------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). # 'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). # 'pointsize': '10pt', # Additional stuff for the LaTeX preamble. # 'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [ ('index', 'SaharaReleaseNotes.tex', u'Sahara Release Notes Documentation', u'Sahara Developers', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. # latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. # latex_use_parts = False # If true, show page references after internal links. # latex_show_pagerefs = False # If true, show URL addresses after external links. # latex_show_urls = False # Documents to append as an appendix to all manuals. # latex_appendices = [] # If false, no module index is generated. # latex_domain_indices = True # -- Options for manual page output --------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'saharareleasenotes', u'Sahara Release Notes Documentation', [u'Sahara Developers'], 1) ] # If true, show URL addresses after external links. # man_show_urls = False # -- Options for Texinfo output ------------------------------------------- # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'SaharaReleaseNotes', u'Sahara Release Notes Documentation', u'Sahara Developers', 'SaharaReleaseNotes', 'One line description of project.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. # texinfo_appendices = [] # If false, no module index is generated. # texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. # texinfo_show_urls = 'footnote' # If true, do not generate a @detailmenu in the "Top" node's menu. 
# texinfo_no_detailmenu = False # -- Options for Internationalization output ------------------------------ locale_dirs = ['locale/'] sahara-12.0.0/releasenotes/source/mitaka.rst0000664000175000017500000000023213656752032021052 0ustar zuulzuul00000000000000=================================== Mitaka Series Release Notes =================================== .. release-notes:: :branch: origin/stable/mitaka sahara-12.0.0/releasenotes/source/liberty.rst0000664000175000017500000000022213656752032021255 0ustar zuulzuul00000000000000============================== Liberty Series Release Notes ============================== .. release-notes:: :branch: origin/stable/liberty sahara-12.0.0/releasenotes/source/stein.rst0000664000175000017500000000022113656752032020724 0ustar zuulzuul00000000000000=================================== Stein Series Release Notes =================================== .. release-notes:: :branch: stable/stein sahara-12.0.0/releasenotes/source/queens.rst0000664000175000017500000000022313656752032021104 0ustar zuulzuul00000000000000=================================== Queens Series Release Notes =================================== .. release-notes:: :branch: stable/queens sahara-12.0.0/releasenotes/source/unreleased.rst0000664000175000017500000000016013656752032021733 0ustar zuulzuul00000000000000============================== Current Series Release Notes ============================== .. release-notes:: sahara-12.0.0/releasenotes/source/rocky.rst0000664000175000017500000000022113656752032020731 0ustar zuulzuul00000000000000=================================== Rocky Series Release Notes =================================== .. release-notes:: :branch: stable/rocky sahara-12.0.0/releasenotes/source/index.rst0000664000175000017500000000030613656752032020715 0ustar zuulzuul00000000000000====================== Sahara Release Notes ====================== .. toctree:: :maxdepth: 1 unreleased train stein rocky queens pike ocata newton mitaka liberty sahara-12.0.0/releasenotes/source/ocata.rst0000664000175000017500000000023013656752032020671 0ustar zuulzuul00000000000000=================================== Ocata Series Release Notes =================================== .. release-notes:: :branch: origin/stable/ocata sahara-12.0.0/releasenotes/source/newton.rst0000664000175000017500000000023213656752032021116 0ustar zuulzuul00000000000000=================================== Newton Series Release Notes =================================== .. release-notes:: :branch: origin/stable/newton sahara-12.0.0/releasenotes/source/_templates/0000775000175000017500000000000013656752227021220 5ustar zuulzuul00000000000000sahara-12.0.0/releasenotes/source/_templates/.placeholder0000664000175000017500000000000013656752032023463 0ustar zuulzuul00000000000000sahara-12.0.0/releasenotes/source/pike.rst0000664000175000017500000000021713656752032020537 0ustar zuulzuul00000000000000=================================== Pike Series Release Notes =================================== .. 
release-notes:: :branch: stable/pike sahara-12.0.0/bandit.yaml0000664000175000017500000001174713656752032015223 0ustar zuulzuul00000000000000# optional: after how many files to update progress #show_progress_every: 100 # optional: plugins directory name #plugins_dir: 'plugins' # optional: plugins discovery name pattern plugin_name_pattern: '*.py' # optional: terminal escape sequences to display colors #output_colors: # DEFAULT: '\033[0m' # HEADER: '\033[95m' # LOW: '\033[94m' # WARN: '\033[93m' # ERROR: '\033[91m' # optional: log format string #log_format: "[%(module)s]\t%(levelname)s\t%(message)s" # globs of files which should be analyzed include: - '*.py' - '*.pyw' # a list of strings, which if found in the path will cause files to be excluded # for example /tests/ - to remove all all files in tests directory exclude_dirs: profiles: sahara_default: include: - hardcoded_password_string - hardcoded_password_funcarg # - hardcoded_password_default - blacklist_calls - blacklist_imports - subprocess_popen_with_shell_equals_true - subprocess_without_shell_equals_true - any_other_function_with_shell_equals_true - start_process_with_a_shell - start_process_with_no_shell - hardcoded_sql_expressions - jinja2_autoescape_false - use_of_mako_templates blacklist_calls: bad_name_sets: - pickle: qualnames: [pickle.loads, pickle.load, pickle.Unpickler, cPickle.loads, cPickle.load, cPickle.Unpickler] message: "Pickle library appears to be in use, possible security issue." - marshal: qualnames: [marshal.load, marshal.loads] message: "Deserialization with the marshal module is possibly dangerous." - md5: qualnames: [hashlib.md5] message: "Use of insecure MD5 hash function." - mktemp_q: qualnames: [tempfile.mktemp] message: "Use of insecure and deprecated function (mktemp)." - eval: qualnames: [eval] message: "Use of possibly insecure function - consider using safer ast.literal_eval." - mark_safe: qualnames: [mark_safe] message: "Use of mark_safe() may expose cross-site scripting vulnerabilities and should be reviewed." - httpsconnection: qualnames: [httplib.HTTPSConnection] message: "Use of HTTPSConnection does not provide security, see https://wiki.openstack.org/wiki/OSSN/OSSN-0033" - yaml_load: qualnames: [yaml.load] message: "Use of unsafe yaml load. Allows instantiation of arbitrary objects. Consider yaml.safe_load()." - urllib_urlopen: qualnames: [urllib.urlopen, urllib.urlretrieve, urllib.URLopener, urllib.FancyURLopener, urllib2.urlopen, urllib2.Request] message: "Audit url open for permitted schemes. Allowing use of file:/ or custom schemes is often unexpected." shell_injection: # Start a process using the subprocess module, or one of its wrappers. subprocess: [subprocess.Popen, subprocess.call, subprocess.check_call, subprocess.check_output, utils.execute, utils.execute_with_timeout] # Start a process with a function vulnerable to shell injection. shell: [os.system, os.popen, os.popen2, os.popen3, os.popen4, popen2.popen2, popen2.popen3, popen2.popen4, popen2.Popen3, popen2.Popen4, commands.getoutput, commands.getstatusoutput] # Start a process with a function that is not vulnerable to shell injection. no_shell: [os.execl, os.execle, os.execlp, os.execlpe, os.execv,os.execve, os.execvp, os.execvpe, os.spawnl, os.spawnle, os.spawnlp, os.spawnlpe, os.spawnv, os.spawnve, os.spawnvp, os.spawnvpe, os.startfile] blacklist_imports: bad_import_sets: - telnet: imports: [telnetlib] level: ERROR message: "Telnet is considered insecure. Use SSH or some other encrypted protocol." 
- info_libs: imports: [pickle, cPickle, subprocess, Crypto] level: LOW message: "Consider possible security implications associated with {module} module." hardcoded_tmp_directory: tmp_dirs: [/tmp, /var/tmp, /dev/shm] hardcoded_password: word_list: "wordlist/default-passwords" ssl_with_bad_version: bad_protocol_versions: - 'PROTOCOL_SSLv2' - 'SSLv2_METHOD' - 'SSLv23_METHOD' - 'PROTOCOL_SSLv3' # strict option - 'PROTOCOL_TLSv1' # strict option - 'SSLv3_METHOD' # strict option - 'TLSv1_METHOD' # strict option password_config_option_not_marked_secret: function_names: - oslo.config.cfg.StrOpt - oslo_config.cfg.StrOpt execute_with_run_as_root_equals_true: function_names: - ceilometer.utils.execute - cinder.utils.execute - neutron.agent.linux.utils.execute - nova.utils.execute - nova.utils.trycmd try_except_pass: check_typed_exception: True sahara-12.0.0/api-ref/0000775000175000017500000000000013656752227014415 5ustar zuulzuul00000000000000sahara-12.0.0/api-ref/source/0000775000175000017500000000000013656752227015715 5ustar zuulzuul00000000000000sahara-12.0.0/api-ref/source/v2/0000775000175000017500000000000013656752227016244 5ustar zuulzuul00000000000000sahara-12.0.0/api-ref/source/v2/node-group-templates.inc0000664000175000017500000001434413656752032023012 0ustar zuulzuul00000000000000.. -*- rst -*- ==================== Node group templates ==================== A cluster is a group of nodes with the same configuration. A node group template configures a node in the cluster. A template configures Hadoop processes and VM characteristics, such as the number of reduced slots for task tracker, the number of CPUs, and the amount of RAM. The template specifies the VM characteristics through an OpenStack flavor. List node group templates ========================= .. rest_method:: GET /v2/node-group-templates Lists available node group templates. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - limit: limit - marker: marker - sort_by: sort_by_node_group_templates Response Parameters ------------------- .. rest_parameters:: parameters.yaml - markers: markers - prev: prev - next: next - volume_local_to_instance: volume_local_to_instance - availability_zone: availability_zone - updated_at: updated_at - use_autoconfig: use_autoconfig - volumes_per_node: volumes_per_node - id: node_group_template_id - security_groups: security_groups - shares: object_shares - node_configs: node_configs - auto_security_group: auto_security_group - volumes_availability_zone: volumes_availability_zone - description: node_group_template_description - volume_mount_prefix: volume_mount_prefix - plugin_name: plugin_name - floating_ip_pool: floating_ip_pool - is_default: is_default - image_id: image_id - volumes_size: volumes_size - is_proxy_gateway: is_proxy_gateway - is_public: object_is_public - plugin_version: plugin_version - name: node_group_template_name - project_id: project_id - created_at: created_at - volume_type: volume_type - is_protected: object_is_protected - node_processes: node_processes - flavor_id: flavor_id Response Example ---------------- .. rest_method:: GET /v2/node-group-templates?limit=2&marker=38b4e146-1d39-4822-bad2-fef1bf304a52&sort_by=name .. literalinclude:: samples/node-group-templates/node-group-templates-list-response.json :language: javascript Create node group template ========================== .. rest_method:: POST /v2/node-group-templates Creates a node group template. Normal response codes: 202 Request Example --------------- .. 
literalinclude:: samples/node-group-templates/node-group-template-create-request.json :language: javascript Response Parameters ------------------- .. rest_parameters:: parameters.yaml - volume_local_to_instance: volume_local_to_instance - availability_zone: availability_zone - updated_at: updated_at - use_autoconfig: use_autoconfig - volumes_per_node: volumes_per_node - id: node_group_template_id - security_groups: security_groups - shares: object_shares - node_configs: node_configs - auto_security_group: auto_security_group - volumes_availability_zone: volumes_availability_zone - description: node_group_template_description - volume_mount_prefix: volume_mount_prefix - plugin_name: plugin_name - floating_ip_pool: floating_ip_pool - is_default: is_default - image_id: image_id - volumes_size: volumes_size - is_proxy_gateway: is_proxy_gateway - is_public: object_is_public - plugin_version: plugin_version - name: node_group_template_name - project_id: project_id - created_at: created_at - volume_type: volume_type - is_protected: object_is_protected - node_processes: node_processes - flavor_id: flavor_id Show node group template details ================================ .. rest_method:: GET /v2/node-group-templates/{node_group_template_id} Shows a node group template, by ID. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - node_group_template_id: url_node_group_template_id Response Parameters ------------------- .. rest_parameters:: parameters.yaml - volume_local_to_instance: volume_local_to_instance - availability_zone: availability_zone - updated_at: updated_at - use_autoconfig: use_autoconfig - volumes_per_node: volumes_per_node - id: node_group_template_id - security_groups: security_groups - shares: object_shares - node_configs: node_configs - auto_security_group: auto_security_group - volumes_availability_zone: volumes_availability_zone - description: node_group_template_description - volume_mount_prefix: volume_mount_prefix - plugin_name: plugin_name - floating_ip_pool: floating_ip_pool - is_default: is_default - image_id: image_id - volumes_size: volumes_size - is_proxy_gateway: is_proxy_gateway - is_public: object_is_public - plugin_version: plugin_version - name: node_group_template_name - project_id: project_id - created_at: created_at - volume_type: volume_type - is_protected: object_is_protected - node_processes: node_processes - flavor_id: flavor_id Response Example ---------------- .. literalinclude:: samples/node-group-templates/node-group-template-show-response.json :language: javascript Delete node group template ========================== .. rest_method:: DELETE /v2/node-group-templates/{node_group_template_id} Deletes a node group template. Normal response codes:204 Request ------- .. rest_parameters:: parameters.yaml - node_group_template_id: url_node_group_template_id Update node group template ========================== .. rest_method:: PATCH /v2/node-group-templates/{node_group_template_id} Updates a node group template. Normal respose codes:202 Request ------- .. rest_parameters:: parameters.yaml - node_group_template_id: url_node_group_template_id Request Example --------------- .. literalinclude:: samples/node-group-templates/node-group-template-update-request.json :language: javascript Export node group template ========================== .. rest_method:: GET /v2/node-group-templates/{node_group_template_id}/export Exports a node group template. Normal respose codes:202 Request ------- .. 
rest_parameters:: parameters.yaml - node_group_template_id: url_node_group_template_id Request Example --------------- .. literalinclude:: samples/node-group-templates/node-group-template-update-request.json :language: javascript sahara-12.0.0/api-ref/source/v2/job-templates.inc0000664000175000017500000001011213656752032021472 0ustar zuulzuul00000000000000.. -*- rst -*- ============= Job templates ============= A job templates object lists the binaries that a job needs to run. To run a job, you must specify data sources and job parameters. You can run a job on an existing or new transient cluster. List job templates ================== .. rest_method:: GET /v2/job-templates Lists all job templates. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - limit: limit - marker: marker - sort_by: sort_by_job_templates Response Parameters ------------------- .. rest_parameters:: parameters.yaml - job_templates: job_templates - description: job_description - project_id: project_id - created_at: created_at - mains: mains - updated_at: updated_at - libs: libs - is_protected: object_is_protected - interface: interface - is_public: object_is_public - type: type - id: job_template_id - name: job_template_name - markers: markers - prev: prev - next: next Response Example ---------------- ..rest_method:: GET /v2/job-templates?limit=2 .. literalinclude:: samples/job-templates/job-templates-list-response.json :language: javascript Create job template =================== .. rest_method:: POST /v2/job-templates Creates a job object. Normal response codes:202 Request Example --------------- .. literalinclude:: samples/job-templates/job-template-create-request.json :language: javascript Response Parameters ------------------- .. rest_parameters:: parameters.yaml - description: job_description - project_id: project_id - created_at: created_at - mains: mains - updated_at: updated_at - libs: libs - is_protected: object_is_protected - interface: interface - is_public: object_is_public - type: type - id: job_template_id - name: job_template_name Show job template details ========================= .. rest_method:: GET /v2/job-templates/{job_template_id} Shows details for a job template. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - job_template_id: url_job_template_id Response Parameters ------------------- .. rest_parameters:: parameters.yaml - description: job_description - project_id: project_id - created_at: created_at - mains: mains - updated_at: updated_at - libs: libs - is_protected: object_is_protected - interface: interface - is_public: object_is_public - type: type - id: job_template_id - name: job_template_name Response Example ---------------- .. literalinclude:: samples/job-templates/job-template-show-response.json :language: javascript Remove job template =================== .. rest_method:: DELETE /v2/job-templates/{job_template_id} Removes a job. Normal response codes:204 Request ------- .. rest_parameters:: parameters.yaml - job_template_id: url_job_template_id Update job template object ========================== .. rest_method:: PATCH /v2/job-templates/{job_template_id} Updates a job template object. Normal response codes:202 Request ------- .. rest_parameters:: parameters.yaml - job_template_id: url_job_template_id Request Example --------------- .. literalinclude:: samples/job-templates/job-template-update-request.json :language: javascript Response Parameters ------------------- .. 
rest_parameters:: parameters.yaml - description: job_description - project_id: project_id - created_at: created_at - mains: mains - updated_at: updated_at - libs: libs - is_protected: object_is_protected - interface: interface - is_public: object_is_public - type: type - id: job_template_id - name: job_template_name Get job template config hints ============================= .. rest_method:: GET /v2/job-templates/config-hints/{job_type} Get job template config hints Normal response codes:202 Request ------- .. rest_parameters:: parameters.yaml - job_type: url_job_type Response Parameters ------------------- .. rest_parameters:: parameters.yaml - job_config: job_config - args: args - configs: configs sahara-12.0.0/api-ref/source/v2/data-sources.inc0000664000175000017500000000613513656752032021330 0ustar zuulzuul00000000000000.. -*- rst -*- ============ Data sources ============ A data source object defines the location of input or output for MapReduce jobs and might reference different types of storage. The Data Processing service does not validate data source locations. Show data source details ======================== .. rest_method:: GET /v2/data-sources/{data_source_id} Shows details for a data source. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - data_source_id: url_data_source_id Response Parameters ------------------- .. rest_parameters:: parameters.yaml - description: data_source_description - url: url - project_id: project_id - created_at: created_at - updated_at: updated_at - is_protected: object_is_protected - is_public: object_is_public - type: type - id: data_source_id - name: data_source_name Response Example ---------------- .. literalinclude:: samples/data-sources/data-source-show-response.json :language: javascript Delete data source ================== .. rest_method:: DELETE /v2/data-sources/{data_source_id} Deletes a data source. Normal response codes:204 Request ------- .. rest_parameters:: parameters.yaml - data_source_id: url_data_source_id Update data source ================== .. rest_method:: PATCH /v2/data-sources/{data_source_id} Updates a data source. Normal response codes:202 Request ------- .. rest_parameters:: parameters.yaml - data_source_id: url_data_source_id Request Example --------------- .. literalinclude:: samples/data-sources/data-source-update-request.json :language: javascript List data sources ================= .. rest_method:: GET /v2/data-sources Lists all data sources. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - limit: limit - marker: marker - sort_by: sort_by_data_sources Response Parameters ------------------- .. rest_parameters:: parameters.yaml - markers: markers - prev: prev - next: next - description: data_source_description - url: url - project_id: project_id - created_at: created_at - updated_at: updated_at - is_protected: object_is_protected - is_public: object_is_public - type: type - id: data_source_id - name: data_source_name Response Example ---------------- .. rest_method:: GET /v2/data-sourses?sort_by=-name .. literalinclude:: samples/data-sources/data-sources-list-response.json :language: javascript Create data source ================== .. rest_method:: POST /v2/data-sources Creates a data source. Normal response codes:202 Request Example --------------- .. literalinclude:: samples/data-sources/data-source-register-hdfs-request.json :language: javascript Response Parameters ------------------- .. 
rest_parameters:: parameters.yaml - description: data_source_description - url: url - project_id: project_id - created_at: created_at - updated_at: updated_at - is_protected: object_is_protected - is_public: object_is_public - type: type - id: data_source_id - name: data_source_name sahara-12.0.0/api-ref/source/v2/plugins.inc0000664000175000017500000000504713656752032020420 0ustar zuulzuul00000000000000.. -*- rst -*- ======= Plugins ======= A plugin object defines the Hadoop or Spark version that it can install and which configurations can be set for the cluster. Show plugin details =================== .. rest_method:: GET /v2/plugins/{plugin_name} Shows details for a plugin. Normal response codes: 200 Error response codes: 400, 500 Request ------- .. rest_parameters:: parameters.yaml - plugin_name: url_plugin_name Response Parameters ------------------- .. rest_parameters:: parameters.yaml - versions: versions - title: title - description: description_plugin - name: plugin_name Response Example ---------------- .. literalinclude:: samples/plugins/plugin-show-response.json :language: javascript List plugins ============ .. rest_method:: GET /v2/plugins Lists all registered plugins. Normal response codes: 200 Error response codes: 400, 500 Response Parameters ------------------- .. rest_parameters:: parameters.yaml - title: title - versions: versions - plugins: plugins - description: description_plugin - name: plugin_name Response Example ---------------- .. literalinclude:: samples/plugins/plugins-list-response.json :language: javascript Show plugin version details =========================== .. rest_method:: GET /v2/plugins/{plugin_name}/{version} Shows details for a plugin version. Normal response codes: 200 Error response codes: 400, 500 Request ------- .. rest_parameters:: parameters.yaml - plugin_name: url_plugin_name - version: version Response Parameters ------------------- .. rest_parameters:: parameters.yaml - versions: versions - title: title - description: description_plugin - name: plugin_name Response Example ---------------- .. literalinclude:: samples/plugins/plugin-version-show-response.json :language: javascript Update plugin details ===================== .. rest_method:: PATCH /v2/plugins/{plugin_name} Updates details for a plugin. Normal response codes: 202 Error response codes: 400, 500 Request ------- .. rest_parameters:: parameters.yaml - plugin_name: url_plugin_name Request Example --------------- .. literalinclude:: samples/plugins/plugin-update-request.json :language: javascript Response Parameters ------------------- .. rest_parameters:: parameters.yaml - title: title - versions: versions - description: description_plugin - name: plugin_name Response Example ---------------- .. literalinclude:: samples/plugins/plugin-update-response.json :language: javascript sahara-12.0.0/api-ref/source/v2/image-registry.inc0000664000175000017500000000710313656752032021662 0ustar zuulzuul00000000000000.. -*- rst -*- ============== Image registry ============== Use the image registry tool to manage images, add tags to and remove tags from images, and define the user name for an instance operating system. Each plugin lists required tags for an image. To run remote operations, the Data Processing service requires a user name with which to log in to the operating system for an instance. Add tags to image ================= .. rest_method:: PUT /v2/images/{image_id}/tags Adds tags to an image. Normal response codes:202 Request ------- .. 
rest_parameters:: parameters.yaml - tags: tags - image_id: url_image_id Request Example --------------- .. literalinclude:: samples/image-registry/image-tags-add-request.json :language: javascript Show image details ================== .. rest_method:: GET /v2/images/{image_id} Shows details for an image. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - image_id: url_image_id Response Parameters ------------------- .. rest_parameters:: parameters.yaml - status: status - username: username - updated: updated - description: image_description - created: created - image: image - tags: tags - minDisk: minDisk - name: image_name - progress: progress - minRam: minRam - id: image_id - metadata: metadata Response Example ---------------- .. literalinclude:: samples/image-registry/image-show-response.json :language: javascript Register image ============== .. rest_method:: POST /v2/images/{image_id} Registers an image in the registry. Normal response codes:202 Request ------- .. rest_parameters:: parameters.yaml - username: username - description: image_description - image_id: url_image_id Request Example --------------- .. literalinclude:: samples/image-registry/image-register-request.json :language: javascript Response Parameters ------------------- .. rest_parameters:: parameters.yaml - status: status - username: username - updated: updated - description: image_description - created: created - image: image - tags: tags - minDisk: minDisk - name: image_name - progress: progress - minRam: minRam - id: image_id - metadata: metadata Unregister image ================ .. rest_method:: DELETE /v2/images/{image_id} Removes an image from the registry. Normal response codes:204 Request ------- .. rest_parameters:: parameters.yaml - image_id: url_image_id Remove tags from image ====================== .. rest_method:: DELETE /v2/images/{image_id}/tag Removes tags from an image. Normal response codes:202 Request ------- .. rest_parameters:: parameters.yaml - tags: tags - image_id: url_image_id Request Example --------------- .. literalinclude:: samples/image-registry/image-tags-delete-request.json :language: javascript List images =========== .. rest_method:: GET /v2/images Lists all images registered in the registry. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - tags: tags Response Parameters ------------------- .. rest_parameters:: parameters.yaml - status: status - username: username - updated: updated - description: image_description - created: created - image: image - tags: tags - minDisk: minDisk - name: image_name - images: images - progress: progress - minRam: minRam - id: image_id - metadata: metadata Response Example ---------------- .. literalinclude:: samples/image-registry/images-list-response.json :language: javascript sahara-12.0.0/api-ref/source/v2/parameters.yaml0000664000175000017500000005743713656752032021305 0ustar zuulzuul00000000000000# variables in header Content-Length: description: | The length of the data, in bytes. in: header required: true type: string # variables in path hints: description: | Includes configuration hints in the response. in: path required: false type: boolean job_binary_id: description: | The UUID of the job binary. in: path required: true type: string limit: description: | Maximum number of objects in response data. in: path required: false type: integer marker: description: | ID of the last element on the list which won't be in response. 
in: path required: false type: string plugin: description: | Filters the response by a plugin name. in: path required: false type: string sort_by_cluster_templates: description: | The field for sorting cluster templates. This parameter accepts the following values: ``name``, ``plugin_name``, ``plugin_version``, ``created_at``, ``updated_at``, ``id``. These values can also be prefixed with ``-`` for a descending sort. For example: ``-name``. in: path required: false type: string sort_by_clusters: description: | The field for sorting clusters. This parameter accepts the following values: ``name``, ``plugin_name``, ``plugin_version``, ``status``, ``id``. These values can also be prefixed with ``-`` for a descending sort. For example: ``-name``. in: path required: false type: string sort_by_data_sources: description: | The field for sorting data sources. This parameter accepts the following values: ``id``, ``name``, ``type``, ``created_at``, ``updated_at``. These values can also be prefixed with ``-`` for a descending sort. For example: ``-name``. in: path required: false type: string sort_by_job: description: | The field for sorting job executions. This parameter accepts the following values: ``id``, ``job_template``, ``cluster``, ``status``. These values can also be prefixed with ``-`` for a descending sort. For example: ``-cluster``. in: path required: false type: string sort_by_job_binary: description: | The field for sorting job binaries. This parameter accepts the following values: ``id``, ``name``, ``created_at``, ``updated_at``. These values can also be prefixed with ``-`` for a descending sort. For example: ``-name``. in: path required: false type: string sort_by_job_binary_internals: description: | The field for sorting job binary internals. This parameter accepts the following values: ``id``, ``name``, ``created_at``, ``updated_at``. These values can also be prefixed with ``-`` for a descending sort. For example: ``-name``. in: path required: false type: string sort_by_job_templates: description: | The field for sorting jobs. This parameter accepts the following values: ``id``, ``name``, ``type``, ``created_at``, ``updated_at``. These values can also be prefixed with ``-`` for a descending sort. For example: ``-name``. in: path required: false type: string sort_by_node_group_templates: description: | The field for sorting node group templates. This parameter accepts the following values: ``name``, ``plugin_name``, ``plugin_version``, ``created_at``, ``updated_at``, ``id``. These values can also be prefixed with ``-`` for a descending sort. For example: ``-name``. in: path required: false type: string type_2: description: | Filters the response by a job type. in: path required: false type: string url_cluster_id: description: | The ID of the cluster. in: path required: true type: string url_cluster_template_id: description: | The unique identifier of the cluster template. in: path required: true type: string url_data_source_id: description: | The UUID of the data source. in: path required: true type: string url_image_id: description: | The UUID of the image. in: path required: true type: string url_job_binary_id: description: | The UUID of the job binary. in: path required: true type: string url_job_binary_internals_id: description: | The UUID of the job binary internal. in: path required: true type: string url_job_binary_internals_name: description: | The name of the job binary internal. in: path required: true type: string url_job_id: description: | The UUID of the job.
in: path required: true type: string url_job_template_id: description: | The UUID of the template job. in: path required: true type: string url_job_type: description: | The job type. in: path required: true type: string url_node_group_template_id: description: | The UUID of the node group template. in: path required: true type: string url_plugin_name: description: | Name of the plugin. in: path required: true type: string url_project_id: description: | UUID of the project. in: path required: true type: string version: description: | Filters the response by a plugin version. in: path required: true type: string version_1: description: | Version of the plugin. in: path required: false type: string # variables in body args: description: | The list of arguments. in: body required: true type: array auto_security_group: description: | If set to ``True``, the cluster group is automatically secured. in: body required: true type: boolean availability_zone: description: | The availability of the node in the cluster. in: body required: true type: string binaries: description: | The list of job binary internal objects. in: body required: true type: array cluster_configs: description: | A set of key and value pairs that contain the cluster configuration. in: body required: true type: object cluster_id: description: | The UUID of the cluster. in: body required: true type: string cluster_template_description: description: | Description of the cluster template in: body required: false type: string cluster_template_id: description: | The UUID of the cluster template. in: body required: true type: string cluster_template_name: description: | The name of the cluster template. in: body required: true type: string clusters: description: | The list of clusters. in: body required: true type: array configs: description: | The mappings of the job tasks. in: body required: true type: object count: description: | The number of nodes in the cluster. in: body required: true type: integer created: description: | The date and time when the image was created. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm For example, ``2015-08-27T09:49:58-05:00``. The ``±hh:mm`` value, if included, is the time zone as an offset from UTC. in: body required: true type: string created_at: description: | The date and time when the cluster was created. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. in: body required: true type: string created_at_1: description: | The date and time when the object was created. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. in: body required: true type: string created_at_2: description: | The date and time when the node was created in the cluster. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. in: body required: true type: string created_at_3: description: | The date and time when the job execution object was created. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. 
in: body required: true type: string data_source_description: description: | The description of the data source object. in: body required: true type: string data_source_id: description: | The UUID of the data source. in: body required: true type: string data_source_name: description: | The name of the data source. in: body required: true type: string data_source_urls: description: | The data source URLs. in: body required: true type: object datasize: description: | The size of the data stored in the internal database. in: body required: true type: integer default_image_id: description: | The default ID of the image. in: body required: true type: string description: description: | The description of the cluster. in: body required: true type: string description_3: description: | The description of the node in the cluster. in: body required: true type: string description_7: description: | Description of the image. in: body required: false type: string description_plugin: description: | The full description of the plugin. in: body required: true type: string domain_name: description: | Domain name for internal and external hostname resolution. Required if DNS service is enabled. in: body required: false type: string end_time: description: | The end date and time of the job execution. The date and time when the job completed execution. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. in: body required: true type: string flavor_id: description: | The ID of the flavor. in: body required: true type: string floating_ip_pool: description: | The UUID of the pool in the template. in: body required: true type: string force: description: | If set to ``true``, Sahara will force cluster deletion. in: body required: false type: boolean id: description: | The UUID of the cluster. in: body required: true type: string id_1: description: | The ID of the object. in: body required: true type: string image: description: | A set of key and value pairs that contain image properties. in: body required: true type: object image_description: description: | The description of the image. in: body required: true type: string image_id: description: | The UUID of the image. in: body required: true type: string image_name: description: | The name of the operating system image. in: body required: true type: string images: description: | The list of images and their properties. in: body required: true type: array info: description: | A set of key and value pairs that contain cluster information. in: body required: true type: object info_1: description: | The report of the executed job objects. in: body required: true type: object input_id: description: | The UUID of the input. in: body required: true type: string interface: description: | The interfaces of the job object. in: body required: true type: array is_default: description: | If set to ``true``, the cluster is the default cluster. in: body required: true type: boolean is_protected: description: | If set to ``true``, the cluster is protected. in: body required: true type: boolean is_protected_2: description: | If set to ``true``, the node is protected. in: body required: true type: boolean is_protected_3: description: | If set to ``true``, the job execution object is protected. in: body required: true type: boolean is_proxy_gateway: description: | If set to ``true``, the node is the proxy gateway. 
in: body required: true type: boolean is_public: description: | If set to ``true``, the cluster is public. in: body required: true type: boolean is_transient: description: | If set to ``true``, the cluster is transient. in: body required: true type: boolean job: description: | A set of key and value pairs that contain the job object. in: body required: true type: object job_binary_description: description: | The description of the job binary object. in: body required: true type: string job_binary_internals_id: description: | The UUID of the job binary internal. in: body required: true type: string job_binary_internals_name: description: | The name of the job binary internal. in: body required: true type: string job_binary_name: description: | The name of the object. in: body required: true type: string job_config: description: | The job configuration. in: body required: true type: string job_description: description: | The description of the job object. in: body required: true type: string job_id: description: | The UUID of the job object. in: body required: true type: string job_is_public: description: | If set to ``true``, the job object is public. in: body required: true type: boolean job_name: description: | The name of the job object. in: body required: true type: string job_template_id: description: | The UUID of the job template object. in: body required: true type: string job_template_name: description: | The name of the job template object. in: body required: true type: string job_templates: description: | The list of the job templates. in: body required: true type: array job_types: description: | The list of plugins and their job types. in: body required: true type: array jobs: description: | The list of job objects. in: body required: true type: array libs: description: | The list of the job object properties. in: body required: true type: array mains: description: | The list of the job object and their properties. in: body required: true type: array management_public_key: description: | The SSH key for the management network. in: body required: true type: string markers: description: | The markers of previous and following pages of data. This field exists only if ``limit`` is passed to request. in: body required: false type: object metadata: description: | A set of key and value pairs that contain image metadata. in: body required: true type: object minDisk: description: | The minimum disk space, in GB. in: body required: true type: integer minRam: description: | The minimum amount of random access memory (RAM) for the image, in GB. in: body required: true type: integer name: description: | The name of the cluster. in: body required: true type: string name_1: description: | The name of the object. in: body required: true type: string neutron_management_network: description: | The UUID of the neutron management network. in: body required: true type: string next: description: | The marker of next page of list data. in: body required: false type: string node_configs: description: | A set of key and value pairs that contain the node configuration in the cluster. in: body required: true type: object node_group_template_description: description: | Description of the node group template in: body required: false type: string node_group_template_id: description: | The UUID of the node group template. in: body required: true type: string node_group_template_name: description: | The name of the node group template. 
in: body required: true type: string node_groups: description: | The detail properties of the node in key-value pairs. in: body required: true type: object node_processes: description: | The list of the processes performed by the node. in: body required: true type: array object_is_protected: description: | If set to ``true``, the object is protected. in: body required: true type: boolean object_is_public: description: | If set to ``true``, the object is public. in: body required: true type: boolean object_shares: description: | The sharing of resources in the cluster. in: body required: true type: string oozie_job_id: description: | The UUID of the ``oozie_job``. in: body required: true type: string output_id: description: | The UUID of the output of job execution object. in: body required: true type: string params: description: | The mappings of values to the parameters. in: body required: true type: object plugin_name: description: | The name of the plugin. in: body required: true type: string plugin_version: description: | The version of the Plugin used in the cluster. in: body required: true type: string plugin_version_1: description: | The version of the Plugin. in: body required: true type: string plugins: description: | The list of plugins. in: body required: true type: array prev: description: | The marker of previous page. May be ``null`` if previous page is first or if current page is first. in: body required: false type: string progress: description: | A progress indicator, as a percentage value, for the amount of image content that has been processed. in: body required: true type: integer project_id: description: | The UUID of the project. in: body required: true type: string provision_progress: description: | A list of the cluster progresses. in: body required: true type: array return_code: description: | The code returned after job has executed. in: body required: true type: string security_groups: description: | The security groups of the node. in: body required: true type: string shares: description: | The shares of the cluster. in: body required: true type: string start_time: description: | The date and time when the job started. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. in: body required: true type: string status: description: | The status of the cluster. in: body required: true type: string status_1: description: | The current status of the image. in: body required: true type: string status_description: description: | The description of the cluster status. in: body required: true type: string tags: description: | List of tags to add. in: body required: true type: array tags_1: description: | Lists images only with specific tag. Can be used multiple times. in: body required: false type: string tags_2: description: | One or more image tags. in: body required: true type: array tags_3: description: | List of tags to remove. in: body required: true type: array tenant_id: description: | The UUID of the tenant. in: body required: true type: string title: description: | The title of the plugin. in: body required: true type: string trust_id: description: | The id of the trust. in: body required: true type: integer type: description: | The type of the data source object. in: body required: true type: string type_1: description: | The type of the job object. 
in: body required: true type: string updated: description: | The date and time when the image was updated. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm For example, ``2015-08-27T09:49:58-05:00``. The ``±hh:mm`` value, if included, is the time zone as an offset from UTC. in: body required: true type: string updated_at: description: | The date and time when the cluster was updated. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. in: body required: true type: string updated_at_1: description: | The date and time when the object was updated. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. in: body required: true type: string updated_at_2: description: | The date and time when the node was updated. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. in: body required: true type: string updated_at_3: description: | The date and time when the job execution object was updated. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. in: body required: true type: string url: description: | The url of the data source object. in: body required: true type: string url_1: description: | The url of the job binary object. in: body required: true type: string use_autoconfig: description: | If set to ``true``, the cluster is auto configured. in: body required: true type: boolean use_autoconfig_1: description: | If set to ``true``, the node is auto configured. in: body required: true type: boolean username: description: | The name of the user for the image. in: body required: true type: string username_1: description: | The user name to log in to an instance operating system for remote operations execution. in: body required: true type: string versions: description: | The list of plugin versions. in: body required: true type: array volume_local_to_instance: description: | If set to ``true``, the volume is local to the instance. in: body required: true type: boolean volume_mount_prefix: description: | The mount point of the node. in: body required: true type: string volume_type: description: | The type of volume in a node. in: body required: true type: string volumes_availability_zone: description: | The availability zone of the volumes. in: body required: true type: string volumes_per_node: description: | The number of volumes for the node. in: body required: true type: integer volumes_size: description: | The size of the volumes in a node. in: body required: true type: integer sahara-12.0.0/api-ref/source/v2/job-types.inc0000664000175000017500000000201213656752032020640 0ustar zuulzuul00000000000000.. -*- rst -*- ========= Job types ========= Each plugin that supports EDP also supports specific job types. Different versions of a plugin might actually support different job types. Configuration options vary by plugin, version, and job type. The job types provide information about which plugins support which job types and how to configure the job types. List job types ============== .. 
rest_method:: GET /v2/job-types Lists all job types. You can use query parameters to filter the response. Normal response codes: 200 Error response codes: Request ------- .. rest_parameters:: parameters.yaml - plugin: plugin - version: version - type: type - hints: hints Response Parameters ------------------- .. rest_parameters:: parameters.yaml - versions: versions - title: title - description: description_plugin - job_types: job_types - name: plugin_name Response Example ---------------- .. literalinclude:: samples/job-types/job-types-list-response.json :language: javascript sahara-12.0.0/api-ref/source/v2/cluster-templates.inc0000664000175000017500000001134613656752032022413 0ustar zuulzuul00000000000000.. -*- rst -*- ================= Cluster templates ================= A cluster template configures a cluster. A cluster template lists node groups with the number of instances in each group. You can also define cluster-scoped configurations in a cluster template. Show cluster template details ============================= .. rest_method:: GET /v2/cluster-templates/{cluster_template_id} Shows details for a cluster template. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - cluster_template_id: url_cluster_template_id Response Parameters ------------------- .. rest_parameters:: parameters.yaml - description: cluster_template_description - use_autoconfig: use_autoconfig - cluster_configs: cluster_configs - created_at: created_at - default_image_id: default_image_id - updated_at: updated_at - plugin_name: plugin_name - is_default: is_default - is_protected: object_is_protected - shares: object_shares - domain_name: domain_name - project_id: project_id - node_groups: node_groups - is_public: object_is_public - plugin_version: plugin_version - id: cluster_template_id - name: cluster_template_name Response Example ---------------- .. literalinclude:: samples/cluster-templates/cluster-templates-list-response.json :language: javascript Update cluster templates ======================== .. rest_method:: PATCH /v2/cluster-templates/{cluster_template_id} Updates a cluster template. Normal response codes:202 Request ------- .. rest_parameters:: parameters.yaml - cluster_template_id: cluster_template_id Request Example --------------- .. literalinclude:: samples/cluster-templates/cluster-template-update-request.json :language: javascript Response Parameters ------------------- .. rest_parameters:: parameters.yaml - description: cluster_template_description - use_autoconfig: use_autoconfig - cluster_configs: cluster_configs - created_at: created_at - default_image_id: default_image_id - updated_at: updated_at - plugin_name: plugin_name - is_default: is_default - is_protected: object_is_protected - shares: object_shares - domain_name: domain_name - project_id: project_id - node_groups: node_groups - is_public: object_is_public - plugin_version: plugin_version - id: cluster_template_id - name: cluster_template_name Delete cluster template ======================= .. rest_method:: DELETE /v2/cluster-templates/{cluster_template_id} Deletes a cluster template. Normal response codes:204 Request ------- .. rest_parameters:: parameters.yaml - cluster_template_id: cluster_template_id List cluster templates ====================== .. rest_method:: GET /v2/cluster-templates Lists available cluster templates. Normal response codes: 200 Request ------- .. 
rest_parameters:: parameters.yaml - limit: limit - marker: marker - sort_by: sort_by_cluster_templates Response Parameters ------------------- .. rest_parameters:: parameters.yaml - markers: markers - prev: prev - next: next - description: cluster_template_description - use_autoconfig: use_autoconfig - cluster_configs: cluster_configs - created_at: created_at - default_image_id: default_image_id - updated_at: updated_at - plugin_name: plugin_name - is_default: is_default - is_protected: object_is_protected - shares: object_shares - domain_name: domain_name - project_id: project_id - node_groups: node_groups - is_public: object_is_public - plugin_version: plugin_version - id: cluster_template_id - name: cluster_template_name Response Example ---------------- .. rest_method:: GET /v2/cluster-templates?limit=2 .. literalinclude:: samples/cluster-templates/cluster-templates-list-response.json :language: javascript Create cluster templates ======================== .. rest_method:: POST /v2/cluster-templates Creates a cluster template. Normal response codes:202 Request Example --------------- .. literalinclude:: samples/cluster-templates/cluster-template-create-request.json :language: javascript Response Parameters ------------------- .. rest_parameters:: parameters.yaml - description: cluster_template_description - use_autoconfig: use_autoconfig - cluster_configs: cluster_configs - created_at: created_at - default_image_id: default_image_id - updated_at: updated_at - plugin_name: plugin_name - is_default: is_default - is_protected: object_is_protected - shares: object_shares - domain_name: domain_name - project_id: project_id - node_groups: node_groups - is_public: object_is_public - plugin_version: plugin_version - id: cluster_template_id - name: cluster_template_name sahara-12.0.0/api-ref/source/v2/job-binaries.inc0000664000175000017500000000756513656752032021312 0ustar zuulzuul00000000000000.. -*- rst -*- ============ Job binaries ============ Job binary objects represent data processing applications and libraries that are stored in Object Storage service(S3 or Swift) or in Manila Shares. List job binaries ================= .. rest_method:: GET /v2/job-binaries Lists the available job binaries. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - limit: limit - marker: marker - sort_by: sort_by_job_binary Response Parameters ------------------- .. rest_parameters:: parameters.yaml - markers: markers - prev: prev - next: next - description: job_binary_description - url: url - project_id: project_id - created_at: created_at - updated_at: updated_at - is_protected: object_is_protected - is_public: object_is_public - binaries: binaries - id: job_binary_id - name: job_binary_name Response Example ---------------- .. rest_method:: GET /v2/job-binaries?sort_by=created_at .. literalinclude:: samples/job-binaries/list-response.json :language: javascript Create job binary ================= .. rest_method:: POST /v2/job-binaries Creates a job binary. Normal response codes:202 Request Example --------------- .. literalinclude:: samples/job-binaries/create-request.json :language: javascript Response Parameters ------------------- .. rest_parameters:: parameters.yaml - description: job_binary_description - url: url - project_id: project_id - created_at: created_at - updated_at: updated_at - is_protected: object_is_protected - is_public: object_is_public - id: job_binary_id - name: job_binary_name Show job binary details ======================= .. 
rest_method:: GET /v2/job-binaries/{job_binary_id} Shows details for a job binary. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - job_binary_id: url_job_binary_id Response Parameters ------------------- .. rest_parameters:: parameters.yaml - description: job_binary_description - url: url - project_id: project_id - created_at: created_at - updated_at: updated_at - is_protected: object_is_protected - is_public: object_is_public - id: job_binary_id - name: job_binary_name Response Example ---------------- .. literalinclude:: samples/job-binaries/show-response.json :language: javascript Delete job binary ================= .. rest_method:: DELETE /v2/job-binaries/{job_binary_id} Deletes a job binary. Normal response codes:204 Request ------- .. rest_parameters:: parameters.yaml - job_binary_id: url_job_binary_id Update job binary ================= .. rest_method:: PATCH /v2/job-binaries/{job_binary_id} Updates a job binary. Normal response codes:202 Request ------- .. rest_parameters:: parameters.yaml - job_binary_id: url_job_binary_id Request Example --------------- .. literalinclude:: samples/job-binaries/update-request.json :language: javascript Show job binary data ==================== .. rest_method:: GET /v2/job-binaries/{job_binary_id}/data Shows data for a job binary. The response body shows the job binary raw data and the response headers show the data length. Example response: :: HTTP/1.1 200 OK Connection: keep-alive Content-Length: 161 Content-Type: text/html; charset=utf-8 Date: Sat, 28 Mar 2016 02:42:48 GMT A = load '$INPUT' using PigStorage(':') as (fruit: chararray); B = foreach A generate com.hadoopbook.pig.Trim(fruit); store B into '$OUTPUT' USING PigStorage(); Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - job_binary_id: url_job_binary_id Response Parameters ------------------- .. rest_parameters:: parameters.yaml - Content-Length: Content-Length Response Example ---------------- .. literalinclude:: samples/job-binaries/show-data-response :language: text sahara-12.0.0/api-ref/source/v2/clusters.inc0000664000175000017500000001143013656752032020574 0ustar zuulzuul00000000000000.. -*- rst -*- ======== Clusters ======== A cluster is a group of nodes with the same configuration. List available clusters ======================= .. rest_method:: GET /v2/clusters Lists available clusters. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - limit: limit - marker: marker - sort_by: sort_by_clusters Response Parameters ------------------- .. rest_parameters:: parameters.yaml - markers: markers - prev: prev - next: next - count: count - info: info - cluster_template_id: cluster_template_id - is_transient: is_transient - provision_progress: provision_progress - status: status - neutron_management_network: neutron_management_network - clusters: clusters - management_public_key: management_public_key - status_description: status_description - trust_id: trust_id - domain_name: domain_name Response Example ---------------- .. rest_method:: GET /v2/clusters .. literalinclude:: samples/clusters/clusters-list-response.json :language: javascript Create cluster ============== .. rest_method:: POST /v2/clusters Creates a cluster. Normal response codes: 202 Request Example --------------- .. literalinclude:: samples/clusters/cluster-create-request.json :language: javascript Response Parameters ------------------- .. 
rest_parameters:: parameters.yaml - count: count - info: info - cluster_template_id: cluster_template_id - is_transient: is_transient - provision_progress: provision_progress - status: status - neutron_management_network: neutron_management_network - management_public_key: management_public_key - status_description: status_description - trust_id: trust_id - domain_name: domain_name Show details of a cluster ========================= .. rest_method:: GET /v2/clusters/{cluster_id} Shows details for a cluster, by ID. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - cluster_id: url_cluster_id Response Parameters ------------------- .. rest_parameters:: parameters.yaml - count: count - info: info - cluster_template_id: cluster_template_id - is_transient: is_transient - provision_progress: provision_progress - status: status - neutron_management_network: neutron_management_network - management_public_key: management_public_key - status_description: status_description - trust_id: trust_id - domain_name: domain_name Response Example ---------------- .. literalinclude:: samples/clusters/cluster-show-response.json :language: javascript Delete a cluster ================ .. rest_method:: DELETE /v2/clusters/{cluster_id} Deletes a cluster. Normal response codes: 204 or 200 Request ------- .. rest_parameters:: parameters.yaml - cluster_id: url_cluster_id - force: force Scale cluster ============= .. rest_method:: PUT /v2/clusters/{cluster_id} Scales a cluster. Normal response codes: 202 Request ------- .. rest_parameters:: parameters.yaml - cluster_id: cluster_id Request Example --------------- .. literalinclude:: samples/clusters/cluster-scale-request.json :language: javascript Response Parameters ------------------- .. rest_parameters:: parameters.yaml - count: count - info: info - cluster_template_id: cluster_template_id - is_transient: is_transient - provision_progress: provision_progress - status: status - neutron_management_network: neutron_management_network - management_public_key: management_public_key - status_description: status_description - trust_id: trust_id - domain_name: domain_name Update cluster ============== .. rest_method:: PATCH /v2/clusters/{cluster_id} Updates a cluster. Normal response codes: 202 Request ------- .. rest_parameters:: parameters.yaml - cluster_id: url_cluster_id Request Example --------------- .. literalinclude:: samples/clusters/cluster-update-request.json :language: javascript Response Parameters ------------------- .. rest_parameters:: parameters.yaml - count: count - info: info - cluster_template_id: cluster_template_id - is_transient: is_transient - provision_progress: provision_progress - status: status - neutron_management_network: neutron_management_network - management_public_key: management_public_key - status_description: status_description - trust_id: trust_id - domain_name: domain_name Show progress ============= .. rest_method:: GET /v2/clusters/{cluster_id} Shows provisioning progress for a cluster. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - cluster_id: url_cluster_id Response Example ---------------- .. 
literalinclude:: samples/event-log/cluster-progress-response.json :language: javascript sahara-12.0.0/api-ref/source/v2/samples/0000775000175000017500000000000013656752227017710 5ustar zuulzuul00000000000000sahara-12.0.0/api-ref/source/v2/samples/event-log/0000775000175000017500000000000013656752227021610 5ustar zuulzuul00000000000000sahara-12.0.0/api-ref/source/v2/samples/event-log/cluster-progress-response.json0000664000175000017500000000613213656752032027656 0ustar zuulzuul00000000000000{ "status": "Error", "neutron_management_network": "7e31648b-4b2e-4f32-9b0a-113581c27076", "is_transient": false, "description": "", "user_keypair_id": "vgridnev", "updated_at": "2015-03-31 14:10:59", "plugin_name": "spark", "provision_progress": [ { "successful": false, "project_id": "9cd1314a0a31493282b6712b76a8fcda", "created_at": "2015-03-31 14:10:20", "step_type": "Engine: create cluster", "updated_at": "2015-03-31 14:10:35", "events": [ { "instance_name": "sample-worker-spark-004", "successful": false, "created_at": "2015-03-31 14:10:35", "updated_at": null, "event_info": "Node sample-worker-spark-004 has error status\nError ID: 3e238c82-d1f5-4560-8ed8-691e923e16a0", "instance_id": "b5ba5ba8-e9c1-47f7-9355-3ce0ec0e449d", "node_group_id": "145cf2fb-dcdf-42af-a4b9-a4047d2919d4", "step_id": "3f243c67-2c27-47c7-a0c0-0834ad17f8b6", "id": "34afcfc7-bdb0-43cb-b142-283d560dc6ad" }, { "instance_name": "sample-worker-spark-001", "successful": true, "created_at": "2015-03-31 14:10:35", "updated_at": null, "event_info": null, "instance_id": "c532ab71-38da-475a-95f8-f8eb93b8f1c2", "node_group_id": "145cf2fb-dcdf-42af-a4b9-a4047d2919d4", "step_id": "3f243c67-2c27-47c7-a0c0-0834ad17f8b6", "id": "4ba50414-5216-4161-bc7a-12716122b99d" } ], "cluster_id": "c26ec982-ba6b-4d75-818c-a50240164af0", "step_name": "Wait for instances to become active", "total": 5, "id": "3f243c67-2c27-47c7-a0c0-0834ad17f8b6" }, { "successful": true, "project_id": "9cd1314a0a31493282b6712b76a8fcda", "created_at": "2015-03-31 14:10:12", "step_type": "Engine: create cluster", "updated_at": "2015-03-31 14:10:19", "events": [], "cluster_id": "c26ec982-ba6b-4d75-818c-a50240164af0", "step_name": "Run instances", "total": 5, "id": "407ba50a-c799-46af-9dfb-6aa5f6ade426" } ], "anti_affinity": [], "node_groups": [], "management_public_key": "Sahara", "status_description": "Creating cluster failed for the following reason(s): Node sample-worker-spark-004 has error status\nError ID: 3e238c82-d1f5-4560-8ed8-691e923e16a0", "plugin_version": "1.0.0", "id": "c26ec982-ba6b-4d75-1f8c-a50240164af0", "trust_id": null, "info": {}, "cluster_template_id": "5a9a09a3-9349-43bd-9058-16c401fad2d5", "name": "sample", "cluster_configs": {}, "created_at": "2015-03-31 14:10:07", "default_image_id": "e6a6c5da-67be-4017-a7d2-81f466efe67e", "project_id": "9cd1314a0a31493282b6712b76a8fcda" } sahara-12.0.0/api-ref/source/v2/samples/job-types/0000775000175000017500000000000013656752227021624 5ustar zuulzuul00000000000000sahara-12.0.0/api-ref/source/v2/samples/job-types/job-types-list-response.json0000664000175000017500000002117413656752032027237 0ustar zuulzuul00000000000000{ "job_types": [ { "plugins": [ { "description": "The Apache Vanilla plugin provides the ability to launch upstream Vanilla Apache Hadoop cluster without any management consoles. 
It can also deploy the Oozie component.", "versions": { "1.2.1": {}, "2.6.0": {} }, "title": "Vanilla Apache Hadoop", "name": "vanilla" }, { "description": "The Hortonworks Sahara plugin automates the deployment of the Hortonworks Data Platform (HDP) on OpenStack.", "versions": { "1.3.2": {}, "2.0.6": {} }, "title": "Hortonworks Data Platform", "name": "hdp" }, { "description": "The Cloudera Sahara plugin provides the ability to launch the Cloudera distribution of Apache Hadoop (CDH) with Cloudera Manager management console.", "versions": { "5": {}, "5.3.0": {} }, "title": "Cloudera Plugin", "name": "cdh" } ], "name": "Hive" }, { "plugins": [ { "description": "The Apache Vanilla plugin provides the ability to launch upstream Vanilla Apache Hadoop cluster without any management consoles. It can also deploy the Oozie component.", "versions": { "1.2.1": {}, "2.6.0": {} }, "title": "Vanilla Apache Hadoop", "name": "vanilla" }, { "description": "The Hortonworks Sahara plugin automates the deployment of the Hortonworks Data Platform (HDP) on OpenStack.", "versions": { "1.3.2": {}, "2.0.6": {} }, "title": "Hortonworks Data Platform", "name": "hdp" }, { "description": "The Cloudera Sahara plugin provides the ability to launch the Cloudera distribution of Apache Hadoop (CDH) with Cloudera Manager management console.", "versions": { "5": {}, "5.3.0": {} }, "title": "Cloudera Plugin", "name": "cdh" } ], "name": "Java" }, { "plugins": [ { "description": "The Apache Vanilla plugin provides the ability to launch upstream Vanilla Apache Hadoop cluster without any management consoles. It can also deploy the Oozie component.", "versions": { "1.2.1": {}, "2.6.0": {} }, "title": "Vanilla Apache Hadoop", "name": "vanilla" }, { "description": "The Hortonworks Sahara plugin automates the deployment of the Hortonworks Data Platform (HDP) on OpenStack.", "versions": { "1.3.2": {}, "2.0.6": {} }, "title": "Hortonworks Data Platform", "name": "hdp" }, { "description": "The Cloudera Sahara plugin provides the ability to launch the Cloudera distribution of Apache Hadoop (CDH) with Cloudera Manager management console.", "versions": { "5": {}, "5.3.0": {} }, "title": "Cloudera Plugin", "name": "cdh" } ], "name": "MapReduce" }, { "plugins": [ { "description": "The Apache Vanilla plugin provides the ability to launch upstream Vanilla Apache Hadoop cluster without any management consoles. It can also deploy the Oozie component.", "versions": { "1.2.1": {}, "2.6.0": {} }, "title": "Vanilla Apache Hadoop", "name": "vanilla" }, { "description": "The Hortonworks Sahara plugin automates the deployment of the Hortonworks Data Platform (HDP) on OpenStack.", "versions": { "1.3.2": {}, "2.0.6": {} }, "title": "Hortonworks Data Platform", "name": "hdp" }, { "description": "The Cloudera Sahara plugin provides the ability to launch the Cloudera distribution of Apache Hadoop (CDH) with Cloudera Manager management console.", "versions": { "5": {}, "5.3.0": {} }, "title": "Cloudera Plugin", "name": "cdh" } ], "name": "MapReduce.Streaming" }, { "plugins": [ { "description": "The Apache Vanilla plugin provides the ability to launch upstream Vanilla Apache Hadoop cluster without any management consoles. 
It can also deploy the Oozie component.", "versions": { "1.2.1": {}, "2.6.0": {} }, "title": "Vanilla Apache Hadoop", "name": "vanilla" }, { "description": "The Hortonworks Sahara plugin automates the deployment of the Hortonworks Data Platform (HDP) on OpenStack.", "versions": { "1.3.2": {}, "2.0.6": {} }, "title": "Hortonworks Data Platform", "name": "hdp" }, { "description": "The Cloudera Sahara plugin provides the ability to launch the Cloudera distribution of Apache Hadoop (CDH) with Cloudera Manager management console.", "versions": { "5": {}, "5.3.0": {} }, "title": "Cloudera Plugin", "name": "cdh" } ], "name": "Pig" }, { "plugins": [ { "description": "The Apache Vanilla plugin provides the ability to launch upstream Vanilla Apache Hadoop cluster without any management consoles. It can also deploy the Oozie component.", "versions": { "1.2.1": {}, "2.6.0": {} }, "title": "Vanilla Apache Hadoop", "name": "vanilla" }, { "description": "The Hortonworks Sahara plugin automates the deployment of the Hortonworks Data Platform (HDP) on OpenStack.", "versions": { "1.3.2": {}, "2.0.6": {} }, "title": "Hortonworks Data Platform", "name": "hdp" }, { "description": "The Cloudera Sahara plugin provides the ability to launch the Cloudera distribution of Apache Hadoop (CDH) with Cloudera Manager management console.", "versions": { "5": {}, "5.3.0": {} }, "title": "Cloudera Plugin", "name": "cdh" } ], "name": "Shell" }, { "plugins": [ { "description": "This plugin provides an ability to launch Spark on Hadoop CDH cluster without any management consoles.", "versions": { "1.0.0": {} }, "title": "Apache Spark", "name": "spark" } ], "name": "Spark" } ] } sahara-12.0.0/api-ref/source/v2/samples/cluster-templates/0000775000175000017500000000000013656752227023365 5ustar zuulzuul00000000000000sahara-12.0.0/api-ref/source/v2/samples/cluster-templates/cluster-template-update-request.json0000664000175000017500000000034313656752032032512 0ustar zuulzuul00000000000000{ "description": "Updated template", "plugin_name": "vanilla", "plugin_version": "2.7.1", "name": "vanilla-updated", "cluster_configs": { "HDFS": { "dfs.replication": 2 } } } sahara-12.0.0/api-ref/source/v2/samples/cluster-templates/cluster-template-create-request.json0000664000175000017500000000065313656752032032477 0ustar zuulzuul00000000000000{ "plugin_name": "vanilla", "plugin_version": "2.7.1", "node_groups": [ { "name": "worker", "count": 3, "node_group_template_id": "846edb31-add5-46e6-a4ee-a4c339f99251" }, { "name": "master", "count": 1, "node_group_template_id": "0bb9f1a4-0c44-4dc5-9452-6741c62ed9ae" } ], "name": "cluster-template" } sahara-12.0.0/api-ref/source/v2/samples/cluster-templates/cluster-template-update-response.json0000664000175000017500000000437113656752032032665 0ustar zuulzuul00000000000000{ "cluster_template": { "is_public": false, "anti_affinity": [], "name": "vanilla-updated", "created_at": "2015-08-21T08:41:24", "project_id": "808d5032ea0446889097723bfc8e919d", "cluster_configs": { "HDFS": { "dfs.replication": 2 } }, "shares": null, "id": "84d47e85-6094-473f-bf6d-5a7e6e86564e", "default_image_id": null, "is_default": false, "updated_at": "2015-09-14T10:45:57", "plugin_name": "vanilla", "node_groups": [ { "image_id": null, "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": { "YARN": {}, "JobFlow": {}, "MapReduce": {}, "Hive": {}, "Hadoop": {}, "HDFS": {} }, "auto_security_group": true, "availability_zone": "", "count": 1, "flavor_id": "3", "id": 
"57b966ab-617e-4735-bf60-0cb991208a52", "security_groups": [], "use_autoconfig": true, "volumes_availability_zone": null, "created_at": "2015-08-21T08:41:24", "node_group_template_id": "a5533187-3f14-42c3-ba3a-196c13fe0fb5", "updated_at": null, "volumes_per_node": 0, "is_proxy_gateway": false, "name": "all", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "namenode", "datanode", "historyserver", "resourcemanager", "nodemanager", "oozie" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null } ], "neutron_management_network": null, "domain_name": null, "plugin_version": "2.7.1", "use_autoconfig": true, "description": "Updated template", "is_protected": false } } sahara-12.0.0/api-ref/source/v2/samples/cluster-templates/cluster-template-create-response.json0000664000175000017500000000574613656752032032655 0ustar zuulzuul00000000000000{ "cluster_template": { "is_public": false, "anti_affinity": [], "name": "cluster-template", "created_at": "2015-09-14T10:38:44", "project_id": "808d5032ea0446889097723bfc8e919d", "cluster_configs": {}, "shares": null, "id": "57c92a7c-5c6a-42ea-9c6f-9f40a5aa4b36", "default_image_id": null, "is_default": false, "updated_at": null, "plugin_name": "vanilla", "node_groups": [ { "image_id": null, "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": {}, "auto_security_group": false, "availability_zone": null, "count": 1, "flavor_id": "2", "id": "1751c04e-8f39-467e-a421-480961172d4b", "security_groups": null, "use_autoconfig": true, "volumes_availability_zone": null, "created_at": "2015-09-14T10:38:44", "node_group_template_id": "0bb9f1a4-0c44-4dc5-9452-6741c62ed9ae", "updated_at": null, "volumes_per_node": 0, "is_proxy_gateway": false, "name": "master", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "namenode", "resourcemanager", "oozie", "historyserver" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null }, { "image_id": null, "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": {}, "auto_security_group": false, "availability_zone": null, "count": 3, "flavor_id": "2", "id": "3ee85068-c455-4391-9db2-b54a20b99df3", "security_groups": null, "use_autoconfig": true, "volumes_availability_zone": null, "created_at": "2015-09-14T10:38:44", "node_group_template_id": "846edb31-add5-46e6-a4ee-a4c339f99251", "updated_at": null, "volumes_per_node": 0, "is_proxy_gateway": false, "name": "worker", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "datanode", "nodemanager" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null } ], "neutron_management_network": null, "domain_name": null, "plugin_version": "2.7.1", "use_autoconfig": true, "description": null, "is_protected": false } } sahara-12.0.0/api-ref/source/v2/samples/cluster-templates/cluster-templates-list-response.json0000664000175000017500000001267513656752032032547 0ustar zuulzuul00000000000000{ "cluster_templates": [ { "is_public": false, "anti_affinity": [], "name": "cluster-template", "created_at": "2015-09-14T10:38:44", "project_id": "808d5032ea0446889097723bfc8e919d", "cluster_configs": {}, "shares": null, "id": "57c92a7c-5c6a-42ea-9c6f-9f40a5aa4b36", "default_image_id": null, "is_default": false, "updated_at": null, "plugin_name": "vanilla", "node_groups": [ { "image_id": null, "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": {}, "auto_security_group": false, "availability_zone": null, "count": 
1, "flavor_id": "2", "id": "1751c04e-8f39-467e-a421-480961172d4b", "security_groups": null, "use_autoconfig": true, "volumes_availability_zone": null, "created_at": "2015-09-14T10:38:44", "node_group_template_id": "0bb9f1a4-0c44-4dc5-9452-6741c62ed9ae", "updated_at": null, "volumes_per_node": 0, "is_proxy_gateway": false, "name": "master", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "namenode", "resourcemanager", "oozie", "historyserver" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null }, { "image_id": null, "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": {}, "auto_security_group": false, "availability_zone": null, "count": 3, "flavor_id": "2", "id": "3ee85068-c455-4391-9db2-b54a20b99df3", "security_groups": null, "use_autoconfig": true, "volumes_availability_zone": null, "created_at": "2015-09-14T10:38:44", "node_group_template_id": "846edb31-add5-46e6-a4ee-a4c339f99251", "updated_at": null, "volumes_per_node": 0, "is_proxy_gateway": false, "name": "worker", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "datanode", "nodemanager" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null } ], "neutron_management_network": "b1610452-2933-46b0-bf31-660cfa5621bd", "domain_name": null, "plugin_version": "2.7.1", "use_autoconfig": true, "description": null, "is_protected": false }, { "is_public": true, "anti_affinity": [], "name": "asd", "created_at": "2015-08-18T08:39:39", "project_id": "808d5032ea0446889097723bfc8e919d", "cluster_configs": { "general": {} }, "shares": null, "id": "5a9c787c-2078-4f7d-9a66-27759be9051b", "default_image_id": null, "is_default": false, "updated_at": "2015-09-14T08:41:15", "plugin_name": "vanilla", "node_groups": [ { "image_id": null, "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": {}, "auto_security_group": true, "availability_zone": "", "count": 1, "flavor_id": "2", "id": "a65864dd-3f99-4d29-a011-f7711cc23fa0", "security_groups": [], "use_autoconfig": true, "volumes_availability_zone": null, "created_at": "2015-08-18T08:39:39", "node_group_template_id": "42ce49de-1b8f-41d5-8f4a-244ec0826d92", "updated_at": null, "volumes_per_node": 1, "is_proxy_gateway": false, "name": "asd", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "namenode", "jobtracker" ], "volumes_size": 10, "volume_local_to_instance": false, "volume_type": null } ], "neutron_management_network": null, "domain_name": null, "plugin_version": "2.7.1", "use_autoconfig": true, "description": "", "is_protected": false } ], "markers": { "prev": null, "next": "2c76e0d3-56cd-4d28-bb4f-4808e538c7b9" } } sahara-12.0.0/api-ref/source/v2/samples/cluster-templates/cluster-template-show-response.json0000664000175000017500000000601013656752032032353 0ustar zuulzuul00000000000000{ "cluster_template": { "is_public": false, "anti_affinity": [], "name": "cluster-template", "created_at": "2015-09-14T10:38:44", "project_id": "808d5032ea0446889097723bfc8e919d", "cluster_configs": {}, "shares": null, "id": "57c92a7c-5c6a-42ea-9c6f-9f40a5aa4b36", "default_image_id": null, "is_default": false, "updated_at": null, "plugin_name": "vanilla", "node_groups": [ { "image_id": null, "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": {}, "auto_security_group": false, "availability_zone": null, "count": 1, "flavor_id": "2", "id": "1751c04e-8f39-467e-a421-480961172d4b", "security_groups": null, "use_autoconfig": true, 
"volumes_availability_zone": null, "created_at": "2015-09-14T10:38:44", "node_group_template_id": "0bb9f1a4-0c44-4dc5-9452-6741c62ed9ae", "updated_at": null, "volumes_per_node": 0, "is_proxy_gateway": false, "name": "master", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "namenode", "resourcemanager", "oozie", "historyserver" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null }, { "image_id": null, "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": {}, "auto_security_group": false, "availability_zone": null, "count": 3, "flavor_id": "2", "id": "3ee85068-c455-4391-9db2-b54a20b99df3", "security_groups": null, "use_autoconfig": true, "volumes_availability_zone": null, "created_at": "2015-09-14T10:38:44", "node_group_template_id": "846edb31-add5-46e6-a4ee-a4c339f99251", "updated_at": null, "volumes_per_node": 0, "is_proxy_gateway": false, "name": "worker", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "datanode", "nodemanager" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null } ], "neutron_management_network": "b1610452-2933-46b0-bf31-660cfa5621bd", "domain_name": null, "plugin_version": "2.7.1", "use_autoconfig": true, "description": null, "is_protected": false } } sahara-12.0.0/api-ref/source/v2/samples/clusters/0000775000175000017500000000000013656752227021554 5ustar zuulzuul00000000000000sahara-12.0.0/api-ref/source/v2/samples/clusters/cluster-create-response.json0000664000175000017500000001307213656752032027222 0ustar zuulzuul00000000000000{ "cluster": { "is_public": false, "project_id": "808d5032ea0446889097723bfc8e919d", "shares": null, "domain_name": null, "status_description": "", "plugin_name": "vanilla", "neutron_management_network": "b1610452-2933-46b0-bf31-660cfa5621bd", "info": {}, "user_keypair_id": "test", "management_public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCfe9ARO+t9CybtuC1+cusDTeQL7wos1+U2dKPlCUJvNUn0PcunGefqWI4MUZPY9yGmvRqfINy7/xRQCzL0AwgqzwcCXamcK8JCC80uH7j8Vxa4kJheG1jxMoz/FpDSdRnzNZ+m7H5rjOwAQANhL7KatGLyCPQg9fqOoaIyCZE/A3fztm/XjJMpWnuANpUZubZtISEfu4UZKVk/DPSlBrbTZkTOvEog1LwZCZoTt0rq6a7PJFzJJkq0YecRudu/f3tpXbNe/F84sd9PhOSqcrRbm72WzglyEE8PuS1kuWpEz8G+Y5/0tQxnoh6khj9mgflrdCFuvpdutFLH4eN5MFDh Generated-by-Sahara\n", "id": "e172d86c-906d-418e-a29c-6189f53bfa42", "cluster_template_id": "57c92a7c-5c6a-42ea-9c6f-9f40a5aa4b36", "node_groups": [ { "image_id": null, "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": { "YARN": { "yarn.nodemanager.vmem-check-enabled": "false", "yarn.scheduler.maximum-allocation-mb": 2048, "yarn.scheduler.minimum-allocation-mb": 256, "yarn.nodemanager.resource.memory-mb": 2048 }, "MapReduce": { "yarn.app.mapreduce.am.resource.mb": 256, "mapreduce.task.io.sort.mb": 102, "mapreduce.reduce.java.opts": "-Xmx409m", "mapreduce.reduce.memory.mb": 512, "mapreduce.map.memory.mb": 256, "yarn.app.mapreduce.am.command-opts": "-Xmx204m", "mapreduce.map.java.opts": "-Xmx204m" } }, "auto_security_group": false, "availability_zone": null, "count": 1, "flavor_id": "2", "id": "0fe07f2a-0275-4bc0-93b2-c3c1e48e2815", "security_groups": null, "use_autoconfig": true, "instances": [], "volumes_availability_zone": null, "created_at": "2015-09-14T10:57:11", "node_group_template_id": "0bb9f1a4-0c44-4dc5-9452-6741c62ed9ae", "updated_at": "2015-09-14T10:57:12", "volumes_per_node": 0, "is_proxy_gateway": false, "name": "master", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "namenode", 
"resourcemanager", "oozie", "historyserver" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null }, { "image_id": null, "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": { "YARN": { "yarn.nodemanager.vmem-check-enabled": "false", "yarn.scheduler.maximum-allocation-mb": 2048, "yarn.scheduler.minimum-allocation-mb": 256, "yarn.nodemanager.resource.memory-mb": 2048 }, "MapReduce": { "yarn.app.mapreduce.am.resource.mb": 256, "mapreduce.task.io.sort.mb": 102, "mapreduce.reduce.java.opts": "-Xmx409m", "mapreduce.reduce.memory.mb": 512, "mapreduce.map.memory.mb": 256, "yarn.app.mapreduce.am.command-opts": "-Xmx204m", "mapreduce.map.java.opts": "-Xmx204m" } }, "auto_security_group": false, "availability_zone": null, "count": 3, "flavor_id": "2", "id": "c7a3bea4-c898-446b-8c67-6d378d4c06c4", "security_groups": null, "use_autoconfig": true, "instances": [], "volumes_availability_zone": null, "created_at": "2015-09-14T10:57:11", "node_group_template_id": "846edb31-add5-46e6-a4ee-a4c339f99251", "updated_at": "2015-09-14T10:57:12", "volumes_per_node": 0, "is_proxy_gateway": false, "name": "worker", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "datanode", "nodemanager" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null } ], "provision_progress": [], "plugin_version": "2.7.1", "use_autoconfig": true, "trust_id": null, "description": null, "created_at": "2015-09-14T10:57:11", "is_protected": false, "updated_at": "2015-09-14T10:57:12", "is_transient": false, "cluster_configs": { "HDFS": { "dfs.replication": 3 } }, "anti_affinity": [], "name": "vanilla-cluster", "default_image_id": "4118a476-dfdc-4b0e-8d5c-463cba08e9ae", "status": "Validating" } } sahara-12.0.0/api-ref/source/v2/samples/clusters/cluster-scale-response.json0000664000175000017500000004112113656752032027042 0ustar zuulzuul00000000000000{ "cluster": { "info": { "YARN": { "Web UI": "http://172.18.168.115:8088", "ResourceManager": "http://172.18.168.115:8032" }, "HDFS": { "Web UI": "http://172.18.168.115:50070", "NameNode": "hdfs://vanilla-cluster-master-0:9000" }, "MapReduce JobHistory Server": { "Web UI": "http://172.18.168.115:19888" }, "JobFlow": { "Oozie": "http://172.18.168.115:11000" } }, "plugin_name": "vanilla", "plugin_version": "2.7.1", "updated_at": "2015-09-14T11:01:15", "name": "vanilla-cluster", "id": "e172d86c-906d-418e-a29c-6189f53bfa42", "management_public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCfe9ARO+t9CybtuC1+cusDTeQL7wos1+U2dKPlCUJvNUn0PcunGefqWI4MUZPY9yGmvRqfINy7/xRQCzL0AwgqzwcCXamcK8JCC80uH7j8Vxa4kJheG1jxMoz/FpDSdRnzNZ+m7H5rjOwAQANhL7KatGLyCPQg9fqOoaIyCZE/A3fztm/XjJMpWnuANpUZubZtISEfu4UZKVk/DPSlBrbTZkTOvEog1LwZCZoTt0rq6a7PJFzJJkq0YecRudu/f3tpXbNe/F84sd9PhOSqcrRbm72WzglyEE8PuS1kuWpEz8G+Y5/0tQxnoh6khj9mgflrdCFuvpdutFLH4eN5MFDh Generated-by-Sahara\n", "trust_id": null, "status_description": "", "default_image_id": "4118a476-dfdc-4b0e-8d5c-463cba08e9ae", "cluster_template_id": "57c92a7c-5c6a-42ea-9c6f-9f40a5aa4b36", "is_protected": false, "is_transient": false, "provision_progress": [ { "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42", "total": 1, "successful": true, "step_name": "Create Heat stack", "step_type": "Engine: create cluster", "updated_at": "2015-09-14T10:57:38", "project_id": "808d5032ea0446889097723bfc8e919d", "created_at": "2015-09-14T10:57:18", "id": "0a6d95f9-30f4-4434-823a-a38a7999a5af" }, { "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42", "total": 4, "successful": true, "step_name": 
"Configure instances", "step_type": "Engine: create cluster", "updated_at": "2015-09-14T10:58:22", "project_id": "808d5032ea0446889097723bfc8e919d", "created_at": "2015-09-14T10:58:16", "id": "29f2b587-c34c-4871-9ed9-9235b411cd9a" }, { "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42", "total": 1, "successful": true, "step_name": "Start the following process(es): Oozie", "step_type": "Plugin: start cluster", "updated_at": "2015-09-14T11:01:15", "project_id": "808d5032ea0446889097723bfc8e919d", "created_at": "2015-09-14T11:00:27", "id": "36f1efde-90f9-41c1-b409-aa1cf9623e3e" }, { "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42", "total": 4, "successful": true, "step_name": "Configure instances", "step_type": "Plugin: configure cluster", "updated_at": "2015-09-14T10:59:21", "project_id": "808d5032ea0446889097723bfc8e919d", "created_at": "2015-09-14T10:58:22", "id": "602bcc27-3a2d-42c8-8aca-ebc475319c72" }, { "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42", "total": 1, "successful": true, "step_name": "Configure topology data", "step_type": "Plugin: configure cluster", "updated_at": "2015-09-14T10:59:37", "project_id": "808d5032ea0446889097723bfc8e919d", "created_at": "2015-09-14T10:59:21", "id": "7e291df1-2d32-410d-ae89-33ab6f83cf17" }, { "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42", "total": 3, "successful": true, "step_name": "Start the following process(es): DataNodes, NodeManagers", "step_type": "Plugin: start cluster", "updated_at": "2015-09-14T11:00:11", "project_id": "808d5032ea0446889097723bfc8e919d", "created_at": "2015-09-14T11:00:01", "id": "8ab7933c-ad61-4a4f-88db-23ce78ee10f6" }, { "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42", "total": 1, "successful": true, "step_name": "Await DataNodes start up", "step_type": "Plugin: start cluster", "updated_at": "2015-09-14T11:00:21", "project_id": "808d5032ea0446889097723bfc8e919d", "created_at": "2015-09-14T11:00:11", "id": "9c8dc016-8c5b-4e80-9857-80c41f6bd971" }, { "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42", "total": 1, "successful": true, "step_name": "Start the following process(es): HistoryServer", "step_type": "Plugin: start cluster", "updated_at": "2015-09-14T11:00:27", "project_id": "808d5032ea0446889097723bfc8e919d", "created_at": "2015-09-14T11:00:21", "id": "c6327532-222b-416c-858f-73dbb32b8e97" }, { "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42", "total": 4, "successful": true, "step_name": "Wait for instance accessibility", "step_type": "Engine: create cluster", "updated_at": "2015-09-14T10:58:14", "project_id": "808d5032ea0446889097723bfc8e919d", "created_at": "2015-09-14T10:57:41", "id": "d3eca726-8b44-473a-ac29-fba45a893725" }, { "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42", "total": 0, "successful": true, "step_name": "Mount volumes to instances", "step_type": "Engine: create cluster", "updated_at": "2015-09-14T10:58:15", "project_id": "808d5032ea0446889097723bfc8e919d", "created_at": "2015-09-14T10:58:14", "id": "d7a875ff-64bf-41aa-882d-b5061c8ee152" }, { "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42", "total": 1, "successful": true, "step_name": "Start the following process(es): ResourceManager", "step_type": "Plugin: start cluster", "updated_at": "2015-09-14T11:00:00", "project_id": "808d5032ea0446889097723bfc8e919d", "created_at": "2015-09-14T10:59:55", "id": "ded7d227-10b8-4cb0-ab6c-25da1462bb7a" }, { "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42", "total": 1, "successful": true, "step_name": "Start the following process(es): NameNode", "step_type": 
"Plugin: start cluster", "updated_at": "2015-09-14T10:59:54", "project_id": "808d5032ea0446889097723bfc8e919d", "created_at": "2015-09-14T10:59:38", "id": "e1701ff5-930a-4212-945a-43515dfe24d1" }, { "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42", "total": 4, "successful": true, "step_name": "Assign IPs", "step_type": "Engine: create cluster", "updated_at": "2015-09-14T10:57:41", "project_id": "808d5032ea0446889097723bfc8e919d", "created_at": "2015-09-14T10:57:38", "id": "eaf0ab1b-bf8f-48f0-8f2c-fa4f82f539b9" } ], "status": "Active", "description": null, "use_autoconfig": true, "shares": null, "domain_name": null, "neutron_management_network": "b1610452-2933-46b0-bf31-660cfa5621bd", "is_public": false, "project_id": "808d5032ea0446889097723bfc8e919d", "node_groups": [ { "volumes_per_node": 0, "volume_type": null, "updated_at": "2015-09-14T10:57:37", "name": "b-worker", "id": "b7a6dea4-c898-446b-8c67-4f378d4c06c4", "node_group_template_id": "bc270ffe-a086-4eeb-9baa-2f5a73504622", "node_configs": { "YARN": { "yarn.nodemanager.vmem-check-enabled": "false", "yarn.scheduler.minimum-allocation-mb": 256, "yarn.nodemanager.resource.memory-mb": 2048, "yarn.scheduler.maximum-allocation-mb": 2048 }, "MapReduce": { "mapreduce.map.memory.mb": 256, "yarn.app.mapreduce.am.command-opts": "-Xmx204m", "mapreduce.map.java.opts": "-Xmx204m", "mapreduce.reduce.memory.mb": 512, "mapreduce.task.io.sort.mb": 102, "mapreduce.reduce.java.opts": "-Xmx409m", "yarn.app.mapreduce.am.resource.mb": 256 } }, "auto_security_group": false, "volumes_availability_zone": null, "use_autoconfig": true, "security_groups": null, "shares": null, "node_processes": [ "datanode", "nodemanager" ], "availability_zone": null, "flavor_id": "2", "image_id": null, "volume_local_to_instance": false, "count": 1, "volumes_size": 0, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "volume_mount_prefix": "/volumes/disk", "instances": [], "is_proxy_gateway": false, "created_at": "2015-09-14T10:57:11" }, { "volumes_per_node": 0, "volume_type": null, "updated_at": "2015-09-14T10:57:36", "name": "master", "id": "0fe07f2a-0275-4bc0-93b2-c3c1e48e2815", "node_group_template_id": "0bb9f1a4-0c44-4dc5-9452-6741c62ed9ae", "node_configs": { "YARN": { "yarn.nodemanager.vmem-check-enabled": "false", "yarn.scheduler.minimum-allocation-mb": 256, "yarn.nodemanager.resource.memory-mb": 2048, "yarn.scheduler.maximum-allocation-mb": 2048 }, "MapReduce": { "mapreduce.map.memory.mb": 256, "yarn.app.mapreduce.am.command-opts": "-Xmx204m", "mapreduce.map.java.opts": "-Xmx204m", "mapreduce.reduce.memory.mb": 512, "mapreduce.task.io.sort.mb": 102, "mapreduce.reduce.java.opts": "-Xmx409m", "yarn.app.mapreduce.am.resource.mb": 256 } }, "auto_security_group": false, "volumes_availability_zone": null, "use_autoconfig": true, "security_groups": null, "shares": null, "node_processes": [ "namenode", "resourcemanager", "oozie", "historyserver" ], "availability_zone": null, "flavor_id": "2", "image_id": null, "volume_local_to_instance": false, "count": 1, "volumes_size": 0, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "volume_mount_prefix": "/volumes/disk", "instances": [ { "instance_id": "b9f16a07-88fc-423e-83a3-489598fe6737", "internal_ip": "10.50.0.60", "instance_name": "vanilla-cluster-master-0", "updated_at": "2015-09-14T10:57:39", "management_ip": "172.18.168.115", "created_at": "2015-09-14T10:57:36", "id": "4867d92e-cc7b-4cde-9a1a-149e91caa491" } ], "is_proxy_gateway": false, "created_at": "2015-09-14T10:57:11" }, { "volumes_per_node": 0, 
"volume_type": null, "updated_at": "2015-09-14T10:57:37", "name": "worker", "id": "c7a3bea4-c898-446b-8c67-6d378d4c06c4", "node_group_template_id": "846edb31-add5-46e6-a4ee-a4c339f99251", "node_configs": { "YARN": { "yarn.nodemanager.vmem-check-enabled": "false", "yarn.scheduler.minimum-allocation-mb": 256, "yarn.nodemanager.resource.memory-mb": 2048, "yarn.scheduler.maximum-allocation-mb": 2048 }, "MapReduce": { "mapreduce.map.memory.mb": 256, "yarn.app.mapreduce.am.command-opts": "-Xmx204m", "mapreduce.map.java.opts": "-Xmx204m", "mapreduce.reduce.memory.mb": 512, "mapreduce.task.io.sort.mb": 102, "mapreduce.reduce.java.opts": "-Xmx409m", "yarn.app.mapreduce.am.resource.mb": 256 } }, "auto_security_group": false, "volumes_availability_zone": null, "use_autoconfig": true, "security_groups": null, "shares": null, "node_processes": [ "datanode", "nodemanager" ], "availability_zone": null, "flavor_id": "2", "image_id": null, "volume_local_to_instance": false, "count": 4, "volumes_size": 0, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "volume_mount_prefix": "/volumes/disk", "instances": [ { "instance_id": "0cf1ee81-aa72-48da-be2c-65bc2fa51f8f", "internal_ip": "10.50.0.63", "instance_name": "vanilla-cluster-worker-0", "updated_at": "2015-09-14T10:57:39", "management_ip": "172.18.168.118", "created_at": "2015-09-14T10:57:37", "id": "f3633b30-c1e4-4144-930b-ab5b780b87be" }, { "instance_id": "4a937391-b594-4ad0-9a53-00a99a691383", "internal_ip": "10.50.0.62", "instance_name": "vanilla-cluster-worker-1", "updated_at": "2015-09-14T10:57:40", "management_ip": "172.18.168.117", "created_at": "2015-09-14T10:57:37", "id": "0d66fd93-f277-4a94-b46a-f5866aa0c38f" }, { "instance_id": "839b1d56-6d0d-4aa4-9d05-30e029c276f8", "internal_ip": "10.50.0.61", "instance_name": "vanilla-cluster-worker-2", "updated_at": "2015-09-14T10:57:40", "management_ip": "172.18.168.116", "created_at": "2015-09-14T10:57:37", "id": "0982cefd-5c58-436e-8f1e-c1d0830f18a7" } ], "is_proxy_gateway": false, "created_at": "2015-09-14T10:57:11" } ], "cluster_configs": { "HDFS": { "dfs.replication": 3 } }, "user_keypair_id": "apavlov", "anti_affinity": [], "created_at": "2015-09-14T10:57:11" } } sahara-12.0.0/api-ref/source/v2/samples/clusters/multiple-clusters-create-request.json0000664000175000017500000000056213656752032031070 0ustar zuulzuul00000000000000{ "plugin_name": "vanilla", "plugin_version": "2.6.0", "cluster_template_id": "9951f86d-57ba-43d6-9cb0-14ed2ec7a6cf", "default_image_id": "bc3c3d3c-2684-4bf8-a9fa-388fb71288a9", "user_keypair_id": "test", "name": "def-cluster", "count": 2, "cluster_configs": {}, "neutron_management_network": "7e31648b-4b2e-4f32-9b0a-113581c27076" } sahara-12.0.0/api-ref/source/v2/samples/clusters/cluster-update-request.json0000664000175000017500000000010013656752032027057 0ustar zuulzuul00000000000000{ "name": "public-vanilla-cluster", "is_public": true } sahara-12.0.0/api-ref/source/v2/samples/clusters/cluster-scale-request.json0000664000175000017500000000045013656752032026674 0ustar zuulzuul00000000000000{ "add_node_groups": [ { "count": 1, "name": "b-worker", "node_group_template_id": "bc270ffe-a086-4eeb-9baa-2f5a73504622" } ], "resize_node_groups": [ { "count": 4, "name": "worker" } ] } sahara-12.0.0/api-ref/source/v2/samples/clusters/cluster-show-response.json0000664000175000017500000001307213656752032026737 0ustar zuulzuul00000000000000{ "cluster": { "is_public": false, "project_id": "808d5032ea0446889097723bfc8e919d", "shares": null, "domain_name": null, "status_description": "", 
"plugin_name": "vanilla", "neutron_management_network": "b1610452-2933-46b0-bf31-660cfa5621bd", "info": {}, "user_keypair_id": "test", "management_public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCfe9ARO+t9CybtuC1+cusDTeQL7wos1+U2dKPlCUJvNUn0PcunGefqWI4MUZPY9yGmvRqfINy7/xRQCzL0AwgqzwcCXamcK8JCC80uH7j8Vxa4kJheG1jxMoz/FpDSdRnzNZ+m7H5rjOwAQANhL7KatGLyCPQg9fqOoaIyCZE/A3fztm/XjJMpWnuANpUZubZtISEfu4UZKVk/DPSlBrbTZkTOvEog1LwZCZoTt0rq6a7PJFzJJkq0YecRudu/f3tpXbNe/F84sd9PhOSqcrRbm72WzglyEE8PuS1kuWpEz8G+Y5/0tQxnoh6khj9mgflrdCFuvpdutFLH4eN5MFDh Generated-by-Sahara\n", "id": "e172d86c-906d-418e-a29c-6189f53bfa42", "cluster_template_id": "57c92a7c-5c6a-42ea-9c6f-9f40a5aa4b36", "node_groups": [ { "image_id": null, "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": { "YARN": { "yarn.nodemanager.vmem-check-enabled": "false", "yarn.scheduler.maximum-allocation-mb": 2048, "yarn.scheduler.minimum-allocation-mb": 256, "yarn.nodemanager.resource.memory-mb": 2048 }, "MapReduce": { "yarn.app.mapreduce.am.resource.mb": 256, "mapreduce.task.io.sort.mb": 102, "mapreduce.reduce.java.opts": "-Xmx409m", "mapreduce.reduce.memory.mb": 512, "mapreduce.map.memory.mb": 256, "yarn.app.mapreduce.am.command-opts": "-Xmx204m", "mapreduce.map.java.opts": "-Xmx204m" } }, "auto_security_group": false, "availability_zone": null, "count": 1, "flavor_id": "2", "id": "0fe07f2a-0275-4bc0-93b2-c3c1e48e2815", "security_groups": null, "use_autoconfig": true, "instances": [], "volumes_availability_zone": null, "created_at": "2015-09-14T10:57:11", "node_group_template_id": "0bb9f1a4-0c44-4dc5-9452-6741c62ed9ae", "updated_at": "2015-09-14T10:57:12", "volumes_per_node": 0, "is_proxy_gateway": false, "name": "master", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "namenode", "resourcemanager", "oozie", "historyserver" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null }, { "image_id": null, "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": { "YARN": { "yarn.nodemanager.vmem-check-enabled": "false", "yarn.scheduler.maximum-allocation-mb": 2048, "yarn.scheduler.minimum-allocation-mb": 256, "yarn.nodemanager.resource.memory-mb": 2048 }, "MapReduce": { "yarn.app.mapreduce.am.resource.mb": 256, "mapreduce.task.io.sort.mb": 102, "mapreduce.reduce.java.opts": "-Xmx409m", "mapreduce.reduce.memory.mb": 512, "mapreduce.map.memory.mb": 256, "yarn.app.mapreduce.am.command-opts": "-Xmx204m", "mapreduce.map.java.opts": "-Xmx204m" } }, "auto_security_group": false, "availability_zone": null, "count": 3, "flavor_id": "2", "id": "c7a3bea4-c898-446b-8c67-6d378d4c06c4", "security_groups": null, "use_autoconfig": true, "instances": [], "volumes_availability_zone": null, "created_at": "2015-09-14T10:57:11", "node_group_template_id": "846edb31-add5-46e6-a4ee-a4c339f99251", "updated_at": "2015-09-14T10:57:12", "volumes_per_node": 0, "is_proxy_gateway": false, "name": "worker", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "datanode", "nodemanager" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null } ], "provision_progress": [], "plugin_version": "2.7.1", "use_autoconfig": true, "trust_id": null, "description": null, "created_at": "2015-09-14T10:57:11", "is_protected": false, "updated_at": "2015-09-14T10:57:12", "is_transient": false, "cluster_configs": { "HDFS": { "dfs.replication": 3 } }, "anti_affinity": [], "name": "vanilla-cluster", "default_image_id": "4118a476-dfdc-4b0e-8d5c-463cba08e9ae", 
"status": "Validating" } } sahara-12.0.0/api-ref/source/v2/samples/clusters/clusters-list-response.json0000664000175000017500000003756513656752032027132 0ustar zuulzuul00000000000000{ "clusters": [ { "is_public": false, "project_id": "808d5032ea0446889097723bfc8e919d", "shares": null, "domain_name": null, "status_description": "", "plugin_name": "vanilla", "neutron_management_network": "b1610452-2933-46b0-bf31-660cfa5621bd", "info": { "YARN": { "Web UI": "http://172.18.168.115:8088", "ResourceManager": "http://172.18.168.115:8032" }, "HDFS": { "Web UI": "http://172.18.168.115:50070", "NameNode": "hdfs://vanilla-cluster-master-0:9000" }, "JobFlow": { "Oozie": "http://172.18.168.115:11000" }, "MapReduce JobHistory Server": { "Web UI": "http://172.18.168.115:19888" } }, "user_keypair_id": "apavlov", "management_public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCfe9ARO+t9CybtuC1+cusDTeQL7wos1+U2dKPlCUJvNUn0PcunGefqWI4MUZPY9yGmvRqfINy7/xRQCzL0AwgqzwcCXamcK8JCC80uH7j8Vxa4kJheG1jxMoz/FpDSdRnzNZ+m7H5rjOwAQANhL7KatGLyCPQg9fqOoaIyCZE/A3fztm/XjJMpWnuANpUZubZtISEfu4UZKVk/DPSlBrbTZkTOvEog1LwZCZoTt0rq6a7PJFzJJkq0YecRudu/f3tpXbNe/F84sd9PhOSqcrRbm72WzglyEE8PuS1kuWpEz8G+Y5/0tQxnoh6khj9mgflrdCFuvpdutFLH4eN5MFDh Generated-by-Sahara\n", "id": "e172d86c-906d-418e-a29c-6189f53bfa42", "cluster_template_id": "57c92a7c-5c6a-42ea-9c6f-9f40a5aa4b36", "node_groups": [ { "image_id": null, "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": { "YARN": { "yarn.nodemanager.vmem-check-enabled": "false", "yarn.scheduler.maximum-allocation-mb": 2048, "yarn.scheduler.minimum-allocation-mb": 256, "yarn.nodemanager.resource.memory-mb": 2048 }, "MapReduce": { "yarn.app.mapreduce.am.resource.mb": 256, "mapreduce.task.io.sort.mb": 102, "mapreduce.reduce.java.opts": "-Xmx409m", "mapreduce.reduce.memory.mb": 512, "mapreduce.map.memory.mb": 256, "yarn.app.mapreduce.am.command-opts": "-Xmx204m", "mapreduce.map.java.opts": "-Xmx204m" } }, "auto_security_group": false, "availability_zone": null, "count": 1, "flavor_id": "2", "id": "0fe07f2a-0275-4bc0-93b2-c3c1e48e2815", "security_groups": null, "use_autoconfig": true, "instances": [ { "created_at": "2015-09-14T10:57:36", "id": "4867d92e-cc7b-4cde-9a1a-149e91caa491", "management_ip": "172.18.168.115", "updated_at": "2015-09-14T10:57:39", "instance_id": "b9f16a07-88fc-423e-83a3-489598fe6737", "internal_ip": "10.50.0.60", "instance_name": "vanilla-cluster-master-0" } ], "volumes_availability_zone": null, "created_at": "2015-09-14T10:57:11", "node_group_template_id": "0bb9f1a4-0c44-4dc5-9452-6741c62ed9ae", "updated_at": "2015-09-14T10:57:36", "volumes_per_node": 0, "is_proxy_gateway": false, "name": "master", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "namenode", "resourcemanager", "oozie", "historyserver" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null }, { "image_id": null, "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": { "YARN": { "yarn.nodemanager.vmem-check-enabled": "false", "yarn.scheduler.maximum-allocation-mb": 2048, "yarn.scheduler.minimum-allocation-mb": 256, "yarn.nodemanager.resource.memory-mb": 2048 }, "MapReduce": { "yarn.app.mapreduce.am.resource.mb": 256, "mapreduce.task.io.sort.mb": 102, "mapreduce.reduce.java.opts": "-Xmx409m", "mapreduce.reduce.memory.mb": 512, "mapreduce.map.memory.mb": 256, "yarn.app.mapreduce.am.command-opts": "-Xmx204m", "mapreduce.map.java.opts": "-Xmx204m" } }, "auto_security_group": false, "availability_zone": null, 
"count": 3, "flavor_id": "2", "id": "c7a3bea4-c898-446b-8c67-6d378d4c06c4", "security_groups": null, "use_autoconfig": true, "instances": [ { "created_at": "2015-09-14T10:57:37", "id": "f3633b30-c1e4-4144-930b-ab5b780b87be", "management_ip": "172.18.168.118", "updated_at": "2015-09-14T10:57:39", "instance_id": "0cf1ee81-aa72-48da-be2c-65bc2fa51f8f", "internal_ip": "10.50.0.63", "instance_name": "vanilla-cluster-worker-0" }, { "created_at": "2015-09-14T10:57:37", "id": "0d66fd93-f277-4a94-b46a-f5866aa0c38f", "management_ip": "172.18.168.117", "updated_at": "2015-09-14T10:57:40", "instance_id": "4a937391-b594-4ad0-9a53-00a99a691383", "internal_ip": "10.50.0.62", "instance_name": "vanilla-cluster-worker-1" }, { "created_at": "2015-09-14T10:57:37", "id": "0982cefd-5c58-436e-8f1e-c1d0830f18a7", "management_ip": "172.18.168.116", "updated_at": "2015-09-14T10:57:40", "instance_id": "839b1d56-6d0d-4aa4-9d05-30e029c276f8", "internal_ip": "10.50.0.61", "instance_name": "vanilla-cluster-worker-2" } ], "volumes_availability_zone": null, "created_at": "2015-09-14T10:57:11", "node_group_template_id": "846edb31-add5-46e6-a4ee-a4c339f99251", "updated_at": "2015-09-14T10:57:37", "volumes_per_node": 0, "is_proxy_gateway": false, "name": "worker", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "datanode", "nodemanager" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null } ], "provision_progress": [ { "created_at": "2015-09-14T10:57:18", "project_id": "808d5032ea0446889097723bfc8e919d", "id": "0a6d95f9-30f4-4434-823a-a38a7999a5af", "step_type": "Engine: create cluster", "step_name": "Create Heat stack", "updated_at": "2015-09-14T10:57:38", "successful": true, "total": 1, "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42" }, { "created_at": "2015-09-14T10:58:16", "project_id": "808d5032ea0446889097723bfc8e919d", "id": "29f2b587-c34c-4871-9ed9-9235b411cd9a", "step_type": "Engine: create cluster", "step_name": "Configure instances", "updated_at": "2015-09-14T10:58:22", "successful": true, "total": 4, "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42" }, { "created_at": "2015-09-14T11:00:27", "project_id": "808d5032ea0446889097723bfc8e919d", "id": "36f1efde-90f9-41c1-b409-aa1cf9623e3e", "step_type": "Plugin: start cluster", "step_name": "Start the following process(es): Oozie", "updated_at": "2015-09-14T11:01:15", "successful": true, "total": 1, "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42" }, { "created_at": "2015-09-14T10:58:22", "project_id": "808d5032ea0446889097723bfc8e919d", "id": "602bcc27-3a2d-42c8-8aca-ebc475319c72", "step_type": "Plugin: configure cluster", "step_name": "Configure instances", "updated_at": "2015-09-14T10:59:21", "successful": true, "total": 4, "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42" }, { "created_at": "2015-09-14T10:59:21", "project_id": "808d5032ea0446889097723bfc8e919d", "id": "7e291df1-2d32-410d-ae89-33ab6f83cf17", "step_type": "Plugin: configure cluster", "step_name": "Configure topology data", "updated_at": "2015-09-14T10:59:37", "successful": true, "total": 1, "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42" }, { "created_at": "2015-09-14T11:00:01", "project_id": "808d5032ea0446889097723bfc8e919d", "id": "8ab7933c-ad61-4a4f-88db-23ce78ee10f6", "step_type": "Plugin: start cluster", "step_name": "Start the following process(es): DataNodes, NodeManagers", "updated_at": "2015-09-14T11:00:11", "successful": true, "total": 3, "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42" }, { "created_at": "2015-09-14T11:00:11", 
"project_id": "808d5032ea0446889097723bfc8e919d", "id": "9c8dc016-8c5b-4e80-9857-80c41f6bd971", "step_type": "Plugin: start cluster", "step_name": "Await DataNodes start up", "updated_at": "2015-09-14T11:00:21", "successful": true, "total": 1, "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42" }, { "created_at": "2015-09-14T11:00:21", "project_id": "808d5032ea0446889097723bfc8e919d", "id": "c6327532-222b-416c-858f-73dbb32b8e97", "step_type": "Plugin: start cluster", "step_name": "Start the following process(es): HistoryServer", "updated_at": "2015-09-14T11:00:27", "successful": true, "total": 1, "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42" }, { "created_at": "2015-09-14T10:57:41", "project_id": "808d5032ea0446889097723bfc8e919d", "id": "d3eca726-8b44-473a-ac29-fba45a893725", "step_type": "Engine: create cluster", "step_name": "Wait for instance accessibility", "updated_at": "2015-09-14T10:58:14", "successful": true, "total": 4, "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42" }, { "created_at": "2015-09-14T10:58:14", "project_id": "808d5032ea0446889097723bfc8e919d", "id": "d7a875ff-64bf-41aa-882d-b5061c8ee152", "step_type": "Engine: create cluster", "step_name": "Mount volumes to instances", "updated_at": "2015-09-14T10:58:15", "successful": true, "total": 0, "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42" }, { "created_at": "2015-09-14T10:59:55", "project_id": "808d5032ea0446889097723bfc8e919d", "id": "ded7d227-10b8-4cb0-ab6c-25da1462bb7a", "step_type": "Plugin: start cluster", "step_name": "Start the following process(es): ResourceManager", "updated_at": "2015-09-14T11:00:00", "successful": true, "total": 1, "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42" }, { "created_at": "2015-09-14T10:59:38", "project_id": "808d5032ea0446889097723bfc8e919d", "id": "e1701ff5-930a-4212-945a-43515dfe24d1", "step_type": "Plugin: start cluster", "step_name": "Start the following process(es): NameNode", "updated_at": "2015-09-14T10:59:54", "successful": true, "total": 1, "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42" }, { "created_at": "2015-09-14T10:57:38", "project_id": "808d5032ea0446889097723bfc8e919d", "id": "eaf0ab1b-bf8f-48f0-8f2c-fa4f82f539b9", "step_type": "Engine: create cluster", "step_name": "Assign IPs", "updated_at": "2015-09-14T10:57:41", "successful": true, "total": 4, "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42" } ], "plugin_version": "2.7.1", "use_autoconfig": true, "trust_id": null, "description": null, "created_at": "2015-09-14T10:57:11", "is_protected": false, "updated_at": "2015-09-14T11:01:15", "is_transient": false, "cluster_configs": { "HDFS": { "dfs.replication": 3 } }, "anti_affinity": [], "name": "vanilla-cluster", "default_image_id": "4118a476-dfdc-4b0e-8d5c-463cba08e9ae", "status": "Active" } ] } sahara-12.0.0/api-ref/source/v2/samples/clusters/cluster-create-request.json0000664000175000017500000000051313656752032027050 0ustar zuulzuul00000000000000{ "plugin_name": "vanilla", "plugin_version": "2.7.1", "cluster_template_id": "57c92a7c-5c6a-42ea-9c6f-9f40a5aa4b36", "default_image_id": "4118a476-dfdc-4b0e-8d5c-463cba08e9ae", "user_keypair_id": "test", "name": "vanilla-cluster", "neutron_management_network": "b1610452-2933-46b0-bf31-660cfa5621bd" } sahara-12.0.0/api-ref/source/v2/samples/clusters/cluster-update-response.json0000664000175000017500000001310013656752032027231 0ustar zuulzuul00000000000000{ "cluster": { "is_public": true, "project_id": "808d5032ea0446889097723bfc8e919d", "shares": null, "domain_name": null, "status_description": 
"", "plugin_name": "vanilla", "neutron_management_network": "b1610452-2933-46b0-bf31-660cfa5621bd", "info": {}, "user_keypair_id": "test", "management_public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCfe9ARO+t9CybtuC1+cusDTeQL7wos1+U2dKPlCUJvNUn0PcunGefqWI4MUZPY9yGmvRqfINy7/xRQCzL0AwgqzwcCXamcK8JCC80uH7j8Vxa4kJheG1jxMoz/FpDSdRnzNZ+m7H5rjOwAQANhL7KatGLyCPQg9fqOoaIyCZE/A3fztm/XjJMpWnuANpUZubZtISEfu4UZKVk/DPSlBrbTZkTOvEog1LwZCZoTt0rq6a7PJFzJJkq0YecRudu/f3tpXbNe/F84sd9PhOSqcrRbm72WzglyEE8PuS1kuWpEz8G+Y5/0tQxnoh6khj9mgflrdCFuvpdutFLH4eN5MFDh Generated-by-Sahara\n", "id": "e172d86c-906d-418e-a29c-6189f53bfa42", "cluster_template_id": "57c92a7c-5c6a-42ea-9c6f-9f40a5aa4b36", "node_groups": [ { "image_id": null, "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": { "YARN": { "yarn.nodemanager.vmem-check-enabled": "false", "yarn.scheduler.maximum-allocation-mb": 2048, "yarn.scheduler.minimum-allocation-mb": 256, "yarn.nodemanager.resource.memory-mb": 2048 }, "MapReduce": { "yarn.app.mapreduce.am.resource.mb": 256, "mapreduce.task.io.sort.mb": 102, "mapreduce.reduce.java.opts": "-Xmx409m", "mapreduce.reduce.memory.mb": 512, "mapreduce.map.memory.mb": 256, "yarn.app.mapreduce.am.command-opts": "-Xmx204m", "mapreduce.map.java.opts": "-Xmx204m" } }, "auto_security_group": false, "availability_zone": null, "count": 1, "flavor_id": "2", "id": "0fe07f2a-0275-4bc0-93b2-c3c1e48e2815", "security_groups": null, "use_autoconfig": true, "instances": [], "volumes_availability_zone": null, "created_at": "2015-09-14T10:57:11", "node_group_template_id": "0bb9f1a4-0c44-4dc5-9452-6741c62ed9ae", "updated_at": "2015-09-14T10:57:12", "volumes_per_node": 0, "is_proxy_gateway": false, "name": "master", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "namenode", "resourcemanager", "oozie", "historyserver" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null }, { "image_id": null, "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": { "YARN": { "yarn.nodemanager.vmem-check-enabled": "false", "yarn.scheduler.maximum-allocation-mb": 2048, "yarn.scheduler.minimum-allocation-mb": 256, "yarn.nodemanager.resource.memory-mb": 2048 }, "MapReduce": { "yarn.app.mapreduce.am.resource.mb": 256, "mapreduce.task.io.sort.mb": 102, "mapreduce.reduce.java.opts": "-Xmx409m", "mapreduce.reduce.memory.mb": 512, "mapreduce.map.memory.mb": 256, "yarn.app.mapreduce.am.command-opts": "-Xmx204m", "mapreduce.map.java.opts": "-Xmx204m" } }, "auto_security_group": false, "availability_zone": null, "count": 3, "flavor_id": "2", "id": "c7a3bea4-c898-446b-8c67-6d378d4c06c4", "security_groups": null, "use_autoconfig": true, "instances": [], "volumes_availability_zone": null, "created_at": "2015-09-14T10:57:11", "node_group_template_id": "846edb31-add5-46e6-a4ee-a4c339f99251", "updated_at": "2015-09-14T10:57:12", "volumes_per_node": 0, "is_proxy_gateway": false, "name": "worker", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "datanode", "nodemanager" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null } ], "provision_progress": [], "plugin_version": "2.7.1", "use_autoconfig": true, "trust_id": null, "description": null, "created_at": "2015-09-14T10:57:11", "is_protected": false, "updated_at": "2015-09-14T10:57:12", "is_transient": false, "cluster_configs": { "HDFS": { "dfs.replication": 3 } }, "anti_affinity": [], "name": "public-vanilla-cluster", "default_image_id": 
"4118a476-dfdc-4b0e-8d5c-463cba08e9ae", "status": "Validating" } } sahara-12.0.0/api-ref/source/v2/samples/clusters/multiple-clusters-create-response.json0000664000175000017500000000017313656752032031234 0ustar zuulzuul00000000000000{ "clusters": [ "a007a3e7-658f-4568-b0f2-fe2fd5efc554", "b012a6et-65hf-4566-b0f2-fe3fd7efc567" ] } sahara-12.0.0/api-ref/source/v2/samples/jobs/0000775000175000017500000000000013656752227020645 5ustar zuulzuul00000000000000sahara-12.0.0/api-ref/source/v2/samples/jobs/list-response.json0000664000175000017500000001442313656752032024345 0ustar zuulzuul00000000000000{ "jobs": [ { "job_configs": { "configs": { "mapred.reduce.tasks": "1", "mapred.map.tasks": "1" }, "args": [ "arg1", "arg2" ], "params": { "param2": "value2", "param1": "value1" } }, "is_protected": false, "input_id": "3e1bc8e6-8c69-4749-8e52-90d9341d15bc", "job_id": "310b0fc6-e1db-408e-8798-312e7500f3ac", "cluster_id": "811e1134-666f-4c48-bc92-afb5b10c9d8c", "created_at": "2015-09-15T09:49:24", "end_time": "2015-09-15T12:50:46", "output_id": "52146b52-6540-4aac-a024-fee253cf52a9", "is_public": false, "updated_at": "2015-09-15T09:50:46", "return_code": null, "data_source_urls": { "3e1bc8e6-8c69-4749-8e52-90d9341d15bc": "swift://ap-cont/input", "52146b52-6540-4aac-a024-fee253cf52a9": "swift://ap-cont/output" }, "tenant_id": "808d5032ea0446889097723bfc8e919d", "start_time": "2015-09-15T12:49:43", "id": "20da9edb-12ce-4b45-a473-41baeefef997", "oozie_job_id": "0000001-150915094349962-oozie-hado-W", "info": { "user": "hadoop", "actions": [ { "name": ":start:", "trackerUri": "-", "externalStatus": "OK", "status": "OK", "externalId": "-", "transition": "job-node", "data": null, "endTime": "Tue, 15 Sep 2015 09:49:59 GMT", "errorCode": null, "id": "0000001-150915094349962-oozie-hado-W@:start:", "consoleUrl": "-", "errorMessage": null, "toString": "Action name[:start:] status[OK]", "stats": null, "type": ":START:", "retries": 0, "startTime": "Tue, 15 Sep 2015 09:49:59 GMT", "externalChildIDs": null, "cred": "null" }, { "name": "job-node", "trackerUri": "http://172.18.168.119:8032", "externalStatus": "FAILED/KILLED", "status": "ERROR", "externalId": "job_1442310173665_0002", "transition": "fail", "data": null, "endTime": "Tue, 15 Sep 2015 09:50:17 GMT", "errorCode": "JA018", "id": "0000001-150915094349962-oozie-hado-W@job-node", "consoleUrl": "http://ap-cluster-all-0:8088/proxy/application_1442310173665_0002/", "errorMessage": "Main class [org.apache.oozie.action.hadoop.PigMain], exit code [2]", "toString": "Action name[job-node] status[ERROR]", "stats": null, "type": "pig", "retries": 0, "startTime": "Tue, 15 Sep 2015 09:49:59 GMT", "externalChildIDs": null, "cred": "null" }, { "name": "fail", "trackerUri": "-", "externalStatus": "OK", "status": "OK", "externalId": "-", "transition": null, "data": null, "endTime": "Tue, 15 Sep 2015 09:50:17 GMT", "errorCode": "E0729", "id": "0000001-150915094349962-oozie-hado-W@fail", "consoleUrl": "-", "errorMessage": "Workflow failed, error message[Main class [org.apache.oozie.action.hadoop.PigMain], exit code [2]]", "toString": "Action name[fail] status[OK]", "stats": null, "type": ":KILL:", "retries": 0, "startTime": "Tue, 15 Sep 2015 09:50:17 GMT", "externalChildIDs": null, "cred": "null" } ], "createdTime": "Tue, 15 Sep 2015 09:49:58 GMT", "status": "KILLED", "group": null, "externalId": null, "acl": null, "run": 0, "appName": "job-wf", "parentId": null, "conf": "\r\n \r\n user.name\r\n hadoop\r\n \r\n \r\n oozie.use.system.libpath\r\n true\r\n \r\n \r\n 
mapreduce.job.user.name\r\n hadoop\r\n \r\n \r\n nameNode\r\n hdfs://ap-cluster-all-0:9000\r\n \r\n \r\n jobTracker\r\n http://172.18.168.119:8032\r\n \r\n \r\n oozie.wf.application.path\r\n hdfs://ap-cluster-all-0:9000/user/hadoop/pig-job-example/3038025d-9974-4993-a778-26a074cdfb8d/workflow.xml\r\n \r\n", "id": "0000001-150915094349962-oozie-hado-W", "startTime": "Tue, 15 Sep 2015 09:49:59 GMT", "appPath": "hdfs://ap-cluster-all-0:9000/user/hadoop/pig-job-example/3038025d-9974-4993-a778-26a074cdfb8d/workflow.xml", "endTime": "Tue, 15 Sep 2015 09:50:17 GMT", "toString": "Workflow id[0000001-150915094349962-oozie-hado-W] status[KILLED]", "lastModTime": "Tue, 15 Sep 2015 09:50:17 GMT", "consoleUrl": "http://ap-cluster-all-0.novalocal:11000/oozie?job=0000001-150915094349962-oozie-hado-W" } } ] } sahara-12.0.0/api-ref/source/v2/samples/jobs/job-request.json0000664000175000017500000000102713656752032023772 0ustar zuulzuul00000000000000{ "cluster_id": "811e1134-666f-4c48-bc92-afb5b10c9d8c", "job_template_id": "548ea8d4-a5sd-33a4-bt22-asf4n87a8e2dh", "input_id": "3e1bc8e6-8c69-4749-8e52-90d9341d15bc", "output_id": "52146b52-6540-4aac-a024-fee253cf52a9", "job_configs": { "configs": { "mapred.map.tasks": "1", "mapred.reduce.tasks": "1" }, "args": [ "arg1", "arg2" ], "params": { "param2": "value2", "param1": "value1" } } } sahara-12.0.0/api-ref/source/v2/samples/jobs/cancel-response.json0000664000175000017500000001345613656752032024624 0ustar zuulzuul00000000000000{ "job": { "job_configs": { "configs": { "mapred.reduce.tasks": "1", "mapred.map.tasks": "1" }, "args": [ "arg1", "arg2" ], "params": { "param2": "value2", "param1": "value1" } }, "is_protected": false, "input_id": "3e1bc8e6-8c69-4749-8e52-90d9341d15bc", "job_id": "310b0fc6-e1db-408e-8798-312e7500f3ac", "cluster_id": "811e1134-666f-4c48-bc92-afb5b10c9d8c", "created_at": "2015-09-15T09:49:24", "end_time": "2015-09-15T12:50:46", "output_id": "52146b52-6540-4aac-a024-fee253cf52a9", "is_public": false, "updated_at": "2015-09-15T09:50:46", "return_code": null, "data_source_urls": { "3e1bc8e6-8c69-4749-8e52-90d9341d15bc": "swift://ap-cont/input", "52146b52-6540-4aac-a024-fee253cf52a9": "swift://ap-cont/output" }, "tenant_id": "808d5032ea0446889097723bfc8e919d", "start_time": "2015-09-15T12:49:43", "id": "20da9edb-12ce-4b45-a473-41baeefef997", "oozie_job_id": "0000001-150915094349962-oozie-hado-W", "info": { "user": "hadoop", "actions": [ { "name": ":start:", "trackerUri": "-", "externalStatus": "OK", "status": "OK", "externalId": "-", "transition": "job-node", "data": null, "endTime": "Tue, 15 Sep 2015 09:49:59 GMT", "errorCode": null, "id": "0000001-150915094349962-oozie-hado-W@:start:", "consoleUrl": "-", "errorMessage": null, "toString": "Action name[:start:] status[OK]", "stats": null, "type": ":START:", "retries": 0, "startTime": "Tue, 15 Sep 2015 09:49:59 GMT", "externalChildIDs": null, "cred": "null" }, { "name": "job-node", "trackerUri": "http://172.18.168.119:8032", "externalStatus": "FAILED/KILLED", "status": "ERROR", "externalId": "job_1442310173665_0002", "transition": "fail", "data": null, "endTime": "Tue, 15 Sep 2015 09:50:17 GMT", "errorCode": "JA018", "id": "0000001-150915094349962-oozie-hado-W@job-node", "consoleUrl": "http://ap-cluster-all-0:8088/proxy/application_1442310173665_0002/", "errorMessage": "Main class [org.apache.oozie.action.hadoop.PigMain], exit code [2]", "toString": "Action name[job-node] status[ERROR]", "stats": null, "type": "pig", "retries": 0, "startTime": "Tue, 15 Sep 2015 09:49:59 GMT", 
"externalChildIDs": null, "cred": "null" }, { "name": "fail", "trackerUri": "-", "externalStatus": "OK", "status": "OK", "externalId": "-", "transition": null, "data": null, "endTime": "Tue, 15 Sep 2015 09:50:17 GMT", "errorCode": "E0729", "id": "0000001-150915094349962-oozie-hado-W@fail", "consoleUrl": "-", "errorMessage": "Workflow failed, error message[Main class [org.apache.oozie.action.hadoop.PigMain], exit code [2]]", "toString": "Action name[fail] status[OK]", "stats": null, "type": ":KILL:", "retries": 0, "startTime": "Tue, 15 Sep 2015 09:50:17 GMT", "externalChildIDs": null, "cred": "null" } ], "createdTime": "Tue, 15 Sep 2015 09:49:58 GMT", "status": "KILLED", "group": null, "externalId": null, "acl": null, "run": 0, "appName": "job-wf", "parentId": null, "conf": "\r\n \r\n user.name\r\n hadoop\r\n \r\n \r\n oozie.use.system.libpath\r\n true\r\n \r\n \r\n mapreduce.job.user.name\r\n hadoop\r\n \r\n \r\n nameNode\r\n hdfs://ap-cluster-all-0:9000\r\n \r\n \r\n jobTracker\r\n http://172.18.168.119:8032\r\n \r\n \r\n oozie.wf.application.path\r\n hdfs://ap-cluster-all-0:9000/user/hadoop/pig-job-example/3038025d-9974-4993-a778-26a074cdfb8d/workflow.xml\r\n \r\n", "id": "0000001-150915094349962-oozie-hado-W", "startTime": "Tue, 15 Sep 2015 09:49:59 GMT", "appPath": "hdfs://ap-cluster-all-0:9000/user/hadoop/pig-job-example/3038025d-9974-4993-a778-26a074cdfb8d/workflow.xml", "endTime": "Tue, 15 Sep 2015 09:50:17 GMT", "toString": "Workflow id[0000001-150915094349962-oozie-hado-W] status[KILLED]", "lastModTime": "Tue, 15 Sep 2015 09:50:17 GMT", "consoleUrl": "http://ap-cluster-all-0.novalocal:11000/oozie?job=0000001-150915094349962-oozie-hado-W" } } } sahara-12.0.0/api-ref/source/v2/samples/jobs/job-update-request.json0000664000175000017500000000003213656752032025245 0ustar zuulzuul00000000000000{ "is_public": true } sahara-12.0.0/api-ref/source/v2/samples/jobs/job-update-response.json0000664000175000017500000001345413656752032025427 0ustar zuulzuul00000000000000{ "job: { "job_configs": { "configs": { "mapred.reduce.tasks": "1", "mapred.map.tasks": "1" }, "args": [ "arg1", "arg2" ], "params": { "param2": "value2", "param1": "value1" } }, "is_protected": false, "input_id": "3e1bc8e6-8c69-4749-8e52-90d9341d15bc", "job_id": "310b0fc6-e1db-408e-8798-312e7500f3ac", "cluster_id": "811e1134-666f-4c48-bc92-afb5b10c9d8c", "created_at": "2015-09-15T09:49:24", "end_time": "2015-09-15T12:50:46", "output_id": "52146b52-6540-4aac-a024-fee253cf52a9", "is_public": true, "updated_at": "2015-09-15T09:50:46", "return_code": null, "data_source_urls": { "3e1bc8e6-8c69-4749-8e52-90d9341d15bc": "swift://ap-cont/input", "52146b52-6540-4aac-a024-fee253cf52a9": "swift://ap-cont/output" }, "tenant_id": "808d5032ea0446889097723bfc8e919d", "start_time": "2015-09-15T12:49:43", "id": "20da9edb-12ce-4b45-a473-41baeefef997", "oozie_job_id": "0000001-150915094349962-oozie-hado-W", "info": { "user": "hadoop", "actions": [ { "name": ":start:", "trackerUri": "-", "externalStatus": "OK", "status": "OK", "externalId": "-", "transition": "job-node", "data": null, "endTime": "Tue, 15 Sep 2015 09:49:59 GMT", "errorCode": null, "id": "0000001-150915094349962-oozie-hado-W@:start:", "consoleUrl": "-", "errorMessage": null, "toString": "Action name[:start:] status[OK]", "stats": null, "type": ":START:", "retries": 0, "startTime": "Tue, 15 Sep 2015 09:49:59 GMT", "externalChildIDs": null, "cred": "null" }, { "name": "job-node", "trackerUri": "http://172.18.168.119:8032", "externalStatus": "FAILED/KILLED", "status": "ERROR", 
"externalId": "job_1442310173665_0002", "transition": "fail", "data": null, "endTime": "Tue, 15 Sep 2015 09:50:17 GMT", "errorCode": "JA018", "id": "0000001-150915094349962-oozie-hado-W@job-node", "consoleUrl": "http://ap-cluster-all-0:8088/proxy/application_1442310173665_0002/", "errorMessage": "Main class [org.apache.oozie.action.hadoop.PigMain], exit code [2]", "toString": "Action name[job-node] status[ERROR]", "stats": null, "type": "pig", "retries": 0, "startTime": "Tue, 15 Sep 2015 09:49:59 GMT", "externalChildIDs": null, "cred": "null" }, { "name": "fail", "trackerUri": "-", "externalStatus": "OK", "status": "OK", "externalId": "-", "transition": null, "data": null, "endTime": "Tue, 15 Sep 2015 09:50:17 GMT", "errorCode": "E0729", "id": "0000001-150915094349962-oozie-hado-W@fail", "consoleUrl": "-", "errorMessage": "Workflow failed, error message[Main class [org.apache.oozie.action.hadoop.PigMain], exit code [2]]", "toString": "Action name[fail] status[OK]", "stats": null, "type": ":KILL:", "retries": 0, "startTime": "Tue, 15 Sep 2015 09:50:17 GMT", "externalChildIDs": null, "cred": "null" } ], "createdTime": "Tue, 15 Sep 2015 09:49:58 GMT", "status": "KILLED", "group": null, "externalId": null, "acl": null, "run": 0, "appName": "job-wf", "parentId": null, "conf": "\r\n \r\n user.name\r\n hadoop\r\n \r\n \r\n oozie.use.system.libpath\r\n true\r\n \r\n \r\n mapreduce.job.user.name\r\n hadoop\r\n \r\n \r\n nameNode\r\n hdfs://ap-cluster-all-0:9000\r\n \r\n \r\n jobTracker\r\n http://172.18.168.119:8032\r\n \r\n \r\n oozie.wf.application.path\r\n hdfs://ap-cluster-all-0:9000/user/hadoop/pig-job-example/3038025d-9974-4993-a778-26a074cdfb8d/workflow.xml\r\n \r\n", "id": "0000001-150915094349962-oozie-hado-W", "startTime": "Tue, 15 Sep 2015 09:49:59 GMT", "appPath": "hdfs://ap-cluster-all-0:9000/user/hadoop/pig-job-example/3038025d-9974-4993-a778-26a074cdfb8d/workflow.xml", "endTime": "Tue, 15 Sep 2015 09:50:17 GMT", "toString": "Workflow id[0000001-150915094349962-oozie-hado-W] status[KILLED]", "lastModTime": "Tue, 15 Sep 2015 09:50:17 GMT", "consoleUrl": "http://ap-cluster-all-0.novalocal:11000/oozie?job=0000001-150915094349962-oozie-hado-W" } } } sahara-12.0.0/api-ref/source/v2/samples/jobs/job-response.json0000664000175000017500000000157513656752032024150 0ustar zuulzuul00000000000000{ "job": { "input_id": "3e1bc8e6-8c69-4749-8e52-90d9341d15bc", "is_protected": false, "job_id": "310b0fc6-e1db-408e-8798-312e7500f3ac", "cluster_id": "811e1134-666f-4c48-bc92-afb5b10c9d8c", "output_id": "52146b52-6540-4aac-a024-fee253cf52a9", "created_at": "2015-09-15T09:49:24", "is_public": false, "id": "20da9edb-12ce-4b45-a473-41baeefef997", "project_id": "808d5032ea0446889097723bfc8e919d", "job_configs": { "configs": { "mapred.reduce.tasks": "1", "mapred.map.tasks": "1" }, "args": [ "arg1", "arg2" ], "params": { "param2": "value2", "param1": "value1" } }, "info": { "status": "PENDING" } } } sahara-12.0.0/api-ref/source/v2/samples/plugins/0000775000175000017500000000000013656752227021371 5ustar zuulzuul00000000000000sahara-12.0.0/api-ref/source/v2/samples/plugins/plugin-update-response.json0000664000175000017500000000172213656752032026672 0ustar zuulzuul00000000000000{ "plugin": { "plugin_labels": { "hidden": { "status": true, "mutable": true, "description": "Existence of plugin or its version is hidden, but still can be used for cluster creation by CLI and directly by client." }, "enabled": { "status": false, "mutable": true, "description": "Plugin or its version is enabled and can be used by user." 
} }, "description": "It's a fake plugin that aimed to work on the CirrOS images. It doesn't install Hadoop. It's needed to be able to test provisioning part of Sahara codebase itself.", "versions": [ "0.1" ], "tenant_id": "993f53c1f51845e48e013aeb632358d8", "title": "Fake Plugin", "version_labels": { "0.1": { "enabled": { "status": true, "mutable": true, "description": "Plugin or its version is enabled and can be used by user." } } }, "name": "fake" } } sahara-12.0.0/api-ref/source/v2/samples/plugins/plugin-show-response.json0000664000175000017500000000060013656752032026362 0ustar zuulzuul00000000000000{ "plugin": { "name": "vanilla", "versions": [ "1.2.1", "2.4.1", "2.6.0" ], "title": "Vanilla Apache Hadoop", "description": "The Apache Vanilla plugin provides the ability to launch upstream Vanilla Apache Hadoop cluster without any management consoles. It can also deploy the Oozie component." } } sahara-12.0.0/api-ref/source/v2/samples/plugins/plugin-update-request.json0000664000175000017500000000013413656752032026520 0ustar zuulzuul00000000000000{ "plugin_labels": { "enabled": { "status": false } } } sahara-12.0.0/api-ref/source/v2/samples/plugins/plugin-version-show-response.json0000664000175000017500000000552713656752032030062 0ustar zuulzuul00000000000000{ "plugin": { "name": "vanilla", "versions": [ "1.2.1", "2.4.1", "2.6.0" ], "description": "The Apache Vanilla plugin provides the ability to launch upstream Vanilla Apache Hadoop cluster without any management consoles. It can also deploy the Oozie component.", "required_image_tags": [ "vanilla", "2.6.0" ], "node_processes": { "JobFlow": [ "oozie" ], "HDFS": [ "namenode", "datanode", "secondarynamenode" ], "YARN": [ "resourcemanager", "nodemanager" ], "MapReduce": [ "historyserver" ], "Hadoop": [], "Hive": [ "hiveserver" ] }, "configs": [ { "default_value": "/tmp/hadoop-${user.name}", "name": "hadoop.tmp.dir", "priority": 2, "config_type": "string", "applicable_target": "HDFS", "is_optional": true, "scope": "node", "description": "A base for other temporary directories." }, { "default_value": true, "name": "hadoop.native.lib", "priority": 2, "config_type": "bool", "applicable_target": "HDFS", "is_optional": true, "scope": "node", "description": "Should native hadoop libraries, if present, be used." }, { "default_value": 1024, "name": "NodeManager Heap Size", "config_values": null, "priority": 1, "config_type": "int", "applicable_target": "YARN", "is_optional": false, "scope": "node", "description": null }, { "default_value": true, "name": "Enable Swift", "config_values": null, "priority": 1, "config_type": "bool", "applicable_target": "general", "is_optional": false, "scope": "cluster", "description": null }, { "default_value": true, "name": "Enable MySQL", "config_values": null, "priority": 1, "config_type": "bool", "applicable_target": "general", "is_optional": true, "scope": "cluster", "description": null } ], "title": "Vanilla Apache Hadoop" } } sahara-12.0.0/api-ref/source/v2/samples/plugins/plugins-list-response.json0000664000175000017500000000261713656752032026552 0ustar zuulzuul00000000000000{ "plugins": [ { "name": "vanilla", "description": "The Apache Vanilla plugin provides the ability to launch upstream Vanilla Apache Hadoop cluster without any management consoles. 
It can also deploy the Oozie component.", "versions": [ "1.2.1", "2.4.1", "2.6.0" ], "title": "Vanilla Apache Hadoop" }, { "name": "hdp", "description": "The Hortonworks Sahara plugin automates the deployment of the Hortonworks Data Platform (HDP) on OpenStack.", "versions": [ "1.3.2", "2.0.6" ], "title": "Hortonworks Data Platform" }, { "name": "spark", "description": "This plugin provides an ability to launch Spark on Hadoop CDH cluster without any management consoles.", "versions": [ "1.0.0", "0.9.1" ], "title": "Apache Spark" }, { "name": "cdh", "description": "The Cloudera Sahara plugin provides the ability to launch the Cloudera distribution of Apache Hadoop (CDH) with Cloudera Manager management console.", "versions": [ "5", "5.3.0" ], "title": "Cloudera Plugin" } ] } sahara-12.0.0/api-ref/source/v2/samples/node-group-templates/0000775000175000017500000000000013656752227023763 5ustar zuulzuul00000000000000sahara-12.0.0/api-ref/source/v2/samples/node-group-templates/node-group-template-update-request.json0000664000175000017500000000033313656752032033505 0ustar zuulzuul00000000000000{ "plugin_name": "vanilla", "plugin_version": "2.7.1", "node_processes": [ "datanode" ], "name": "new", "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "flavor_id": "2" } sahara-12.0.0/api-ref/source/v2/samples/node-group-templates/node-group-template-create-request.json0000664000175000017500000000044313656752032033470 0ustar zuulzuul00000000000000{ "plugin_name": "vanilla", "plugin_version": "2.7.1", "node_processes": [ "namenode", "resourcemanager", "oozie", "historyserver" ], "name": "master", "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "flavor_id": "2" } sahara-12.0.0/api-ref/source/v2/samples/node-group-templates/node-group-template-show-response.json0000664000175000017500000000216713656752032033360 0ustar zuulzuul00000000000000{ "node_group_template": { "is_public": false, "image_id": null, "tenant_id": "808d5032ea0446889097723bfc8e919d", "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": {}, "auto_security_group": false, "is_default": false, "availability_zone": null, "plugin_name": "vanilla", "flavor_id": "2", "id": "0bb9f1a4-0c44-4dc5-9452-6741c62ed9ae", "description": null, "plugin_version": "2.7.1", "use_autoconfig": true, "volumes_availability_zone": null, "created_at": "2015-09-14T10:20:11", "is_protected": false, "updated_at": null, "volumes_per_node": 0, "is_proxy_gateway": false, "name": "master", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "namenode", "resourcemanager", "oozie", "historyserver" ], "volumes_size": 0, "volume_local_to_instance": false, "security_groups": null, "volume_type": null } } ././@LongLink0000000000000000000000000000014600000000000011216 Lustar 00000000000000sahara-12.0.0/api-ref/source/v2/samples/node-group-templates/node-group-template-update-response.jsonsahara-12.0.0/api-ref/source/v2/samples/node-group-templates/node-group-template-update-response.jso0000664000175000017500000000167013656752032033502 0ustar zuulzuul00000000000000{ "node_group_template": { "is_public": false, "tenant_id": "808d5032ea0446889097723bfc8e919d", "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": {}, "auto_security_group": false, "is_default": false, "availability_zone": null, "plugin_name": "vanilla", "is_protected": false, "flavor_id": "2", "id": "0bb9f1a4-0c44-4dc5-9452-6741c62ed9ae", "plugin_version": "2.7.1", "use_autoconfig": true, "volumes_availability_zone": null, 
"created_at": "2015-09-14T10:20:11", "security_groups": null, "volumes_per_node": 0, "is_proxy_gateway": false, "name": "new", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "datanode" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null } } sahara-12.0.0/api-ref/source/v2/samples/node-group-templates/node-group-templates-list-response.json0000664000175000017500000000510013656752032033524 0ustar zuulzuul00000000000000{ "node_group_templates": [ { "is_public": false, "image_id": null, "tenant_id": "808d5032ea0446889097723bfc8e919d", "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": {}, "auto_security_group": false, "is_default": false, "availability_zone": null, "plugin_name": "vanilla", "flavor_id": "2", "id": "0bb9f1a4-0c44-4dc5-9452-6741c62ed9ae", "description": null, "plugin_version": "2.7.1", "use_autoconfig": true, "volumes_availability_zone": null, "created_at": "2015-09-14T10:20:11", "is_protected": false, "updated_at": null, "volumes_per_node": 0, "is_proxy_gateway": false, "name": "master", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "namenode", "resourcemanager", "oozie", "historyserver" ], "volumes_size": 0, "volume_local_to_instance": false, "security_groups": null, "volume_type": null }, { "is_public": false, "image_id": null, "tenant_id": "808d5032ea0446889097723bfc8e919d", "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": {}, "auto_security_group": false, "is_default": false, "availability_zone": null, "plugin_name": "vanilla", "flavor_id": "2", "id": "846edb31-add5-46e6-a4ee-a4c339f99251", "description": null, "hadoop_version": "2.7.1", "use_autoconfig": true, "volumes_availability_zone": null, "created_at": "2015-09-14T10:27:00", "is_protected": false, "updated_at": null, "volumes_per_node": 0, "is_proxy_gateway": false, "name": "worker", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "datanode", "nodemanager" ], "volumes_size": 0, "volume_local_to_instance": false, "security_groups": null, "volume_type": null } ], "markers": { "prev":"39dfc852-8588-4b61-8d2b-eb08a67ab240", "next":"eaa0bd97-ab54-43df-83ab-77a9774d7358" } } ././@LongLink0000000000000000000000000000014600000000000011216 Lustar 00000000000000sahara-12.0.0/api-ref/source/v2/samples/node-group-templates/node-group-template-create-response.jsonsahara-12.0.0/api-ref/source/v2/samples/node-group-templates/node-group-template-create-response.jso0000664000175000017500000000201413656752032033454 0ustar zuulzuul00000000000000{ "node_group_template": { "is_public": false, "tenant_id": "808d5032ea0446889097723bfc8e919d", "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": {}, "auto_security_group": false, "is_default": false, "availability_zone": null, "plugin_name": "vanilla", "is_protected": false, "flavor_id": "2", "id": "0bb9f1a4-0c44-4dc5-9452-6741c62ed9ae", "plugin_version": "2.7.1", "use_autoconfig": true, "volumes_availability_zone": null, "created_at": "2015-09-14T10:20:11", "security_groups": null, "volumes_per_node": 0, "is_proxy_gateway": false, "name": "master", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "namenode", "resourcemanager", "oozie", "historyserver" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null } } sahara-12.0.0/api-ref/source/v2/samples/data-sources/0000775000175000017500000000000013656752227022302 5ustar 
zuulzuul00000000000000sahara-12.0.0/api-ref/source/v2/samples/data-sources/data-source-register-swift-response.json0000664000175000017500000000064213656752032032210 0ustar zuulzuul00000000000000{ "data_source": { "is_public": false, "project_id": "9cd1314a0a31493282b6712b76a8fcda", "is_protected": false, "created_at": "2015-03-26 11:18:10.691493", "id": "953831f2-0852-49d8-ac71-af5805e25256", "updated_at": null, "name": "swift_input", "description": "This is input", "url": "swift://container/text", "type": "swift" } } sahara-12.0.0/api-ref/source/v2/samples/data-sources/data-source-update-response.json0000664000175000017500000000070013656752032030507 0ustar zuulzuul00000000000000{ "data_source": { "is_public": true, "project_id": "9cd1314a0a31493282b6712b76a8fcda", "is_protected": false, "created_at": "2015-09-15 12:32:24.847493", "id": "953831f2-0852-49d8-ac71-af5805e25256", "updated_at": "2015-09-15 12:34:42.597435", "name": "swift_input", "description": "This is public input", "url": "swift://container/text", "type": "swift" } } sahara-12.0.0/api-ref/source/v2/samples/data-sources/data-source-show-response.json0000664000175000017500000000064213656752032030212 0ustar zuulzuul00000000000000{ "data_source": { "is_public": false, "project_id": "9cd1314a0a31493282b6712b76a8fcda", "is_protected": false, "created_at": "2015-03-26 11:18:10.691493", "id": "953831f2-0852-49d8-ac71-af5805e25256", "updated_at": null, "name": "swift_input", "description": "This is input", "url": "swift://container/text", "type": "swift" } } sahara-12.0.0/api-ref/source/v2/samples/data-sources/data-source-register-hdfs-request.json0000664000175000017500000000022713656752032031631 0ustar zuulzuul00000000000000{ "description": "This is hdfs input", "url": "hdfs://test-master-node:8020/user/hadoop/input", "type": "hdfs", "name": "hdfs_input" } sahara-12.0.0/api-ref/source/v2/samples/data-sources/data-sources-list-response.json0000664000175000017500000000165413656752032030374 0ustar zuulzuul00000000000000{ "data_sources": [ { "is_public": false, "project_id": "9cd1314a0a31493282b6712b76a8fcda", "is_protected": false, "created_at": "2015-03-26 11:18:10", "id": "953831f2-0852-49d8-ac71-af5805e25256", "name": "swift_input", "updated_at": null, "description": "This is input", "url": "swift://container/text", "type": "swift" }, { "is_public": false, "project_id": "9cd1314a0a31493282b6712b76a8fcda", "is_protected": false, "created_at": "2015-03-26 11:09:36", "id": "d7fffe9c-3b42-46a9-8be8-e98f586fa7a9", "name": "hdfs_input", "updated_at": null, "description": "This is hdfs input", "url": "hdfs://test-master-node:8020/user/hadoop/input", "type": "hdfs" } ] } sahara-12.0.0/api-ref/source/v2/samples/data-sources/data-source-register-hdfs-response.json0000664000175000017500000000067513656752032032006 0ustar zuulzuul00000000000000{ "data_source": { "is_public": false, "project_id": "9cd1314a0a31493282b6712b76a8fcda", "is_protected": false, "created_at": "2015-03-26 11:09:36.148464", "id": "d7fffe9c-3b42-46a9-8be8-e98f586fa7a9", "updated_at": null, "name": "hdfs_input", "description": "This is hdfs input", "url": "hdfs://test-master-node:8020/user/hadoop/input", "type": "hdfs" } } sahara-12.0.0/api-ref/source/v2/samples/data-sources/data-source-update-request.json0000664000175000017500000000011013656752032030334 0ustar zuulzuul00000000000000{ "description": "This is public input", "is_protected": true } 
sahara-12.0.0/api-ref/source/v2/samples/data-sources/data-source-register-swift-request.json0000664000175000017500000000031713656752032032041 0ustar zuulzuul00000000000000{ "description": "This is input", "url": "swift://container/text", "credentials": { "password": "swordfish", "user": "dev" }, "type": "swift", "name": "swift_input" } sahara-12.0.0/api-ref/source/v2/samples/job-binaries/0000775000175000017500000000000013656752227022254 5ustar zuulzuul00000000000000sahara-12.0.0/api-ref/source/v2/samples/job-binaries/show-response.json0000664000175000017500000000063513656752032025761 0ustar zuulzuul00000000000000{ "job_binary": { "is_public": false, "description": "an example jar file", "url": "swift://container/jar-example.jar", "project_id": "11587919cc534bcbb1027a161c82cf58", "created_at": "2013-10-15 14:25:04.970513", "updated_at": null, "id": "a716a9cd-9add-4b12-b1b6-cdb71aaef350", "name": "jar-example.jar", "is_protected": false } } sahara-12.0.0/api-ref/source/v2/samples/job-binaries/list-response.json0000664000175000017500000000244013656752032025750 0ustar zuulzuul00000000000000{ "binaries": [ { "is_public": false, "description": "", "url": "internal-db://d2498cbf-4589-484a-a814-81436c18beb3", "project_id": "11587919cc534bcbb1027a161c82cf58", "created_at": "2013-10-15 12:36:59.375060", "updated_at": null, "id": "84248975-3c82-4206-a58d-6e7fb3a563fd", "name": "example.pig", "is_protected": false }, { "is_public": false, "description": "", "url": "internal-db://22f1d87a-23c8-483e-a0dd-cb4a16dde5f9", "project_id": "11587919cc534bcbb1027a161c82cf58", "created_at": "2013-10-15 12:43:52.265899", "updated_at": null, "id": "508fc62d-1d58-4412-b603-bdab307bb926", "name": "udf.jar", "is_protected": false }, { "is_public": false, "description": "", "url": "swift://container/jar-example.jar", "project_id": "11587919cc534bcbb1027a161c82cf58", "created_at": "2013-10-15 14:25:04.970513", "updated_at": null, "id": "a716a9cd-9add-4b12-b1b6-cdb71aaef350", "name": "jar-example.jar", "is_protected": false } ] } sahara-12.0.0/api-ref/source/v2/samples/job-binaries/show-data-response0000664000175000017500000000024013656752032025710 0ustar zuulzuul00000000000000A = load '$INPUT' using PigStorage(':') as (fruit: chararray); B = foreach A generate com.hadoopbook.pig.Trim(fruit); store B into '$OUTPUT' USING PigStorage();sahara-12.0.0/api-ref/source/v2/samples/job-binaries/create-response.json0000664000175000017500000000063613656752032026245 0ustar zuulzuul00000000000000{ "job_binary": { "is_public": false, "description": "This is a job binary", "url": "swift://container/jar-example.jar", "project_id": "11587919cc534bcbb1027a161c82cf58", "created_at": "2013-10-15 14:49:20.106452", "id": "07f86352-ee8a-4b08-b737-d705ded5ff9c", "updated_at": null, "name": "jar-example.jar", "is_protected": false } } sahara-12.0.0/api-ref/source/v2/samples/job-binaries/create-request.json0000664000175000017500000000031413656752032026070 0ustar zuulzuul00000000000000{ "url": "swift://container/jar-example.jar", "name": "jar-example.jar", "description": "This is a job binary", "extra": { "password": "swordfish", "user": "admin" } } sahara-12.0.0/api-ref/source/v2/samples/job-binaries/update-response.json0000664000175000017500000000065213656752032026262 0ustar zuulzuul00000000000000{ "job_binary": { "is_public": false, "description": "This is a new job binary", "url": "swift://container/new-jar-example.jar", "project_id": "11587919cc534bcbb1027a161c82cf58", "created_at": "2015-09-15 12:42:51.421542", "updated_at": null, "id": 
"b713d7ad-4add-4f12-g1b6-cdg71aaef350", "name": "new-jar-example.jar", "is_protected": false } } sahara-12.0.0/api-ref/source/v2/samples/job-binaries/update-request.json0000664000175000017500000000021113656752032026103 0ustar zuulzuul00000000000000{ "url": "swift://container/new-jar-example.jar", "name": "new-jar-example.jar", "description": "This is a new job binary" } sahara-12.0.0/api-ref/source/v2/samples/job-templates/0000775000175000017500000000000013656752227022456 5ustar zuulzuul00000000000000sahara-12.0.0/api-ref/source/v2/samples/job-templates/job-template-update-response.json0000664000175000017500000000155113656752032031044 0ustar zuulzuul00000000000000{ "job_template": { "is_public": false, "project_id": "9cd1314a0a31493282b6712b76a8fcda", "created_at": "2015-02-10 14:25:48", "id": "1a674c31-9aaa-4d07-b844-2bf200a1b836", "name": "public-pig-job-example", "updated_at": null, "description": "This is public pig job example", "interface": [], "libs": [ { "project_id": "9cd1314a0a31493282b6712b76a8fcda", "created_at": "2015-02-10 14:25:48", "id": "0ff4ac10-94a4-4e25-9ac9-603afe27b100", "name": "binary-job.jar", "updated_at": null, "description": "", "url": "swift://Edp-test-c71e6bce.sahara/binary-job.jar" } ], "type": "MapReduce", "mains": [], "is_protected": false } } sahara-12.0.0/api-ref/source/v2/samples/job-templates/job-templates-list-response.json0000664000175000017500000000463713656752032030730 0ustar zuulzuul00000000000000{ "job_templates": [ { "is_public": false, "project_id": "9cd1314a0a31493282b6712b76a8fcda", "created_at": "2015-02-10 14:25:48", "id": "1a674c31-9aaa-4d07-b844-2bf200a1b836", "name": "Edp-test-job-3d60854e", "updated_at": null, "description": "", "interface": [], "libs": [ { "project_id": "9cd1314a0a31493282b6712b76a8fcda", "created_at": "2015-02-10 14:25:48", "id": "0ff4ac10-94a4-4e25-9ac9-603afe27b100", "name": "binary-job-339c2d1a.jar", "updated_at": null, "description": "", "url": "swift://Edp-test-c71e6bce.sahara/binary-job-339c2d1a.jar" } ], "type": "MapReduce", "mains": [], "is_protected": false }, { "is_public": false, "project_id": "9cd1314a0a31493282b6712b76a8fcda", "created_at": "2015-02-10 14:25:44", "id": "4d1f3759-3497-4927-8352-910bacf24e62", "name": "Edp-test-job-6b6953c8", "updated_at": null, "description": "", "interface": [], "libs": [ { "project_id": "9cd1314a0a31493282b6712b76a8fcda", "created_at": "2015-02-10 14:25:44", "id": "e0d47800-4ac1-4d63-a2e1-c92d669a44e2", "name": "binary-job-6f21a2f8.jar", "updated_at": null, "description": "", "url": "swift://Edp-test-b409ec68.sahara/binary-job-6f21a2f8.jar" } ], "type": "Pig", "mains": [ { "project_id": "9cd1314a0a31493282b6712b76a8fcda", "created_at": "2015-02-10 14:25:44", "id": "e073e896-f123-4b76-995f-901d786262df", "name": "binary-job-d4f8bd75.pig", "updated_at": null, "description": "", "url": "swift://Edp-test-b409ec68.sahara/binary-job-d4f8bd75.pig" } ], "is_protected": false } ], "markers": { "prev": null, "next": "c53832da-6e7b-449e-a166-9f9ce1718d03" } } sahara-12.0.0/api-ref/source/v2/samples/job-templates/job-template-show-response.json0000664000175000017500000000150113656752032030535 0ustar zuulzuul00000000000000{ "job_template": { "is_public": false, "project_id": "9cd1314a0a31493282b6712b76a8fcda", "created_at": "2015-02-10 14:25:48", "id": "1a674c31-9aaa-4d07-b844-2bf200a1b836", "name": "Edp-test-job", "updated_at": null, "description": "", "interface": [], "libs": [ { "project_id": "9cd1314a0a31493282b6712b76a8fcda", "created_at": "2015-02-10 14:25:48", "id": 
"0ff4ac10-94a4-4e25-9ac9-603afe27b100", "name": "binary-job.jar", "updated_at": null, "description": "", "url": "swift://Edp-test-c71e6bce.sahara/binary-job.jar" } ], "type": "MapReduce", "mains": [], "is_protected": false } } sahara-12.0.0/api-ref/source/v2/samples/job-templates/job-template-create-response.json0000664000175000017500000000231313656752032031022 0ustar zuulzuul00000000000000{ "job_template": { "is_public": false, "project_id": "9cd1314a0a31493282b6712b76a8fcda", "created_at": "2015-03-27 08:48:38.630827", "id": "71defc8f-d005-484f-9d86-1aedf644d1ef", "name": "pig-job-example", "description": "This is pig job example", "interface": [], "libs": [ { "project_id": "9cd1314a0a31493282b6712b76a8fcda", "created_at": "2015-02-10 14:25:53", "id": "320a2ca7-25fd-4b48-9bc3-4fb1b6c4ff27", "name": "binary-job", "updated_at": null, "description": "", "url": "internal-db://c6a925fa-ac1d-4b2e-b88a-7054e1927521" } ], "type": "Pig", "is_protected": false, "mains": [ { "project_id": "9cd1314a0a31493282b6712b76a8fcda", "created_at": "2015-02-03 10:47:51", "id": "90d9d5ec-11aa-48bd-bc8c-34936ce0db6e", "name": "pig", "updated_at": null, "description": "", "url": "internal-db://872878f6-72ea-44db-8d1d-e6a6396d2df0" } ] } } sahara-12.0.0/api-ref/source/v2/samples/job-templates/job-template-create-request.json0000664000175000017500000000035413656752032030657 0ustar zuulzuul00000000000000{ "description": "This is pig job example", "mains": [ "90d9d5ec-11aa-48bd-bc8c-34936ce0db6e" ], "libs": [ "320a2ca7-25fd-4b48-9bc3-4fb1b6c4ff27" ], "type": "Pig", "name": "pig-job-example" } sahara-12.0.0/api-ref/source/v2/samples/job-templates/job-template-update-request.json0000664000175000017500000000013613656752032030674 0ustar zuulzuul00000000000000{ "description": "This is public pig job example", "name": "public-pig-job-example" } sahara-12.0.0/api-ref/source/v2/samples/image-registry/0000775000175000017500000000000013656752227022640 5ustar zuulzuul00000000000000sahara-12.0.0/api-ref/source/v2/samples/image-registry/image-tags-add-response.json0000664000175000017500000000145713656752032030134 0ustar zuulzuul00000000000000{ "image": { "updated": "2015-03-24T10:18:33Z", "metadata": { "_sahara_tag_vanilla": true, "_sahara_description": "Ubuntu image for Hadoop 2.7.1", "_sahara_username": "ubuntu", "_sahara_tag_some_other_tag": true, "_sahara_tag_2.7.1": true }, "id": "bb8d12b5-f9bb-49f0-aecb-739b8a9bec89", "minDisk": 0, "status": "ACTIVE", "tags": [ "vanilla", "some_other_tag", "2.7.1" ], "minRam": 0, "progress": 100, "username": "ubuntu", "created": "2015-02-03T10:28:39Z", "name": "sahara-vanilla-2.6.0-ubuntu-14.04", "description": "Ubuntu image for Hadoop 2.7.1", "OS-EXT-IMG-SIZE:size": 1101856768 } } sahara-12.0.0/api-ref/source/v2/samples/image-registry/image-tags-delete-request.json0000664000175000017500000000006113656752032030466 0ustar zuulzuul00000000000000{ "tags": [ "some_other_tag" ] } sahara-12.0.0/api-ref/source/v2/samples/image-registry/image-tags-add-request.json0000664000175000017500000000012513656752032027755 0ustar zuulzuul00000000000000{ "tags": [ "vanilla", "2.7.1", "some_other_tag" ] } sahara-12.0.0/api-ref/source/v2/samples/image-registry/image-show-response.json0000664000175000017500000000120213656752032027414 0ustar zuulzuul00000000000000{ "image": { "updated": "2015-02-03T10:29:32Z", "metadata": { "_sahara_username": "ubuntu", "_sahara_tag_vanilla": true, "_sahara_tag_2.6.0": true }, "id": "bb8d12b5-f9bb-49f0-aecb-739b8a9bec89", "minDisk": 0, "status": "ACTIVE", "tags": [ "vanilla", 
"2.6.0" ], "minRam": 0, "progress": 100, "username": "ubuntu", "created": "2015-02-03T10:28:39Z", "name": "sahara-vanilla-2.6.0-ubuntu-14.04", "description": null, "OS-EXT-IMG-SIZE:size": 1101856768 } } sahara-12.0.0/api-ref/source/v2/samples/image-registry/image-register-request.json0000664000175000017500000000012113656752032030111 0ustar zuulzuul00000000000000{ "username": "ubuntu", "description": "Ubuntu image for Hadoop 2.7.1" } sahara-12.0.0/api-ref/source/v2/samples/image-registry/images-list-response.json0000664000175000017500000000261013656752032027576 0ustar zuulzuul00000000000000{ "images": [ { "name": "ubuntu-vanilla-2.7.1", "id": "4118a476-dfdc-4b0e-8d5c-463cba08e9ae", "created": "2015-08-06T08:17:14Z", "metadata": { "_sahara_tag_2.7.1": true, "_sahara_username": "ubuntu", "_sahara_tag_vanilla": true }, "username": "ubuntu", "progress": 100, "OS-EXT-IMG-SIZE:size": 998716928, "status": "ACTIVE", "minDisk": 0, "tags": [ "vanilla", "2.7.1" ], "updated": "2015-09-04T09:35:09Z", "minRam": 0, "description": null }, { "name": "cdh-latest", "id": "ff74035b-9da7-4edf-981d-57f270ed337d", "created": "2015-09-04T11:56:44Z", "metadata": { "_sahara_username": "ubuntu", "_sahara_tag_5.4.0": true, "_sahara_tag_cdh": true }, "username": "ubuntu", "progress": 100, "OS-EXT-IMG-SIZE:size": 3281453056, "status": "ACTIVE", "minDisk": 0, "tags": [ "5.4.0", "cdh" ], "updated": "2015-09-04T12:46:42Z", "minRam": 0, "description": null } ] } sahara-12.0.0/api-ref/source/v2/samples/image-registry/image-register-response.json0000664000175000017500000000134113656752032030264 0ustar zuulzuul00000000000000{ "image": { "updated": "2015-03-24T10:05:10Z", "metadata": { "_sahara_description": "Ubuntu image for Hadoop 2.7.1", "_sahara_username": "ubuntu", "_sahara_tag_vanilla": true, "_sahara_tag_2.7.1": true }, "id": "bb8d12b5-f9bb-49f0-aecb-739b8a9bec89", "minDisk": 0, "status": "ACTIVE", "tags": [ "vanilla", "2.7.1" ], "minRam": 0, "progress": 100, "username": "ubuntu", "created": "2015-02-03T10:28:39Z", "name": "sahara-vanilla-2.7.1-ubuntu-14.04", "description": "Ubuntu image for Hadoop 2.7.1", "OS-EXT-IMG-SIZE:size": 1101856768 } } sahara-12.0.0/api-ref/source/v2/samples/image-registry/image-tags-delete-response.json0000664000175000017500000000134113656752032030636 0ustar zuulzuul00000000000000{ "image": { "updated": "2015-03-24T10:19:28Z", "metadata": { "_sahara_description": "Ubuntu image for Hadoop 2.7.1", "_sahara_username": "ubuntu", "_sahara_tag_vanilla": true, "_sahara_tag_2.7.1": true }, "id": "bb8d12b5-f9bb-49f0-aecb-739b8a9bec89", "minDisk": 0, "status": "ACTIVE", "tags": [ "vanilla", "2.7.1" ], "minRam": 0, "progress": 100, "username": "ubuntu", "created": "2015-02-03T10:28:39Z", "name": "sahara-vanilla-2.7.1-ubuntu-14.04", "description": "Ubuntu image for Hadoop 2.7.1", "OS-EXT-IMG-SIZE:size": 1101856768 } } sahara-12.0.0/api-ref/source/v2/index.rst0000664000175000017500000000065613656752032020106 0ustar zuulzuul00000000000000:tocdepth: 3 ---------------------- Data Processing API v2 ---------------------- .. rest_expand_all:: .. include:: cluster-templates.inc .. include:: clusters.inc .. include:: data-sources.inc .. include:: event-log.inc .. include:: image-registry.inc .. include:: job-binaries.inc .. include:: job-templates.inc .. include:: job-types.inc .. include:: jobs.inc .. include:: node-group-templates.inc .. include:: plugins.inc sahara-12.0.0/api-ref/source/v2/jobs.inc0000664000175000017500000001014113656752032017663 0ustar zuulzuul00000000000000.. 
-*- rst -*- ==== Jobs ==== A job object represents a job that runs on a cluster. A job polls the status of a running job and reports it to the user. Execute Job =========== .. rest_method:: POST /v2/jobs Executes a job. Normal response codes: 200 Request Example ---------------- .. rest_method:: /v2/jobs .. literalinclude:: samples/jobs/job-request.json :language: javascript Response Parameters ------------------- .. rest_parameters:: parameters.yaml - info: info - output_id: output_id - start_time: start_time - job_template_id: job_template_id - updated_at: updated_at - project_id: project_id - created_at: created_at - args: args - data_source_urls: data_source_urls - return_code: return_code - oozie_job_id: oozie_job_id - is_protected: is_protected_3 - cluster_id: cluster_id - end_time: end_time - params: params - is_public: job_is_public - input_id: input_id - configs: configs - job: job - id: job_id Response Example ---------------- .. literalinclude:: samples/jobs/job-response.json :language: javascript List jobs ========= .. rest_method:: GET /v2/jobs Lists available jobs. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - limit: limit - marker: marker - sort_by: sort_by_job Response Parameters ------------------- .. rest_parameters:: parameters.yaml - markers: markers - prev: prev - next: next - info: info - output_id: output_id - start_time: start_time - job_template_id: job_template_id - updated_at: updated_at - project_id: project_id - created_at: created_at - args: args - data_source_urls: data_source_urls - return_code: return_code - oozie_job_id: oozie_job_id - is_protected: is_protected_3 - cluster_id: cluster_id - end_time: end_time - params: params - is_public: job_is_public - input_id: input_id - configs: configs - job: job - id: job_id - jobs: jobs Response Example ---------------- .. rest_method:: /v2/jobs .. literalinclude:: samples/jobs/list-response.json :language: javascript Show job ======== .. rest_method:: GET /v2/jobs/{job_id} Shows details for a job, by ID. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - job_id: url_job_id Response Parameters ------------------- .. rest_parameters:: parameters.yaml - info: info - output_id: output_id - start_time: start_time - job_template_id: job_template_id - updated_at: updated_at - project_id: project_id - created_at: created_at - args: args - data_source_urls: data_source_urls - return_code: return_code - oozie_job_id: oozie_job_id - is_protected: is_protected_3 - cluster_id: cluster_id - end_time: end_time - params: params - is_public: job_is_public - input_id: input_id - configs: configs - job: job - id: job_id Response Example ---------------- .. literalinclude:: samples/jobs/job-response.json :language: javascript Delete job ========== .. rest_method:: DELETE /v2/jobs/{job_id} Deletes a job. Normal response codes:204 Request ------- .. rest_parameters:: parameters.yaml - job_id: url_job_id Update job ========== .. rest_method:: PATCH /v2/jobs/{job_id} Updates a job. Normal response codes:202 Request ------- .. rest_parameters:: parameters.yaml - job_id: url_job_id Request Example --------------- .. literalinclude:: samples/jobs/job-update-request.json :language: javascript Response Parameters ------------------- .. 
rest_parameters:: parameters.yaml - info: info - output_id: output_id - start_time: start_time - job_template_id: job_template_id - updated_at: updated_at - project_id: project_id - created_at: created_at - args: args - data_source_urls: data_source_urls - return_code: return_code - oozie_job_id: oozie_job_id - is_protected: is_protected_3 - cluster_id: cluster_id - end_time: end_time - params: params - is_public: job_is_public - input_id: input_id - configs: configs - job: job - id: job_id sahara-12.0.0/api-ref/source/v2/event-log.inc0000664000175000017500000000110713656752032020630 0ustar zuulzuul00000000000000.. -*- rst -*- ========= Event log ========= The event log feature provides information about cluster provisioning. In the event of errors, the event log shows the reason for the failure. Show progress ============= .. rest_method:: GET /v2/clusters/{cluster_id} Shows provisioning progress of cluster. Normal response codes: 200 Error response codes: Request ------- .. rest_parameters:: parameters.yaml - cluster_id: cluster_id Response Example ---------------- .. literalinclude:: samples/event-log/cluster-progress-response.json :language: javascript sahara-12.0.0/api-ref/source/conf.py0000664000175000017500000001473113656752032017214 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # # sahara documentation build configuration file, created Fri May 6 15:19:20 # 2016. # # This file is execfile()d with the current directory set to # its containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import os import sys extensions = [ 'os_api_ref', 'openstackdocstheme' ] # openstackdocstheme options repository_name = 'openstack/sahara' use_storyboard = True html_theme = 'openstackdocs' html_theme_options = { "sidebar_dropdown": "api_ref", "sidebar_mode": "toc", } # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. sys.path.insert(0, os.path.abspath('../../')) sys.path.insert(0, os.path.abspath('../')) sys.path.insert(0, os.path.abspath('./')) # -- General configuration ---------------------------------------------------- # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. # # source_encoding = 'utf-8' # The master toctree document. master_doc = 'index' # General information about the project. copyright = u'2010-present, OpenStack Foundation' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. 
# # language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: # today = '' # Else, today_fmt is used as the format for a strftime call. # today_fmt = '%B %d, %Y' # The reST default role (used for this markup: `text`) to use # for all documents. # default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. # add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). add_module_names = False # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # -- Options for man page output ---------------------------------------------- # Grouping the document tree for man pages. # List of tuples 'sourcefile', 'target', u'title', u'Authors name', 'manual' # -- Options for HTML output -------------------------------------------------- # The theme to use for HTML and HTML Help pages. Major themes that come with # Sphinx are currently 'default' and 'sphinxdoc'. # html_theme_path = ["."] # html_theme = '_theme' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. # html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. # html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". # html_title = None # A shorter title for the navigation bar. Default is the same as html_title. # html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. # html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. # html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". # html_static_path = ['_static'] # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. # html_use_smartypants = True # Custom sidebar templates, maps document names to template names. # html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. # html_additional_pages = {} # If false, no module index is generated. # html_use_modindex = True # If false, no index is generated. # html_use_index = True # If true, the index is split into individual pages for each letter. # html_split_index = False # If true, links to the reST sources are added to the pages. # html_show_sourcelink = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. # html_use_opensearch = '' # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml"). # html_file_suffix = '' # Output file base name for HTML help builder. htmlhelp_basename = 'saharaoc' # -- Options for LaTeX output ------------------------------------------------- # The paper size ('letter' or 'a4'). 
# latex_paper_size = 'letter' # The font size ('10pt', '11pt' or '12pt'). # latex_font_size = '10pt' # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass # [howto/manual]). latex_documents = [ ('index', 'Sahara.tex', u'OpenStack Data Processing API Documentation', u'OpenStack Foundation', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. # latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. # latex_use_parts = False # Additional stuff for the LaTeX preamble. # latex_preamble = '' # Documents to append as an appendix to all manuals. # latex_appendices = [] # If false, no module index is generated. # latex_use_modindex = True sahara-12.0.0/api-ref/source/v1.1/0000775000175000017500000000000013656752227016402 5ustar zuulzuul00000000000000sahara-12.0.0/api-ref/source/v1.1/node-group-templates.inc0000664000175000017500000001410013656752032023136 0ustar zuulzuul00000000000000.. -*- rst -*- ==================== Node group templates ==================== A cluster is a group of nodes with the same configuration. A node group template configures a node in the cluster. A template configures Hadoop processes and VM characteristics, such as the number of reduced slots for task tracker, the number of CPUs, and the amount of RAM. The template specifies the VM characteristics through an OpenStack flavor. List node group templates ========================= .. rest_method:: GET /v1.1/{project_id}/node-group-templates Lists available node group templates. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - limit: limit - marker: marker - sort_by: sort_by_node_group_templates Response Parameters ------------------- .. rest_parameters:: parameters.yaml - markers: markers - prev: prev - next: next - volume_local_to_instance: volume_local_to_instance - availability_zone: availability_zone - updated_at: updated_at - use_autoconfig: use_autoconfig - volumes_per_node: volumes_per_node - id: node_group_template_id - security_groups: security_groups - shares: object_shares - node_configs: node_configs - auto_security_group: auto_security_group - volumes_availability_zone: volumes_availability_zone - description: node_group_template_description - volume_mount_prefix: volume_mount_prefix - plugin_name: plugin_name - floating_ip_pool: floating_ip_pool - is_default: is_default - image_id: image_id - volumes_size: volumes_size - is_proxy_gateway: is_proxy_gateway - is_public: object_is_public - hadoop_version: hadoop_version - name: node_group_template_name - tenant_id: tenant_id - created_at: created_at - volume_type: volume_type - is_protected: object_is_protected - node_processes: node_processes - flavor_id: flavor_id Response Example ---------------- .. rest_method:: GET /v1.1/{project_id}/node-group-templates?limit=2&marker=38b4e146-1d39-4822-bad2-fef1bf304a52&sort_by=name .. literalinclude:: samples/node-group-templates/node-group-templates-list-response.json :language: javascript Create node group template ========================== .. rest_method:: POST /v1.1/{project_id}/node-group-templates Creates a node group template. Normal response codes: 202 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id Request Example --------------- .. 
literalinclude:: samples/node-group-templates/node-group-template-create-request.json :language: javascript Response Parameters ------------------- .. rest_parameters:: parameters.yaml - volume_local_to_instance: volume_local_to_instance - availability_zone: availability_zone - updated_at: updated_at - use_autoconfig: use_autoconfig - volumes_per_node: volumes_per_node - id: node_group_template_id - security_groups: security_groups - shares: object_shares - node_configs: node_configs - auto_security_group: auto_security_group - volumes_availability_zone: volumes_availability_zone - description: node_group_template_description - volume_mount_prefix: volume_mount_prefix - plugin_name: plugin_name - floating_ip_pool: floating_ip_pool - is_default: is_default - image_id: image_id - volumes_size: volumes_size - is_proxy_gateway: is_proxy_gateway - is_public: object_is_public - hadoop_version: hadoop_version - name: node_group_template_name - tenant_id: tenant_id - created_at: created_at - volume_type: volume_type - is_protected: object_is_protected - node_processes: node_processes - flavor_id: flavor_id Show node group template details ================================ .. rest_method:: GET /v1.1/{project_id}/node-group-templates/{node_group_template_id} Shows a node group template, by ID. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - node_group_template_id: url_node_group_template_id Response Parameters ------------------- .. rest_parameters:: parameters.yaml - volume_local_to_instance: volume_local_to_instance - availability_zone: availability_zone - updated_at: updated_at - use_autoconfig: use_autoconfig - volumes_per_node: volumes_per_node - id: node_group_template_id - security_groups: security_groups - shares: object_shares - node_configs: node_configs - auto_security_group: auto_security_group - volumes_availability_zone: volumes_availability_zone - description: node_group_template_description - volume_mount_prefix: volume_mount_prefix - plugin_name: plugin_name - floating_ip_pool: floating_ip_pool - is_default: is_default - image_id: image_id - volumes_size: volumes_size - is_proxy_gateway: is_proxy_gateway - is_public: object_is_public - hadoop_version: hadoop_version - name: node_group_template_name - tenant_id: tenant_id - created_at: created_at - volume_type: volume_type - is_protected: object_is_protected - node_processes: node_processes - flavor_id: flavor_id Response Example ---------------- .. literalinclude:: samples/node-group-templates/node-group-template-show-response.json :language: javascript Delete node group template ========================== .. rest_method:: DELETE /v1.1/{project_id}/node-group-templates/{node_group_template_id} Deletes a node group template. Normal response codes:204 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - node_group_template_id: url_node_group_template_id Update node group template ========================== .. rest_method:: PUT /v1.1/{project_id}/node-group-templates/{node_group_template_id} Updates a node group template. Normal respose codes:202 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - node_group_template_id: url_node_group_template_id Request Example --------------- .. 
literalinclude:: samples/node-group-templates/node-group-template-update-request.json :language: javascript sahara-12.0.0/api-ref/source/v1.1/data-sources.inc0000664000175000017500000000660613656752032021471 0ustar zuulzuul00000000000000.. -*- rst -*- ============ Data sources ============ A data source object defines the location of input or output for MapReduce jobs and might reference different types of storage. The Data Processing service does not validate data source locations. Show data source details ======================== .. rest_method:: GET /v1.1/{project_id}/data-sources/{data_source_id} Shows details for a data source. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - data_source_id: url_data_source_id Response Parameters ------------------- .. rest_parameters:: parameters.yaml - description: data_source_description - url: url - tenant_id: tenant_id - created_at: created_at - updated_at: updated_at - is_protected: object_is_protected - is_public: object_is_public - type: type - id: data_source_id - name: data_source_name Response Example ---------------- .. literalinclude:: samples/data-sources/data-source-show-response.json :language: javascript Delete data source ================== .. rest_method:: DELETE /v1.1/{project_id}/data-sources/{data_source_id} Deletes a data source. Normal response codes:204 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - data_source_id: url_data_source_id Update data source ================== .. rest_method:: PUT /v1.1/{project_id}/data-sources/{data_source_id} Updates a data source. Normal response codes:202 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - data_source_id: url_data_source_id Request Example --------------- .. literalinclude:: samples/data-sources/data-source-update-request.json :language: javascript List data sources ================= .. rest_method:: GET /v1.1/{project_id}/data-sources Lists all data sources. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - limit: limit - marker: marker - sort_by: sort_by_data_sources Response Parameters ------------------- .. rest_parameters:: parameters.yaml - markers: markers - prev: prev - next: next - description: data_source_description - url: url - tenant_id: tenant_id - created_at: created_at - updated_at: updated_at - is_protected: object_is_protected - is_public: object_is_public - type: type - id: data_source_id - name: data_source_name Response Example ---------------- .. rest_method:: GET /v1.1/{project_id}/data-sourses?sort_by=-name .. literalinclude:: samples/data-sources/data-sources-list-response.json :language: javascript Create data source ================== .. rest_method:: POST /v1.1/{project_id}/data-sources Creates a data source. Normal response codes:202 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id Request Example --------------- .. literalinclude:: samples/data-sources/data-source-register-hdfs-request.json :language: javascript Response Parameters ------------------- .. 
rest_parameters:: parameters.yaml - description: data_source_description - url: url - tenant_id: tenant_id - created_at: created_at - updated_at: updated_at - is_protected: object_is_protected - is_public: object_is_public - type: type - id: data_source_id - name: data_source_name sahara-12.0.0/api-ref/source/v1.1/plugins.inc0000664000175000017500000000543113656752032020553 0ustar zuulzuul00000000000000.. -*- rst -*- ======= Plugins ======= A plugin object defines the Hadoop or Spark version that it can install and which configurations can be set for the cluster. Show plugin details =================== .. rest_method:: GET /v1.1/{project_id}/plugins/{plugin_name} Shows details for a plugin. Normal response codes: 200 Error response codes: 400, 500 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - plugin_name: url_plugin_name Response Parameters ------------------- .. rest_parameters:: parameters.yaml - versions: versions - title: title - description: description_plugin - name: plugin_name Response Example ---------------- .. literalinclude:: samples/plugins/plugin-show-response.json :language: javascript List plugins ============ .. rest_method:: GET /v1.1/{project_id}/plugins Lists all registered plugins. Normal response codes: 200 Error response codes: 400, 500 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id Response Parameters ------------------- .. rest_parameters:: parameters.yaml - title: title - versions: versions - plugins: plugins - description: description_plugin - name: plugin_name Response Example ---------------- .. literalinclude:: samples/plugins/plugins-list-response.json :language: javascript Show plugin version details =========================== .. rest_method:: GET /v1.1/{project_id}/plugins/{plugin_name}/{version} Shows details for a plugin version. Normal response codes: 200 Error response codes: 400, 500 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - plugin_name: url_plugin_name - version: version Response Parameters ------------------- .. rest_parameters:: parameters.yaml - versions: versions - title: title - description: description_plugin - name: plugin_name Response Example ---------------- .. literalinclude:: samples/plugins/plugin-version-show-response.json :language: javascript Update plugin details ===================== .. rest_method:: PATCH /v1.1/{project_id}/plugins/{plugin_name} Updates details for a plugin. Normal response codes: 202 Error response codes: 400, 500 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - plugin_name: url_plugin_name Request Example --------------- .. literalinclude:: samples/plugins/plugin-update-request.json :language: javascript Response Parameters ------------------- .. rest_parameters:: parameters.yaml - title: title - versions: versions - description: description_plugin - name: plugin_name Response Example ---------------- .. literalinclude:: samples/plugins/plugin-update-response.json :language: javascript sahara-12.0.0/api-ref/source/v1.1/image-registry.inc0000664000175000017500000000753413656752032022030 0ustar zuulzuul00000000000000.. -*- rst -*- ============== Image registry ============== Use the image registry tool to manage images, add tags to and remove tags from images, and define the user name for an instance operating system. Each plugin lists required tags for an image. 
To run remote operations, the Data Processing service requires a user name with which to log in to the operating system for an instance. Add tags to image ================= .. rest_method:: POST /v1.1/{project_id}/images/{image_id}/tag Adds tags to an image. Normal response codes:202 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - tags: tags - image_id: url_image_id Request Example --------------- .. literalinclude:: samples/image-registry/image-tags-add-request.json :language: javascript Show image details ================== .. rest_method:: GET /v1.1/{project_id}/images/{image_id} Shows details for an image. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - image_id: url_image_id Response Parameters ------------------- .. rest_parameters:: parameters.yaml - status: status - username: username - updated: updated - description: image_description - created: created - image: image - tags: tags - minDisk: minDisk - name: image_name - progress: progress - minRam: minRam - id: image_id - metadata: metadata Response Example ---------------- .. literalinclude:: samples/image-registry/image-show-response.json :language: javascript Register image ============== .. rest_method:: POST /v1.1/{project_id}/images/{image_id} Registers an image in the registry. Normal response codes:202 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - username: username - description: image_description - image_id: url_image_id Request Example --------------- .. literalinclude:: samples/image-registry/image-register-request.json :language: javascript Response Parameters ------------------- .. rest_parameters:: parameters.yaml - status: status - username: username - updated: updated - description: image_description - created: created - image: image - tags: tags - minDisk: minDisk - name: image_name - progress: progress - minRam: minRam - id: image_id - metadata: metadata Unregister image ================ .. rest_method:: DELETE /v1.1/{project_id}/images/{image_id} Removes an image from the registry. Normal response codes:204 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - image_id: url_image_id Remove tags from image ====================== .. rest_method:: POST /v1.1/{project_id}/images/{image_id}/untag Removes tags from an image. Normal response codes:202 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - tags: tags - image_id: url_image_id Request Example --------------- .. literalinclude:: samples/image-registry/image-tags-delete-request.json :language: javascript List images =========== .. rest_method:: GET /v1.1/{project_id}/images Lists all images registered in the registry. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - tags: tags Response Parameters ------------------- .. rest_parameters:: parameters.yaml - status: status - username: username - updated: updated - description: image_description - created: created - image: image - tags: tags - minDisk: minDisk - name: image_name - images: images - progress: progress - minRam: minRam - id: image_id - metadata: metadata Response Example ---------------- .. 
literalinclude:: samples/image-registry/images-list-response.json :language: javascript sahara-12.0.0/api-ref/source/v1.1/parameters.yaml0000664000175000017500000005661413656752032021437 0ustar zuulzuul00000000000000# variables in header Content-Length: description: | The length of the data, in bytes. in: header required: true type: string # variables in path hints: description: | Includes configuration hints in the response. in: path required: false type: boolean job_binary_id: description: | The UUID of the job binary. in: path required: true type: string limit: description: | Maximum number of objects in response data. in: path required: false type: integer marker: description: | ID of the last element on the list which won't be in response. in: path required: false type: string plugin: description: | Filters the response by a plugin name. in: path required: false type: string sort_by_cluster_templates: description: | The field for sorting cluster templates. this parameter accepts the following values: ``name``, ``plugin_name``, ``hadoop_version``, ``created_at``, ``updated_at``, ``id``. Also this values can started with ``-`` prefix for descending sort. For example: ``-name``. in: path required: false type: string sort_by_clusters: description: | The field for sorting clusters. this parameter accepts the following values: ``name``, ``plugin_name``, ``hadoop_version``, ``status``, ``id``. Also this values can started with ``-`` prefix for descending sort. For example: ``-name``. in: path required: false type: string sort_by_data_sources: description: | The field for sorting data sources. this parameter accepts the following values: ``id``, ``name``, ``type``, ``created_at``, ``updated_at``. Also this values can started with ``-`` prefix for descending sort. For example: ``-name``. in: path required: false type: string sort_by_job_binary: description: | The field for sorting job binaries. this parameter accepts the following values: ``id``, ``name``, ``created_at``, ``updated_at``. Also this values can started with ``-`` prefix for descending sort. For example: ``-name``. in: path required: false type: string sort_by_job_binary_internals: description: | The field for sorting job binary internals. this parameter accepts the following values: ``id``, ``name``, ``created_at``, ``updated_at``. Also this values can started with ``-`` prefix for descending sort. For example: ``-name``. in: path required: false type: string sort_by_job_execution: description: | The field for sorting job executions. this parameter accepts the following values: ``id``, ``job_template``, ``cluster``, ``status``. Also this values can started with ``-`` prefix for descending sort. For example: ``-cluster``. in: path required: false type: string sort_by_jobs: description: | The field for sorting jobs. this parameter accepts the following values: ``id``, ``name``, ``type``, ``created_at``, ``updated_at``. Also this values can started with ``-`` prefix for descending sort. For example: ``-name``. in: path required: false type: string sort_by_node_group_templates: description: | The field for sorting node group templates. this parameter accepts the following values: ``name``, ``plugin_name``, ``hadoop_version``, ``created_at``, ``updated_at``, ``id``. Also this values can started with ``-`` prefix for descending sort. For example: ``-name``. in: path required: false type: string type_2: description: | Filters the response by a job type. 
in: path required: false type: string url_cluster_id: description: | The ID of the cluster in: path required: true type: string url_cluster_template_id: description: | The unique identifier of the cluster template. in: path required: true type: string url_data_source_id: description: | The UUID of the data source. in: path required: true type: string url_image_id: description: | The UUID of the image. in: path required: true type: string url_job_binary_id: description: | The UUID of the job binary. in: path required: true type: string url_job_binary_internals_id: description: | The UUID of the job binary internal. in: path required: true type: string url_job_binary_internals_name: description: | The name of the job binary internal. in: path required: true type: string url_job_execution_id: description: | The UUID of the job execution. in: path required: true type: string url_job_id: description: | The UUID of the job. in: path required: true type: string url_node_group_template_id: description: | The UUID of the node group template. in: path required: true type: string url_plugin_name: description: | Name of the plugin. in: path required: true type: string url_project_id: description: | UUID of the project. in: path required: true type: string version: description: | Filters the response by a plugin version. in: path required: true type: string version_1: description: | Version of the plugin. in: path required: false type: string # variables in body args: description: | The list of arguments. in: body required: true type: array auto_security_group: description: | If set to ``True``, the cluster group is automatically secured. in: body required: true type: boolean availability_zone: description: | The availability of the node in the cluster. in: body required: true type: string binaries: description: | The list of job binary internal objects. in: body required: true type: array cluster_configs: description: | A set of key and value pairs that contain the cluster configuration. in: body required: true type: object cluster_id: description: | The UUID of the cluster. in: body required: true type: string cluster_template_description: description: | Description of the cluster template in: body required: false type: string cluster_template_id: description: | The UUID of the cluster template. in: body required: true type: string cluster_template_name: description: | The name of the cluster template. in: body required: true type: string clusters: description: | The list of clusters. in: body required: true type: array configs: description: | The mappings of the job tasks. in: body required: true type: object count: description: | The number of nodes in the cluster. in: body required: true type: integer created: description: | The date and time when the image was created. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm For example, ``2015-08-27T09:49:58-05:00``. The ``±hh:mm`` value, if included, is the time zone as an offset from UTC. in: body required: true type: string created_at: description: | The date and time when the cluster was created. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. in: body required: true type: string created_at_1: description: | The date and time when the object was created. 
The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. in: body required: true type: string created_at_2: description: | The date and time when the node was created in the cluster. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. in: body required: true type: string created_at_3: description: | The date and time when the job execution object was created. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. in: body required: true type: string data_source_description: description: | The description of the data source object. in: body required: true type: string data_source_id: description: | The UUID of the data source. in: body required: true type: string data_source_name: description: | The name of the data source. in: body required: true type: string data_source_urls: description: | The data source URLs. in: body required: true type: object datasize: description: | The size of the data stored in the internal database. in: body required: true type: integer default_image_id: description: | The default ID of the image. in: body required: true type: string description: description: | The description of the cluster. in: body required: true type: string description_3: description: | The description of the node in the cluster. in: body required: true type: string description_7: description: | Description of the image. in: body required: false type: string description_plugin: description: | The full description of the plugin. in: body required: true type: string domain_name: description: | Domain name for internal and external hostname resolution. Required if DNS service is enabled. in: body required: false type: string end_time: description: | The end date and time of the job execution. The date and time when the job completed execution. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. in: body required: true type: string flavor_id: description: | The ID of the flavor. in: body required: true type: string floating_ip_pool: description: | The UUID of the pool in the template. in: body required: true type: string hadoop_version: description: | The version of the Hadoop used in the cluster. in: body required: true type: string hadoop_version_1: description: | The version of the Hadoop. in: body required: true type: string id: description: | The UUID of the cluster. in: body required: true type: string id_1: description: | The ID of the object. in: body required: true type: string image: description: | A set of key and value pairs that contain image properties. in: body required: true type: object image_description: description: | The description of the image. in: body required: true type: string image_id: description: | The UUID of the image. in: body required: true type: string image_name: description: | The name of the operating system image. in: body required: true type: string images: description: | The list of images and their properties. 
in: body required: true type: array info: description: | A set of key and value pairs that contain cluster information. in: body required: true type: object info_1: description: | The report of the executed job objects. in: body required: true type: object input_id: description: | The UUID of the input. in: body required: true type: string interface: description: | The interfaces of the job object. in: body required: true type: array is_default: description: | If set to ``true``, the cluster is the default cluster. in: body required: true type: boolean is_protected: description: | If set to ``true``, the cluster is protected. in: body required: true type: boolean is_protected_2: description: | If set to ``true``, the node is protected. in: body required: true type: boolean is_protected_3: description: | If set to ``true``, the job execution object is protected. in: body required: true type: boolean is_proxy_gateway: description: | If set to ``true``, the node is the proxy gateway. in: body required: true type: boolean is_public: description: | If set to ``true``, the cluster is public. in: body required: true type: boolean is_transient: description: | If set to ``true``, the cluster is transient. in: body required: true type: boolean job_binary_description: description: | The description of the job binary object. in: body required: true type: string job_binary_internals_id: description: | The UUID of the job binary internal. in: body required: true type: string job_binary_internals_name: description: | The name of the job binary internal. in: body required: true type: string job_binary_name: description: | The name of the object. in: body required: true type: string job_description: description: | The description of the job object. in: body required: true type: string job_execution: description: | A set of key and value pairs that contain the job object. in: body required: true type: object job_execution_id: description: | The UUID of the job execution object. in: body required: true type: string job_execution_is_public: description: | If set to ``true``, the job execution object is public. in: body required: true type: boolean job_executions: description: | The list of job execution objects. in: body required: true type: array job_id: description: | The UUID of the job object. in: body required: true type: string job_name: description: | The name of the job object. in: body required: true type: string job_types: description: | The list of plugins and their job types. in: body required: true type: array jobs: description: | The list of the jobs. in: body required: true type: array libs: description: | The list of the job object properties. in: body required: true type: array mains: description: | The list of the job object and their properties. in: body required: true type: array management_public_key: description: | The SSH key for the management network. in: body required: true type: string markers: description: | The markers of previous and following pages of data. This field exists only if ``limit`` is passed to request. in: body required: false type: object metadata: description: | A set of key and value pairs that contain image metadata. in: body required: true type: object minDisk: description: | The minimum disk space, in GB. in: body required: true type: integer minRam: description: | The minimum amount of random access memory (RAM) for the image, in GB. in: body required: true type: integer name: description: | The name of the cluster. 
in: body required: true type: string name_1: description: | The name of the object. in: body required: true type: string neutron_management_network: description: | The UUID of the neutron management network. in: body required: true type: string next: description: | The marker of next page of list data. in: body required: false type: string node_configs: description: | A set of key and value pairs that contain the node configuration in the cluster. in: body required: true type: object node_group_template_description: description: | Description of the node group template in: body required: false type: string node_group_template_id: description: | The UUID of the node group template. in: body required: true type: string node_group_template_name: description: | The name of the node group template. in: body required: true type: string node_groups: description: | The detail properties of the node in key-value pairs. in: body required: true type: object node_processes: description: | The list of the processes performed by the node. in: body required: true type: array object_is_protected: description: | If set to ``true``, the object is protected. in: body required: true type: boolean object_is_public: description: | If set to ``true``, the object is public. in: body required: true type: boolean object_shares: description: | The sharing of resources in the cluster. in: body required: true type: string oozie_job_id: description: | The UUID of the ``oozie_job``. in: body required: true type: string output_id: description: | The UUID of the output of job execution object. in: body required: true type: string params: description: | The mappings of values to the parameters. in: body required: true type: object plugin_name: description: | The name of the plugin. in: body required: true type: string plugins: description: | The list of plugins. in: body required: true type: array prev: description: | The marker of previous page. May be ``null`` if previous page is first or if current page is first. in: body required: false type: string progress: description: | A progress indicator, as a percentage value, for the amount of image content that has been processed. in: body required: true type: integer project_id: description: | The UUID of the project. in: body required: true type: string provision_progress: description: | A list of the cluster progresses. in: body required: true type: array return_code: description: | The code returned after job has executed. in: body required: true type: string security_groups: description: | The security groups of the node. in: body required: true type: string shares: description: | The shares of the cluster. in: body required: true type: string start_time: description: | The date and time when the job started. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. in: body required: true type: string status: description: | The status of the cluster. in: body required: true type: string status_1: description: | The current status of the image. in: body required: true type: string status_description: description: | The description of the cluster status. in: body required: true type: string tags: description: | List of tags to add. in: body required: true type: array tags_1: description: | Lists images only with specific tag. Can be used multiple times. 
in: body required: false type: string tags_2: description: | One or more image tags. in: body required: true type: array tags_3: description: | List of tags to remove. in: body required: true type: array tenant_id: description: | The UUID of the tenant. in: body required: true type: string title: description: | The title of the plugin. in: body required: true type: string trust_id: description: | The id of the trust. in: body required: true type: integer type: description: | The type of the data source object. in: body required: true type: string type_1: description: | The type of the job object. in: body required: true type: string updated: description: | The date and time when the image was updated. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm For example, ``2015-08-27T09:49:58-05:00``. The ``±hh:mm`` value, if included, is the time zone as an offset from UTC. in: body required: true type: string updated_at: description: | The date and time when the cluster was updated. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. in: body required: true type: string updated_at_1: description: | The date and time when the object was updated. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. in: body required: true type: string updated_at_2: description: | The date and time when the node was updated. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. in: body required: true type: string updated_at_3: description: | The date and time when the job execution object was updated. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. in: body required: true type: string url: description: | The url of the data source object. in: body required: true type: string url_1: description: | The url of the job binary object. in: body required: true type: string use_autoconfig: description: | If set to ``true``, the cluster is auto configured. in: body required: true type: boolean use_autoconfig_1: description: | If set to ``true``, the node is auto configured. in: body required: true type: boolean username: description: | The name of the user for the image. in: body required: true type: string username_1: description: | The user name to log in to an instance operating system for remote operations execution. in: body required: true type: string versions: description: | The list of plugin versions. in: body required: true type: array volume_local_to_instance: description: | If set to ``true``, the volume is local to the instance. in: body required: true type: boolean volume_mount_prefix: description: | The mount point of the node. in: body required: true type: string volume_type: description: | The type of volume in a node. in: body required: true type: string volumes_availability_zone: description: | The availability zone of the volumes. in: body required: true type: string volumes_per_node: description: | The number of volumes for the node. 
in: body required: true type: integer volumes_size: description: | The size of the volumes in a node. in: body required: true type: integer sahara-12.0.0/api-ref/source/v1.1/job-types.inc0000664000175000017500000000207013656752032021002 0ustar zuulzuul00000000000000.. -*- rst -*- ========= Job types ========= Each plugin that supports EDP also supports specific job types. Different versions of a plugin might actually support different job types. Configuration options vary by plugin, version, and job type. The job types provide information about which plugins support which job types and how to configure the job types. List job types ============== .. rest_method:: GET /v1.1/{project_id}/job-types Lists all job types. You can use query parameters to filter the response. Normal response codes: 200 Error response codes: Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - plugin: plugin - version: version - type: type - hints: hints Response Parameters ------------------- .. rest_parameters:: parameters.yaml - versions: versions - title: title - description: description_plugin - job_types: job_types - name: plugin_name Response Example ---------------- .. literalinclude:: samples/job-types/job-types-list-response.json :language: javascript sahara-12.0.0/api-ref/source/v1.1/cluster-templates.inc0000664000175000017500000001202213656752032022541 0ustar zuulzuul00000000000000.. -*- rst -*- ================= Cluster templates ================= A cluster template configures a Hadoop cluster. A cluster template lists node groups with the number of instances in each group. You can also define cluster-scoped configurations in a cluster template. Show cluster template details ============================= .. rest_method:: GET /v1.1/{project_id}/cluster-templates/{cluster_template_id} Shows details for a cluster template. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - cluster_template_id: url_cluster_template_id Response Parameters ------------------- .. rest_parameters:: parameters.yaml - description: cluster_template_description - use_autoconfig: use_autoconfig - cluster_configs: cluster_configs - created_at: created_at - default_image_id: default_image_id - updated_at: updated_at - plugin_name: plugin_name - is_default: is_default - is_protected: object_is_protected - shares: object_shares - domain_name: domain_name - tenant_id: tenant_id - node_groups: node_groups - is_public: object_is_public - hadoop_version: hadoop_version - id: cluster_template_id - name: cluster_template_name Response Example ---------------- .. literalinclude:: samples/cluster-templates/cluster-templates-list-response.json :language: javascript Update cluster templates ======================== .. rest_method:: PUT /v1.1/{project_id}/cluster-templates/{cluster_template_id} Updates a cluster template. Normal response codes:202 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - cluster_template_id: cluster_template_id Request Example --------------- .. literalinclude:: samples/cluster-templates/cluster-template-update-request.json :language: javascript Response Parameters ------------------- .. 
rest_parameters:: parameters.yaml - description: cluster_template_description - use_autoconfig: use_autoconfig - cluster_configs: cluster_configs - created_at: created_at - default_image_id: default_image_id - updated_at: updated_at - plugin_name: plugin_name - is_default: is_default - is_protected: object_is_protected - shares: object_shares - domain_name: domain_name - tenant_id: tenant_id - node_groups: node_groups - is_public: object_is_public - hadoop_version: hadoop_version - id: cluster_template_id - name: cluster_template_name Delete cluster template ======================= .. rest_method:: DELETE /v1.1/{project_id}/cluster-templates/{cluster_template_id} Deletes a cluster template. Normal response codes:204 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - cluster_template_id: cluster_template_id List cluster templates ====================== .. rest_method:: GET /v1.1/{project_id}/cluster-templates Lists available cluster templates. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - limit: limit - marker: marker - sort_by: sort_by_cluster_templates Response Parameters ------------------- .. rest_parameters:: parameters.yaml - markers: markers - prev: prev - next: next - description: cluster_template_description - use_autoconfig: use_autoconfig - cluster_configs: cluster_configs - created_at: created_at - default_image_id: default_image_id - updated_at: updated_at - plugin_name: plugin_name - is_default: is_default - is_protected: object_is_protected - shares: object_shares - domain_name: domain_name - tenant_id: tenant_id - node_groups: node_groups - is_public: object_is_public - hadoop_version: hadoop_version - id: cluster_template_id - name: cluster_template_name Response Example ---------------- .. rest_method:: GET /v1.1/{project_id}/cluster-templates?limit=2 .. literalinclude:: samples/cluster-templates/cluster-templates-list-response.json :language: javascript Create cluster templates ======================== .. rest_method:: POST /v1.1/{project_id}/cluster-templates Creates a cluster template. Normal response codes:202 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id Request Example --------------- .. literalinclude:: samples/cluster-templates/cluster-template-create-request.json :language: javascript Response Parameters ------------------- .. rest_parameters:: parameters.yaml - description: cluster_template_description - use_autoconfig: use_autoconfig - cluster_configs: cluster_configs - created_at: created_at - default_image_id: default_image_id - updated_at: updated_at - plugin_name: plugin_name - is_default: is_default - is_protected: object_is_protected - shares: object_shares - domain_name: domain_name - tenant_id: tenant_id - node_groups: node_groups - is_public: object_is_public - hadoop_version: hadoop_version - id: cluster_template_id - name: cluster_template_name sahara-12.0.0/api-ref/source/v1.1/job-binaries.inc0000664000175000017500000001031713656752032021435 0ustar zuulzuul00000000000000.. -*- rst -*- ============ Job binaries ============ Job binary objects represent data processing applications and libraries that are stored in either the internal database or the Object Storage service. List job binaries ================= .. rest_method:: GET /v1.1/{project_id}/job-binaries Lists the available job binaries. Normal response codes: 200 Request ------- .. 
rest_parameters:: parameters.yaml - project_id: url_project_id - limit: limit - marker: marker - sort_by: sort_by_job_binary Response Parameters ------------------- .. rest_parameters:: parameters.yaml - markers: markers - prev: prev - next: next - description: job_binary_description - url: url - tenant_id: tenant_id - created_at: created_at - updated_at: updated_at - is_protected: object_is_protected - is_public: object_is_public - binaries: binaries - id: job_binary_id - name: job_binary_name Response Example ---------------- .. rest_method:: GET /v1.1/{project_id}/job-binaries?sort_by=created_at .. literalinclude:: samples/job-binaries/list-response.json :language: javascript Create job binary ================= .. rest_method:: POST /v1.1/{project_id}/job-binaries Creates a job binary. Normal response codes:202 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id Request Example --------------- .. literalinclude:: samples/job-binaries/create-request.json :language: javascript Response Parameters ------------------- .. rest_parameters:: parameters.yaml - description: job_binary_description - url: url - tenant_id: tenant_id - created_at: created_at - updated_at: updated_at - is_protected: object_is_protected - is_public: object_is_public - id: job_binary_id - name: job_binary_name Show job binary details ======================= .. rest_method:: GET /v1.1/{project_id}/job-binaries/{job_binary_id} Shows details for a job binary. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - job_binary_id: url_job_binary_id Response Parameters ------------------- .. rest_parameters:: parameters.yaml - description: job_binary_description - url: url - tenant_id: tenant_id - created_at: created_at - updated_at: updated_at - is_protected: object_is_protected - is_public: object_is_public - id: job_binary_id - name: job_binary_name Response Example ---------------- .. literalinclude:: samples/job-binaries/show-response.json :language: javascript Delete job binary ================= .. rest_method:: DELETE /v1.1/{project_id}/job-binaries/{job_binary_id} Deletes a job binary. Normal response codes:204 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - job_binary_id: url_job_binary_id Update job binary ================= .. rest_method:: PUT /v1.1/{project_id}/job-binaries/{job_binary_id} Updates a job binary. Normal response codes:202 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - job_binary_id: url_job_binary_id Request Example --------------- .. literalinclude:: samples/job-binaries/update-request.json :language: javascript Show job binary data ==================== .. rest_method:: GET /v1.1/{project_id}/job-binaries/{job_binary_id}/data Shows data for a job binary. The response body shows the job binary raw data and the response headers show the data length. Example response: :: HTTP/1.1 200 OK Connection: keep-alive Content-Length: 161 Content-Type: text/html; charset=utf-8 Date: Sat, 28 Mar 2016 02:42:48 GMT A = load '$INPUT' using PigStorage(':') as (fruit: chararray); B = foreach A generate com.hadoopbook.pig.Trim(fruit); store B into '$OUTPUT' USING PigStorage(); Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - job_binary_id: url_job_binary_id Response Parameters ------------------- .. rest_parameters:: parameters.yaml - Content-Length: Content-Length Response Example ---------------- .. 
literalinclude:: samples/job-binaries/show-data-response :language: text sahara-12.0.0/api-ref/source/v1.1/job-binary-internals.inc0000664000175000017500000001126413656752032023124 0ustar zuulzuul00000000000000.. -*- rst -*- ==================== Job binary internals ==================== Job binary internal objects represent data processing applications and libraries that are stored in the internal database. Create job binary internal ========================== .. rest_method:: PUT /v1.1/{project_id}/job-binary-internals/{name} Creates a job binary internal. Job binary internals are objects that represent data processing applications and libraries that are stored in the internal database. Specify the file contents (raw data or script text) in the request body. Specify the file name in the URI. Normal response codes:202 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - name: url_job_binary_internals_name Response Parameters ------------------- .. rest_parameters:: parameters.yaml - name: job_binary_internals_name - tenant_id: tenant_id - created_at: created_at - updated_at: updated_at - is_protected: object_is_protected - is_public: object_is_public - datasize: datasize - id: job_binary_internals_id Show job binary internal data ============================= .. rest_method:: GET /v1.1/{project_id}/job-binary-internals/{job_binary_internals_id}/data Shows data for a job binary internal. The response body shows the job binary raw data and the response headers show the data length. Example response: :: HTTP/1.1 200 OK Connection: keep-alive Content-Length: 161 Content-Type: text/html; charset=utf-8 Date: Sat, 28 Mar 2016 02:21:13 GMT A = load '$INPUT' using PigStorage(':') as (fruit: chararray); B = foreach A generate com.hadoopbook.pig.Trim(fruit); store B into '$OUTPUT' USING PigStorage(); Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - job_binary_internals_id: url_job_binary_internals_id Response Parameters ------------------- .. rest_parameters:: parameters.yaml - Content-Length: Content-Length Response Example ---------------- .. literalinclude:: samples/job-binary-internals/show-data-response :language: text Show job binary internal details ================================ .. rest_method:: GET /v1.1/{project_id}/job-binary-internals/{job_binary_internals_id} Shows details for a job binary internal. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - job_binary_internals_id: url_job_binary_internals_id Response Parameters ------------------- .. rest_parameters:: parameters.yaml - name: job_binary_internals_name - tenant_id: tenant_id - created_at: created_at - updated_at: updated_at - is_protected: object_is_protected - is_public: object_is_public - datasize: datasize - id: job_binary_internals_id Response Example ---------------- .. literalinclude:: samples/job-binary-internals/show-response.json :language: javascript Delete job binary internal ========================== .. rest_method:: DELETE /v1.1/{project_id}/job-binary-internals/{job_binary_internals_id} Deletes a job binary internal. Normal response codes:204 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - job_binary_internals_id: url_job_binary_internals_id Update job binary internal ========================== .. rest_method:: PATCH /v1.1/{project_id}/job-binary-internals/{job_binary_internals_id} Updates a job binary internal. 
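As an illustration only (this sketch is not part of the upstream api-ref), the following minimal Python snippet shows one way such an update request might be issued with the third-party ``requests`` library. The endpoint URL, authentication token, project ID, and job binary internal ID are placeholder assumptions; the request body reuses the fields from the sample request referenced below.

.. sourcecode:: python

    # Hypothetical client-side sketch; the endpoint, token and IDs are
    # placeholders -- substitute values from your own deployment.
    import json

    import requests

    SAHARA_ENDPOINT = "http://sahara.example.com:8386"   # assumed API endpoint
    PROJECT_ID = "11587919cc534bcbb1027a161c82cf58"      # example project id
    JBI_ID = "4833dc4b-8682-4d5b-8a9f-2036b47a0996"      # example job binary internal id
    TOKEN = "<keystone-auth-token>"                      # token from the Identity service

    url = "{}/v1.1/{}/job-binary-internals/{}".format(
        SAHARA_ENDPOINT, PROJECT_ID, JBI_ID)
    body = {"name": "public-jbi", "is_public": True}     # same fields as the sample request

    resp = requests.patch(
        url,
        headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
        data=json.dumps(body),
    )
    print(resp.status_code)                              # 202 is expected on success

On success the call is expected to return the 202 status code documented below.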
Normal response codes:202 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - job_binary_internals_id: url_job_binary_internals_id Request Example --------------- .. literalinclude:: samples/job-binary-internals/update-request.json :language: javascript List job binary internals ========================= .. rest_method:: GET /v1.1/{project_id}/job-binary-internals Lists the available job binary internals. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - limit: limit - marker: marker - sort_by: sort_by_job_binary_internals Response Parameters ------------------- .. rest_parameters:: parameters.yaml - markers: markers - prev: prev - next: next - binaries: binaries - name: job_binary_internals_name - tenant_id: tenant_id - created_at: created_at - updated_at: updated_at - is_protected: object_is_protected - is_public: object_is_public - datasize: datasize - id: job_binary_internals_id Response Example ---------------- .. rest_method:: GET /v1.1/{project_id}/job-binary-internals .. literalinclude:: samples/job-binary-internals/list-response.json :language: javascript sahara-12.0.0/api-ref/source/v1.1/clusters.inc0000664000175000017500000001303013656752032020730 0ustar zuulzuul00000000000000.. -*- rst -*- ======== Clusters ======== A cluster is a group of nodes with the same configuration. List available clusters ======================= .. rest_method:: GET /v1.1/{project_id}/clusters Lists available clusters. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - limit: limit - marker: marker - sort_by: sort_by_clusters Response Parameters ------------------- .. rest_parameters:: parameters.yaml - markers: markers - prev: prev - next: next - count: count - info: info - cluster_template_id: cluster_template_id - is_transient: is_transient - provision_progress: provision_progress - status: status - neutron_management_network: neutron_management_network - clusters: clusters - management_public_key: management_public_key - status_description: status_description - trust_id: trust_id - domain_name: domain_name Response Example ---------------- .. rest_method:: GET /v1.1/{project_id}/clusters .. literalinclude:: samples/clusters/clusters-list-response.json :language: javascript Create cluster ============== .. rest_method:: POST /v1.1/{project_id}/clusters Creates a cluster. Normal response codes:202 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id Request Example --------------- .. literalinclude:: samples/clusters/cluster-create-request.json :language: javascript Response Parameters ------------------- .. rest_parameters:: parameters.yaml - count: count - info: info - cluster_template_id: cluster_template_id - is_transient: is_transient - provision_progress: provision_progress - status: status - neutron_management_network: neutron_management_network - management_public_key: management_public_key - status_description: status_description - trust_id: trust_id - domain_name: domain_name Create multiple clusters ======================== .. rest_method:: POST /v1.1/{project_id}/clusters/multiple Creates multiple clusters. Normal response codes:202 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id Request Example --------------- .. literalinclude:: samples/clusters/multiple-clusters-create-request.json :language: javascript Show details of a cluster ========================= .. 
rest_method:: GET /v1.1/{project_id}/clusters/{cluster_id} Shows details for a cluster, by ID. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - cluster_id: url_cluster_id Response Parameters ------------------- .. rest_parameters:: parameters.yaml - count: count - info: info - cluster_template_id: cluster_template_id - is_transient: is_transient - provision_progress: provision_progress - status: status - neutron_management_network: neutron_management_network - management_public_key: management_public_key - status_description: status_description - trust_id: trust_id - domain_name: domain_name Response Example ---------------- .. literalinclude:: samples/clusters/cluster-show-response.json :language: javascript Delete a cluster ================ .. rest_method:: DELETE /v1.1/{project_id}/clusters/{cluster_id} Deletes a cluster. Normal response codes:204 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - cluster_id: url_cluster_id Scale cluster ============= .. rest_method:: PUT /v1.1/{project_id}/clusters/{cluster_id} Scales a cluster. Normal response codes:202 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - cluster_id: cluster_id Request Example --------------- .. literalinclude:: samples/clusters/cluster-scale-request.json :language: javascript Response Parameters ------------------- .. rest_parameters:: parameters.yaml - count: count - info: info - cluster_template_id: cluster_template_id - is_transient: is_transient - provision_progress: provision_progress - status: status - neutron_management_network: neutron_management_network - management_public_key: management_public_key - status_description: status_description - trust_id: trust_id - domain_name: domain_name Update cluster ============== .. rest_method:: PATCH /v1.1/{project_id}/clusters/{cluster_id} Updates a cluster. Normal response codes:202 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - cluster_id: url_cluster_id Request Example --------------- .. literalinclude:: samples/clusters/cluster-update-request.json :language: javascript Response Parameters ------------------- .. rest_parameters:: parameters.yaml - count: count - info: info - cluster_template_id: cluster_template_id - is_transient: is_transient - provision_progress: provision_progress - status: status - neutron_management_network: neutron_management_network - management_public_key: management_public_key - status_description: status_description - trust_id: trust_id - domain_name: domain_name Show progress ============= .. rest_method:: GET /v1.1/{project_id}/clusters/{cluster_id} Shows provisioning progress for a cluster. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - cluster_id: url_cluster_id Response Example ---------------- .. 
literalinclude:: samples/event-log/cluster-progress-response.json :language: javascript sahara-12.0.0/api-ref/source/v1.1/samples/0000775000175000017500000000000013656752227020046 5ustar zuulzuul00000000000000sahara-12.0.0/api-ref/source/v1.1/samples/event-log/0000775000175000017500000000000013656752227021746 5ustar zuulzuul00000000000000sahara-12.0.0/api-ref/source/v1.1/samples/event-log/cluster-progress-response.json0000664000175000017500000000612713656752032030020 0ustar zuulzuul00000000000000{ "status": "Error", "neutron_management_network": "7e31648b-4b2e-4f32-9b0a-113581c27076", "is_transient": false, "description": "", "user_keypair_id": "vgridnev", "updated_at": "2015-03-31 14:10:59", "plugin_name": "spark", "provision_progress": [ { "successful": false, "tenant_id": "9cd1314a0a31493282b6712b76a8fcda", "created_at": "2015-03-31 14:10:20", "step_type": "Engine: create cluster", "updated_at": "2015-03-31 14:10:35", "events": [ { "instance_name": "sample-worker-spark-004", "successful": false, "created_at": "2015-03-31 14:10:35", "updated_at": null, "event_info": "Node sample-worker-spark-004 has error status\nError ID: 3e238c82-d1f5-4560-8ed8-691e923e16a0", "instance_id": "b5ba5ba8-e9c1-47f7-9355-3ce0ec0e449d", "node_group_id": "145cf2fb-dcdf-42af-a4b9-a4047d2919d4", "step_id": "3f243c67-2c27-47c7-a0c0-0834ad17f8b6", "id": "34afcfc7-bdb0-43cb-b142-283d560dc6ad" }, { "instance_name": "sample-worker-spark-001", "successful": true, "created_at": "2015-03-31 14:10:35", "updated_at": null, "event_info": null, "instance_id": "c532ab71-38da-475a-95f8-f8eb93b8f1c2", "node_group_id": "145cf2fb-dcdf-42af-a4b9-a4047d2919d4", "step_id": "3f243c67-2c27-47c7-a0c0-0834ad17f8b6", "id": "4ba50414-5216-4161-bc7a-12716122b99d" } ], "cluster_id": "c26ec982-ba6b-4d75-818c-a50240164af0", "step_name": "Wait for instances to become active", "total": 5, "id": "3f243c67-2c27-47c7-a0c0-0834ad17f8b6" }, { "successful": true, "tenant_id": "9cd1314a0a31493282b6712b76a8fcda", "created_at": "2015-03-31 14:10:12", "step_type": "Engine: create cluster", "updated_at": "2015-03-31 14:10:19", "events": [], "cluster_id": "c26ec982-ba6b-4d75-818c-a50240164af0", "step_name": "Run instances", "total": 5, "id": "407ba50a-c799-46af-9dfb-6aa5f6ade426" } ], "anti_affinity": [], "node_groups": [], "management_public_key": "Sahara", "status_description": "Creating cluster failed for the following reason(s): Node sample-worker-spark-004 has error status\nError ID: 3e238c82-d1f5-4560-8ed8-691e923e16a0", "hadoop_version": "1.0.0", "id": "c26ec982-ba6b-4d75-1f8c-a50240164af0", "trust_id": null, "info": {}, "cluster_template_id": "5a9a09a3-9349-43bd-9058-16c401fad2d5", "name": "sample", "cluster_configs": {}, "created_at": "2015-03-31 14:10:07", "default_image_id": "e6a6c5da-67be-4017-a7d2-81f466efe67e", "tenant_id": "9cd1314a0a31493282b6712b76a8fcda" } sahara-12.0.0/api-ref/source/v1.1/samples/job-types/0000775000175000017500000000000013656752227021762 5ustar zuulzuul00000000000000sahara-12.0.0/api-ref/source/v1.1/samples/job-types/job-types-list-response.json0000664000175000017500000002117413656752032027375 0ustar zuulzuul00000000000000{ "job_types": [ { "plugins": [ { "description": "The Apache Vanilla plugin provides the ability to launch upstream Vanilla Apache Hadoop cluster without any management consoles. 
It can also deploy the Oozie component.", "versions": { "1.2.1": {}, "2.6.0": {} }, "title": "Vanilla Apache Hadoop", "name": "vanilla" }, { "description": "The Hortonworks Sahara plugin automates the deployment of the Hortonworks Data Platform (HDP) on OpenStack.", "versions": { "1.3.2": {}, "2.0.6": {} }, "title": "Hortonworks Data Platform", "name": "hdp" }, { "description": "The Cloudera Sahara plugin provides the ability to launch the Cloudera distribution of Apache Hadoop (CDH) with Cloudera Manager management console.", "versions": { "5": {}, "5.3.0": {} }, "title": "Cloudera Plugin", "name": "cdh" } ], "name": "Hive" }, { "plugins": [ { "description": "The Apache Vanilla plugin provides the ability to launch upstream Vanilla Apache Hadoop cluster without any management consoles. It can also deploy the Oozie component.", "versions": { "1.2.1": {}, "2.6.0": {} }, "title": "Vanilla Apache Hadoop", "name": "vanilla" }, { "description": "The Hortonworks Sahara plugin automates the deployment of the Hortonworks Data Platform (HDP) on OpenStack.", "versions": { "1.3.2": {}, "2.0.6": {} }, "title": "Hortonworks Data Platform", "name": "hdp" }, { "description": "The Cloudera Sahara plugin provides the ability to launch the Cloudera distribution of Apache Hadoop (CDH) with Cloudera Manager management console.", "versions": { "5": {}, "5.3.0": {} }, "title": "Cloudera Plugin", "name": "cdh" } ], "name": "Java" }, { "plugins": [ { "description": "The Apache Vanilla plugin provides the ability to launch upstream Vanilla Apache Hadoop cluster without any management consoles. It can also deploy the Oozie component.", "versions": { "1.2.1": {}, "2.6.0": {} }, "title": "Vanilla Apache Hadoop", "name": "vanilla" }, { "description": "The Hortonworks Sahara plugin automates the deployment of the Hortonworks Data Platform (HDP) on OpenStack.", "versions": { "1.3.2": {}, "2.0.6": {} }, "title": "Hortonworks Data Platform", "name": "hdp" }, { "description": "The Cloudera Sahara plugin provides the ability to launch the Cloudera distribution of Apache Hadoop (CDH) with Cloudera Manager management console.", "versions": { "5": {}, "5.3.0": {} }, "title": "Cloudera Plugin", "name": "cdh" } ], "name": "MapReduce" }, { "plugins": [ { "description": "The Apache Vanilla plugin provides the ability to launch upstream Vanilla Apache Hadoop cluster without any management consoles. It can also deploy the Oozie component.", "versions": { "1.2.1": {}, "2.6.0": {} }, "title": "Vanilla Apache Hadoop", "name": "vanilla" }, { "description": "The Hortonworks Sahara plugin automates the deployment of the Hortonworks Data Platform (HDP) on OpenStack.", "versions": { "1.3.2": {}, "2.0.6": {} }, "title": "Hortonworks Data Platform", "name": "hdp" }, { "description": "The Cloudera Sahara plugin provides the ability to launch the Cloudera distribution of Apache Hadoop (CDH) with Cloudera Manager management console.", "versions": { "5": {}, "5.3.0": {} }, "title": "Cloudera Plugin", "name": "cdh" } ], "name": "MapReduce.Streaming" }, { "plugins": [ { "description": "The Apache Vanilla plugin provides the ability to launch upstream Vanilla Apache Hadoop cluster without any management consoles. 
It can also deploy the Oozie component.", "versions": { "1.2.1": {}, "2.6.0": {} }, "title": "Vanilla Apache Hadoop", "name": "vanilla" }, { "description": "The Hortonworks Sahara plugin automates the deployment of the Hortonworks Data Platform (HDP) on OpenStack.", "versions": { "1.3.2": {}, "2.0.6": {} }, "title": "Hortonworks Data Platform", "name": "hdp" }, { "description": "The Cloudera Sahara plugin provides the ability to launch the Cloudera distribution of Apache Hadoop (CDH) with Cloudera Manager management console.", "versions": { "5": {}, "5.3.0": {} }, "title": "Cloudera Plugin", "name": "cdh" } ], "name": "Pig" }, { "plugins": [ { "description": "The Apache Vanilla plugin provides the ability to launch upstream Vanilla Apache Hadoop cluster without any management consoles. It can also deploy the Oozie component.", "versions": { "1.2.1": {}, "2.6.0": {} }, "title": "Vanilla Apache Hadoop", "name": "vanilla" }, { "description": "The Hortonworks Sahara plugin automates the deployment of the Hortonworks Data Platform (HDP) on OpenStack.", "versions": { "1.3.2": {}, "2.0.6": {} }, "title": "Hortonworks Data Platform", "name": "hdp" }, { "description": "The Cloudera Sahara plugin provides the ability to launch the Cloudera distribution of Apache Hadoop (CDH) with Cloudera Manager management console.", "versions": { "5": {}, "5.3.0": {} }, "title": "Cloudera Plugin", "name": "cdh" } ], "name": "Shell" }, { "plugins": [ { "description": "This plugin provides an ability to launch Spark on Hadoop CDH cluster without any management consoles.", "versions": { "1.0.0": {} }, "title": "Apache Spark", "name": "spark" } ], "name": "Spark" } ] } sahara-12.0.0/api-ref/source/v1.1/samples/cluster-templates/0000775000175000017500000000000013656752227023523 5ustar zuulzuul00000000000000sahara-12.0.0/api-ref/source/v1.1/samples/cluster-templates/cluster-template-update-request.json0000664000175000017500000000034313656752032032650 0ustar zuulzuul00000000000000{ "description": "Updated template", "plugin_name": "vanilla", "hadoop_version": "2.7.1", "name": "vanilla-updated", "cluster_configs": { "HDFS": { "dfs.replication": 2 } } } sahara-12.0.0/api-ref/source/v1.1/samples/cluster-templates/cluster-template-create-request.json0000664000175000017500000000065313656752032032635 0ustar zuulzuul00000000000000{ "plugin_name": "vanilla", "hadoop_version": "2.7.1", "node_groups": [ { "name": "worker", "count": 3, "node_group_template_id": "846edb31-add5-46e6-a4ee-a4c339f99251" }, { "name": "master", "count": 1, "node_group_template_id": "0bb9f1a4-0c44-4dc5-9452-6741c62ed9ae" } ], "name": "cluster-template" } sahara-12.0.0/api-ref/source/v1.1/samples/cluster-templates/cluster-template-update-response.json0000664000175000017500000000437013656752032033022 0ustar zuulzuul00000000000000{ "cluster_template": { "is_public": false, "anti_affinity": [], "name": "vanilla-updated", "created_at": "2015-08-21T08:41:24", "tenant_id": "808d5032ea0446889097723bfc8e919d", "cluster_configs": { "HDFS": { "dfs.replication": 2 } }, "shares": null, "id": "84d47e85-6094-473f-bf6d-5a7e6e86564e", "default_image_id": null, "is_default": false, "updated_at": "2015-09-14T10:45:57", "plugin_name": "vanilla", "node_groups": [ { "image_id": null, "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": { "YARN": {}, "JobFlow": {}, "MapReduce": {}, "Hive": {}, "Hadoop": {}, "HDFS": {} }, "auto_security_group": true, "availability_zone": "", "count": 1, "flavor_id": "3", "id": 
"57b966ab-617e-4735-bf60-0cb991208a52", "security_groups": [], "use_autoconfig": true, "volumes_availability_zone": null, "created_at": "2015-08-21T08:41:24", "node_group_template_id": "a5533187-3f14-42c3-ba3a-196c13fe0fb5", "updated_at": null, "volumes_per_node": 0, "is_proxy_gateway": false, "name": "all", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "namenode", "datanode", "historyserver", "resourcemanager", "nodemanager", "oozie" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null } ], "neutron_management_network": null, "domain_name": null, "hadoop_version": "2.7.1", "use_autoconfig": true, "description": "Updated template", "is_protected": false } } sahara-12.0.0/api-ref/source/v1.1/samples/cluster-templates/cluster-template-create-response.json0000664000175000017500000000574513656752032033012 0ustar zuulzuul00000000000000{ "cluster_template": { "is_public": false, "anti_affinity": [], "name": "cluster-template", "created_at": "2015-09-14T10:38:44", "tenant_id": "808d5032ea0446889097723bfc8e919d", "cluster_configs": {}, "shares": null, "id": "57c92a7c-5c6a-42ea-9c6f-9f40a5aa4b36", "default_image_id": null, "is_default": false, "updated_at": null, "plugin_name": "vanilla", "node_groups": [ { "image_id": null, "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": {}, "auto_security_group": false, "availability_zone": null, "count": 1, "flavor_id": "2", "id": "1751c04e-8f39-467e-a421-480961172d4b", "security_groups": null, "use_autoconfig": true, "volumes_availability_zone": null, "created_at": "2015-09-14T10:38:44", "node_group_template_id": "0bb9f1a4-0c44-4dc5-9452-6741c62ed9ae", "updated_at": null, "volumes_per_node": 0, "is_proxy_gateway": false, "name": "master", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "namenode", "resourcemanager", "oozie", "historyserver" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null }, { "image_id": null, "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": {}, "auto_security_group": false, "availability_zone": null, "count": 3, "flavor_id": "2", "id": "3ee85068-c455-4391-9db2-b54a20b99df3", "security_groups": null, "use_autoconfig": true, "volumes_availability_zone": null, "created_at": "2015-09-14T10:38:44", "node_group_template_id": "846edb31-add5-46e6-a4ee-a4c339f99251", "updated_at": null, "volumes_per_node": 0, "is_proxy_gateway": false, "name": "worker", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "datanode", "nodemanager" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null } ], "neutron_management_network": null, "domain_name": null, "hadoop_version": "2.7.1", "use_autoconfig": true, "description": null, "is_protected": false } } sahara-12.0.0/api-ref/source/v1.1/samples/cluster-templates/cluster-templates-list-response.json0000664000175000017500000001267313656752032032703 0ustar zuulzuul00000000000000{ "cluster_templates": [ { "is_public": false, "anti_affinity": [], "name": "cluster-template", "created_at": "2015-09-14T10:38:44", "tenant_id": "808d5032ea0446889097723bfc8e919d", "cluster_configs": {}, "shares": null, "id": "57c92a7c-5c6a-42ea-9c6f-9f40a5aa4b36", "default_image_id": null, "is_default": false, "updated_at": null, "plugin_name": "vanilla", "node_groups": [ { "image_id": null, "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": {}, "auto_security_group": false, "availability_zone": null, "count": 
1, "flavor_id": "2", "id": "1751c04e-8f39-467e-a421-480961172d4b", "security_groups": null, "use_autoconfig": true, "volumes_availability_zone": null, "created_at": "2015-09-14T10:38:44", "node_group_template_id": "0bb9f1a4-0c44-4dc5-9452-6741c62ed9ae", "updated_at": null, "volumes_per_node": 0, "is_proxy_gateway": false, "name": "master", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "namenode", "resourcemanager", "oozie", "historyserver" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null }, { "image_id": null, "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": {}, "auto_security_group": false, "availability_zone": null, "count": 3, "flavor_id": "2", "id": "3ee85068-c455-4391-9db2-b54a20b99df3", "security_groups": null, "use_autoconfig": true, "volumes_availability_zone": null, "created_at": "2015-09-14T10:38:44", "node_group_template_id": "846edb31-add5-46e6-a4ee-a4c339f99251", "updated_at": null, "volumes_per_node": 0, "is_proxy_gateway": false, "name": "worker", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "datanode", "nodemanager" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null } ], "neutron_management_network": "b1610452-2933-46b0-bf31-660cfa5621bd", "domain_name": null, "hadoop_version": "2.7.1", "use_autoconfig": true, "description": null, "is_protected": false }, { "is_public": true, "anti_affinity": [], "name": "asd", "created_at": "2015-08-18T08:39:39", "tenant_id": "808d5032ea0446889097723bfc8e919d", "cluster_configs": { "general": {} }, "shares": null, "id": "5a9c787c-2078-4f7d-9a66-27759be9051b", "default_image_id": null, "is_default": false, "updated_at": "2015-09-14T08:41:15", "plugin_name": "vanilla", "node_groups": [ { "image_id": null, "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": {}, "auto_security_group": true, "availability_zone": "", "count": 1, "flavor_id": "2", "id": "a65864dd-3f99-4d29-a011-f7711cc23fa0", "security_groups": [], "use_autoconfig": true, "volumes_availability_zone": null, "created_at": "2015-08-18T08:39:39", "node_group_template_id": "42ce49de-1b8f-41d5-8f4a-244ec0826d92", "updated_at": null, "volumes_per_node": 1, "is_proxy_gateway": false, "name": "asd", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "namenode", "jobtracker" ], "volumes_size": 10, "volume_local_to_instance": false, "volume_type": null } ], "neutron_management_network": null, "domain_name": null, "hadoop_version": "2.7.1", "use_autoconfig": true, "description": "", "is_protected": false } ], "markers": { "prev": null, "next": "2c76e0d3-56cd-4d28-bb4f-4808e538c7b9" } } sahara-12.0.0/api-ref/source/v1.1/samples/cluster-templates/cluster-template-show-response.json0000664000175000017500000000600713656752032032517 0ustar zuulzuul00000000000000{ "cluster_template": { "is_public": false, "anti_affinity": [], "name": "cluster-template", "created_at": "2015-09-14T10:38:44", "tenant_id": "808d5032ea0446889097723bfc8e919d", "cluster_configs": {}, "shares": null, "id": "57c92a7c-5c6a-42ea-9c6f-9f40a5aa4b36", "default_image_id": null, "is_default": false, "updated_at": null, "plugin_name": "vanilla", "node_groups": [ { "image_id": null, "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": {}, "auto_security_group": false, "availability_zone": null, "count": 1, "flavor_id": "2", "id": "1751c04e-8f39-467e-a421-480961172d4b", "security_groups": null, "use_autoconfig": true, 
"volumes_availability_zone": null, "created_at": "2015-09-14T10:38:44", "node_group_template_id": "0bb9f1a4-0c44-4dc5-9452-6741c62ed9ae", "updated_at": null, "volumes_per_node": 0, "is_proxy_gateway": false, "name": "master", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "namenode", "resourcemanager", "oozie", "historyserver" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null }, { "image_id": null, "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": {}, "auto_security_group": false, "availability_zone": null, "count": 3, "flavor_id": "2", "id": "3ee85068-c455-4391-9db2-b54a20b99df3", "security_groups": null, "use_autoconfig": true, "volumes_availability_zone": null, "created_at": "2015-09-14T10:38:44", "node_group_template_id": "846edb31-add5-46e6-a4ee-a4c339f99251", "updated_at": null, "volumes_per_node": 0, "is_proxy_gateway": false, "name": "worker", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "datanode", "nodemanager" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null } ], "neutron_management_network": "b1610452-2933-46b0-bf31-660cfa5621bd", "domain_name": null, "hadoop_version": "2.7.1", "use_autoconfig": true, "description": null, "is_protected": false } } sahara-12.0.0/api-ref/source/v1.1/samples/job-executions/0000775000175000017500000000000013656752227023004 5ustar zuulzuul00000000000000sahara-12.0.0/api-ref/source/v1.1/samples/job-executions/list-response.json0000664000175000017500000001443513656752032026507 0ustar zuulzuul00000000000000{ "job_executions": [ { "job_configs": { "configs": { "mapred.reduce.tasks": "1", "mapred.map.tasks": "1" }, "args": [ "arg1", "arg2" ], "params": { "param2": "value2", "param1": "value1" } }, "is_protected": false, "input_id": "3e1bc8e6-8c69-4749-8e52-90d9341d15bc", "job_id": "310b0fc6-e1db-408e-8798-312e7500f3ac", "cluster_id": "811e1134-666f-4c48-bc92-afb5b10c9d8c", "created_at": "2015-09-15T09:49:24", "end_time": "2015-09-15T12:50:46", "output_id": "52146b52-6540-4aac-a024-fee253cf52a9", "is_public": false, "updated_at": "2015-09-15T09:50:46", "return_code": null, "data_source_urls": { "3e1bc8e6-8c69-4749-8e52-90d9341d15bc": "swift://ap-cont/input", "52146b52-6540-4aac-a024-fee253cf52a9": "swift://ap-cont/output" }, "tenant_id": "808d5032ea0446889097723bfc8e919d", "start_time": "2015-09-15T12:49:43", "id": "20da9edb-12ce-4b45-a473-41baeefef997", "oozie_job_id": "0000001-150915094349962-oozie-hado-W", "info": { "user": "hadoop", "actions": [ { "name": ":start:", "trackerUri": "-", "externalStatus": "OK", "status": "OK", "externalId": "-", "transition": "job-node", "data": null, "endTime": "Tue, 15 Sep 2015 09:49:59 GMT", "errorCode": null, "id": "0000001-150915094349962-oozie-hado-W@:start:", "consoleUrl": "-", "errorMessage": null, "toString": "Action name[:start:] status[OK]", "stats": null, "type": ":START:", "retries": 0, "startTime": "Tue, 15 Sep 2015 09:49:59 GMT", "externalChildIDs": null, "cred": "null" }, { "name": "job-node", "trackerUri": "http://172.18.168.119:8032", "externalStatus": "FAILED/KILLED", "status": "ERROR", "externalId": "job_1442310173665_0002", "transition": "fail", "data": null, "endTime": "Tue, 15 Sep 2015 09:50:17 GMT", "errorCode": "JA018", "id": "0000001-150915094349962-oozie-hado-W@job-node", "consoleUrl": "http://ap-cluster-all-0:8088/proxy/application_1442310173665_0002/", "errorMessage": "Main class [org.apache.oozie.action.hadoop.PigMain], exit code [2]", "toString": "Action 
name[job-node] status[ERROR]", "stats": null, "type": "pig", "retries": 0, "startTime": "Tue, 15 Sep 2015 09:49:59 GMT", "externalChildIDs": null, "cred": "null" }, { "name": "fail", "trackerUri": "-", "externalStatus": "OK", "status": "OK", "externalId": "-", "transition": null, "data": null, "endTime": "Tue, 15 Sep 2015 09:50:17 GMT", "errorCode": "E0729", "id": "0000001-150915094349962-oozie-hado-W@fail", "consoleUrl": "-", "errorMessage": "Workflow failed, error message[Main class [org.apache.oozie.action.hadoop.PigMain], exit code [2]]", "toString": "Action name[fail] status[OK]", "stats": null, "type": ":KILL:", "retries": 0, "startTime": "Tue, 15 Sep 2015 09:50:17 GMT", "externalChildIDs": null, "cred": "null" } ], "createdTime": "Tue, 15 Sep 2015 09:49:58 GMT", "status": "KILLED", "group": null, "externalId": null, "acl": null, "run": 0, "appName": "job-wf", "parentId": null, "conf": "\r\n \r\n user.name\r\n hadoop\r\n \r\n \r\n oozie.use.system.libpath\r\n true\r\n \r\n \r\n mapreduce.job.user.name\r\n hadoop\r\n \r\n \r\n nameNode\r\n hdfs://ap-cluster-all-0:9000\r\n \r\n \r\n jobTracker\r\n http://172.18.168.119:8032\r\n \r\n \r\n oozie.wf.application.path\r\n hdfs://ap-cluster-all-0:9000/user/hadoop/pig-job-example/3038025d-9974-4993-a778-26a074cdfb8d/workflow.xml\r\n \r\n", "id": "0000001-150915094349962-oozie-hado-W", "startTime": "Tue, 15 Sep 2015 09:49:59 GMT", "appPath": "hdfs://ap-cluster-all-0:9000/user/hadoop/pig-job-example/3038025d-9974-4993-a778-26a074cdfb8d/workflow.xml", "endTime": "Tue, 15 Sep 2015 09:50:17 GMT", "toString": "Workflow id[0000001-150915094349962-oozie-hado-W] status[KILLED]", "lastModTime": "Tue, 15 Sep 2015 09:50:17 GMT", "consoleUrl": "http://ap-cluster-all-0.novalocal:11000/oozie?job=0000001-150915094349962-oozie-hado-W" } } ] } sahara-12.0.0/api-ref/source/v1.1/samples/job-executions/cancel-response.json0000664000175000017500000001347013656752032026757 0ustar zuulzuul00000000000000{ "job_execution": { "job_configs": { "configs": { "mapred.reduce.tasks": "1", "mapred.map.tasks": "1" }, "args": [ "arg1", "arg2" ], "params": { "param2": "value2", "param1": "value1" } }, "is_protected": false, "input_id": "3e1bc8e6-8c69-4749-8e52-90d9341d15bc", "job_id": "310b0fc6-e1db-408e-8798-312e7500f3ac", "cluster_id": "811e1134-666f-4c48-bc92-afb5b10c9d8c", "created_at": "2015-09-15T09:49:24", "end_time": "2015-09-15T12:50:46", "output_id": "52146b52-6540-4aac-a024-fee253cf52a9", "is_public": false, "updated_at": "2015-09-15T09:50:46", "return_code": null, "data_source_urls": { "3e1bc8e6-8c69-4749-8e52-90d9341d15bc": "swift://ap-cont/input", "52146b52-6540-4aac-a024-fee253cf52a9": "swift://ap-cont/output" }, "tenant_id": "808d5032ea0446889097723bfc8e919d", "start_time": "2015-09-15T12:49:43", "id": "20da9edb-12ce-4b45-a473-41baeefef997", "oozie_job_id": "0000001-150915094349962-oozie-hado-W", "info": { "user": "hadoop", "actions": [ { "name": ":start:", "trackerUri": "-", "externalStatus": "OK", "status": "OK", "externalId": "-", "transition": "job-node", "data": null, "endTime": "Tue, 15 Sep 2015 09:49:59 GMT", "errorCode": null, "id": "0000001-150915094349962-oozie-hado-W@:start:", "consoleUrl": "-", "errorMessage": null, "toString": "Action name[:start:] status[OK]", "stats": null, "type": ":START:", "retries": 0, "startTime": "Tue, 15 Sep 2015 09:49:59 GMT", "externalChildIDs": null, "cred": "null" }, { "name": "job-node", "trackerUri": "http://172.18.168.119:8032", "externalStatus": "FAILED/KILLED", "status": "ERROR", "externalId": 
"job_1442310173665_0002", "transition": "fail", "data": null, "endTime": "Tue, 15 Sep 2015 09:50:17 GMT", "errorCode": "JA018", "id": "0000001-150915094349962-oozie-hado-W@job-node", "consoleUrl": "http://ap-cluster-all-0:8088/proxy/application_1442310173665_0002/", "errorMessage": "Main class [org.apache.oozie.action.hadoop.PigMain], exit code [2]", "toString": "Action name[job-node] status[ERROR]", "stats": null, "type": "pig", "retries": 0, "startTime": "Tue, 15 Sep 2015 09:49:59 GMT", "externalChildIDs": null, "cred": "null" }, { "name": "fail", "trackerUri": "-", "externalStatus": "OK", "status": "OK", "externalId": "-", "transition": null, "data": null, "endTime": "Tue, 15 Sep 2015 09:50:17 GMT", "errorCode": "E0729", "id": "0000001-150915094349962-oozie-hado-W@fail", "consoleUrl": "-", "errorMessage": "Workflow failed, error message[Main class [org.apache.oozie.action.hadoop.PigMain], exit code [2]]", "toString": "Action name[fail] status[OK]", "stats": null, "type": ":KILL:", "retries": 0, "startTime": "Tue, 15 Sep 2015 09:50:17 GMT", "externalChildIDs": null, "cred": "null" } ], "createdTime": "Tue, 15 Sep 2015 09:49:58 GMT", "status": "KILLED", "group": null, "externalId": null, "acl": null, "run": 0, "appName": "job-wf", "parentId": null, "conf": "\r\n \r\n user.name\r\n hadoop\r\n \r\n \r\n oozie.use.system.libpath\r\n true\r\n \r\n \r\n mapreduce.job.user.name\r\n hadoop\r\n \r\n \r\n nameNode\r\n hdfs://ap-cluster-all-0:9000\r\n \r\n \r\n jobTracker\r\n http://172.18.168.119:8032\r\n \r\n \r\n oozie.wf.application.path\r\n hdfs://ap-cluster-all-0:9000/user/hadoop/pig-job-example/3038025d-9974-4993-a778-26a074cdfb8d/workflow.xml\r\n \r\n", "id": "0000001-150915094349962-oozie-hado-W", "startTime": "Tue, 15 Sep 2015 09:49:59 GMT", "appPath": "hdfs://ap-cluster-all-0:9000/user/hadoop/pig-job-example/3038025d-9974-4993-a778-26a074cdfb8d/workflow.xml", "endTime": "Tue, 15 Sep 2015 09:50:17 GMT", "toString": "Workflow id[0000001-150915094349962-oozie-hado-W] status[KILLED]", "lastModTime": "Tue, 15 Sep 2015 09:50:17 GMT", "consoleUrl": "http://ap-cluster-all-0.novalocal:11000/oozie?job=0000001-150915094349962-oozie-hado-W" } } } sahara-12.0.0/api-ref/source/v1.1/samples/job-executions/job-ex-response.json0000664000175000017500000001347013656752032026716 0ustar zuulzuul00000000000000{ "job_execution": { "job_configs": { "configs": { "mapred.reduce.tasks": "1", "mapred.map.tasks": "1" }, "args": [ "arg1", "arg2" ], "params": { "param2": "value2", "param1": "value1" } }, "is_protected": false, "input_id": "3e1bc8e6-8c69-4749-8e52-90d9341d15bc", "job_id": "310b0fc6-e1db-408e-8798-312e7500f3ac", "cluster_id": "811e1134-666f-4c48-bc92-afb5b10c9d8c", "created_at": "2015-09-15T09:49:24", "end_time": "2015-09-15T12:50:46", "output_id": "52146b52-6540-4aac-a024-fee253cf52a9", "is_public": false, "updated_at": "2015-09-15T09:50:46", "return_code": null, "data_source_urls": { "3e1bc8e6-8c69-4749-8e52-90d9341d15bc": "swift://ap-cont/input", "52146b52-6540-4aac-a024-fee253cf52a9": "swift://ap-cont/output" }, "tenant_id": "808d5032ea0446889097723bfc8e919d", "start_time": "2015-09-15T12:49:43", "id": "20da9edb-12ce-4b45-a473-41baeefef997", "oozie_job_id": "0000001-150915094349962-oozie-hado-W", "info": { "user": "hadoop", "actions": [ { "name": ":start:", "trackerUri": "-", "externalStatus": "OK", "status": "OK", "externalId": "-", "transition": "job-node", "data": null, "endTime": "Tue, 15 Sep 2015 09:49:59 GMT", "errorCode": null, "id": "0000001-150915094349962-oozie-hado-W@:start:", 
"consoleUrl": "-", "errorMessage": null, "toString": "Action name[:start:] status[OK]", "stats": null, "type": ":START:", "retries": 0, "startTime": "Tue, 15 Sep 2015 09:49:59 GMT", "externalChildIDs": null, "cred": "null" }, { "name": "job-node", "trackerUri": "http://172.18.168.119:8032", "externalStatus": "FAILED/KILLED", "status": "ERROR", "externalId": "job_1442310173665_0002", "transition": "fail", "data": null, "endTime": "Tue, 15 Sep 2015 09:50:17 GMT", "errorCode": "JA018", "id": "0000001-150915094349962-oozie-hado-W@job-node", "consoleUrl": "http://ap-cluster-all-0:8088/proxy/application_1442310173665_0002/", "errorMessage": "Main class [org.apache.oozie.action.hadoop.PigMain], exit code [2]", "toString": "Action name[job-node] status[ERROR]", "stats": null, "type": "pig", "retries": 0, "startTime": "Tue, 15 Sep 2015 09:49:59 GMT", "externalChildIDs": null, "cred": "null" }, { "name": "fail", "trackerUri": "-", "externalStatus": "OK", "status": "OK", "externalId": "-", "transition": null, "data": null, "endTime": "Tue, 15 Sep 2015 09:50:17 GMT", "errorCode": "E0729", "id": "0000001-150915094349962-oozie-hado-W@fail", "consoleUrl": "-", "errorMessage": "Workflow failed, error message[Main class [org.apache.oozie.action.hadoop.PigMain], exit code [2]]", "toString": "Action name[fail] status[OK]", "stats": null, "type": ":KILL:", "retries": 0, "startTime": "Tue, 15 Sep 2015 09:50:17 GMT", "externalChildIDs": null, "cred": "null" } ], "createdTime": "Tue, 15 Sep 2015 09:49:58 GMT", "status": "KILLED", "group": null, "externalId": null, "acl": null, "run": 0, "appName": "job-wf", "parentId": null, "conf": "\r\n \r\n user.name\r\n hadoop\r\n \r\n \r\n oozie.use.system.libpath\r\n true\r\n \r\n \r\n mapreduce.job.user.name\r\n hadoop\r\n \r\n \r\n nameNode\r\n hdfs://ap-cluster-all-0:9000\r\n \r\n \r\n jobTracker\r\n http://172.18.168.119:8032\r\n \r\n \r\n oozie.wf.application.path\r\n hdfs://ap-cluster-all-0:9000/user/hadoop/pig-job-example/3038025d-9974-4993-a778-26a074cdfb8d/workflow.xml\r\n \r\n", "id": "0000001-150915094349962-oozie-hado-W", "startTime": "Tue, 15 Sep 2015 09:49:59 GMT", "appPath": "hdfs://ap-cluster-all-0:9000/user/hadoop/pig-job-example/3038025d-9974-4993-a778-26a074cdfb8d/workflow.xml", "endTime": "Tue, 15 Sep 2015 09:50:17 GMT", "toString": "Workflow id[0000001-150915094349962-oozie-hado-W] status[KILLED]", "lastModTime": "Tue, 15 Sep 2015 09:50:17 GMT", "consoleUrl": "http://ap-cluster-all-0.novalocal:11000/oozie?job=0000001-150915094349962-oozie-hado-W" } } } sahara-12.0.0/api-ref/source/v1.1/samples/job-executions/job-ex-update-request.json0000664000175000017500000000003213656752032030016 0ustar zuulzuul00000000000000{ "is_public": true } sahara-12.0.0/api-ref/source/v1.1/samples/job-executions/job-ex-update-response.json0000664000175000017500000001346713656752032030204 0ustar zuulzuul00000000000000{ "job_execution": { "job_configs": { "configs": { "mapred.reduce.tasks": "1", "mapred.map.tasks": "1" }, "args": [ "arg1", "arg2" ], "params": { "param2": "value2", "param1": "value1" } }, "is_protected": false, "input_id": "3e1bc8e6-8c69-4749-8e52-90d9341d15bc", "job_id": "310b0fc6-e1db-408e-8798-312e7500f3ac", "cluster_id": "811e1134-666f-4c48-bc92-afb5b10c9d8c", "created_at": "2015-09-15T09:49:24", "end_time": "2015-09-15T12:50:46", "output_id": "52146b52-6540-4aac-a024-fee253cf52a9", "is_public": true, "updated_at": "2015-09-15T09:50:46", "return_code": null, "data_source_urls": { "3e1bc8e6-8c69-4749-8e52-90d9341d15bc": "swift://ap-cont/input", 
"52146b52-6540-4aac-a024-fee253cf52a9": "swift://ap-cont/output" }, "tenant_id": "808d5032ea0446889097723bfc8e919d", "start_time": "2015-09-15T12:49:43", "id": "20da9edb-12ce-4b45-a473-41baeefef997", "oozie_job_id": "0000001-150915094349962-oozie-hado-W", "info": { "user": "hadoop", "actions": [ { "name": ":start:", "trackerUri": "-", "externalStatus": "OK", "status": "OK", "externalId": "-", "transition": "job-node", "data": null, "endTime": "Tue, 15 Sep 2015 09:49:59 GMT", "errorCode": null, "id": "0000001-150915094349962-oozie-hado-W@:start:", "consoleUrl": "-", "errorMessage": null, "toString": "Action name[:start:] status[OK]", "stats": null, "type": ":START:", "retries": 0, "startTime": "Tue, 15 Sep 2015 09:49:59 GMT", "externalChildIDs": null, "cred": "null" }, { "name": "job-node", "trackerUri": "http://172.18.168.119:8032", "externalStatus": "FAILED/KILLED", "status": "ERROR", "externalId": "job_1442310173665_0002", "transition": "fail", "data": null, "endTime": "Tue, 15 Sep 2015 09:50:17 GMT", "errorCode": "JA018", "id": "0000001-150915094349962-oozie-hado-W@job-node", "consoleUrl": "http://ap-cluster-all-0:8088/proxy/application_1442310173665_0002/", "errorMessage": "Main class [org.apache.oozie.action.hadoop.PigMain], exit code [2]", "toString": "Action name[job-node] status[ERROR]", "stats": null, "type": "pig", "retries": 0, "startTime": "Tue, 15 Sep 2015 09:49:59 GMT", "externalChildIDs": null, "cred": "null" }, { "name": "fail", "trackerUri": "-", "externalStatus": "OK", "status": "OK", "externalId": "-", "transition": null, "data": null, "endTime": "Tue, 15 Sep 2015 09:50:17 GMT", "errorCode": "E0729", "id": "0000001-150915094349962-oozie-hado-W@fail", "consoleUrl": "-", "errorMessage": "Workflow failed, error message[Main class [org.apache.oozie.action.hadoop.PigMain], exit code [2]]", "toString": "Action name[fail] status[OK]", "stats": null, "type": ":KILL:", "retries": 0, "startTime": "Tue, 15 Sep 2015 09:50:17 GMT", "externalChildIDs": null, "cred": "null" } ], "createdTime": "Tue, 15 Sep 2015 09:49:58 GMT", "status": "KILLED", "group": null, "externalId": null, "acl": null, "run": 0, "appName": "job-wf", "parentId": null, "conf": "\r\n \r\n user.name\r\n hadoop\r\n \r\n \r\n oozie.use.system.libpath\r\n true\r\n \r\n \r\n mapreduce.job.user.name\r\n hadoop\r\n \r\n \r\n nameNode\r\n hdfs://ap-cluster-all-0:9000\r\n \r\n \r\n jobTracker\r\n http://172.18.168.119:8032\r\n \r\n \r\n oozie.wf.application.path\r\n hdfs://ap-cluster-all-0:9000/user/hadoop/pig-job-example/3038025d-9974-4993-a778-26a074cdfb8d/workflow.xml\r\n \r\n", "id": "0000001-150915094349962-oozie-hado-W", "startTime": "Tue, 15 Sep 2015 09:49:59 GMT", "appPath": "hdfs://ap-cluster-all-0:9000/user/hadoop/pig-job-example/3038025d-9974-4993-a778-26a074cdfb8d/workflow.xml", "endTime": "Tue, 15 Sep 2015 09:50:17 GMT", "toString": "Workflow id[0000001-150915094349962-oozie-hado-W] status[KILLED]", "lastModTime": "Tue, 15 Sep 2015 09:50:17 GMT", "consoleUrl": "http://ap-cluster-all-0.novalocal:11000/oozie?job=0000001-150915094349962-oozie-hado-W" } } } sahara-12.0.0/api-ref/source/v1.1/samples/job-binary-internals/0000775000175000017500000000000013656752227024077 5ustar zuulzuul00000000000000sahara-12.0.0/api-ref/source/v1.1/samples/job-binary-internals/show-response.json0000664000175000017500000000052713656752032027604 0ustar zuulzuul00000000000000{ "job_binary_internal": { "is_public": false, "name": "script.pig", "tenant_id": "11587919cc534bcbb1027a161c82cf58", "created_at": "2013-10-15 13:17:35.994466", 
"updated_at": null, "datasize": 160, "id": "4833dc4b-8682-4d5b-8a9f-2036b47a0996", "is_protected": false } } sahara-12.0.0/api-ref/source/v1.1/samples/job-binary-internals/list-response.json0000664000175000017500000000134413656752032027575 0ustar zuulzuul00000000000000{ "binaries": [ { "is_public": false, "name": "example.pig", "tenant_id": "11587919cc534bcbb1027a161c82cf58", "created_at": "2013-10-15 12:36:59.329034", "updated_at": null, "datasize": 161, "id": "d2498cbf-4589-484a-a814-81436c18beb3", "is_protected": false }, { "is_public": false, "name": "udf.jar", "tenant_id": "11587919cc534bcbb1027a161c82cf58", "created_at": "2013-10-15 12:43:52.008620", "updated_at": null, "datasize": 3745, "id": "22f1d87a-23c8-483e-a0dd-cb4a16dde5f9", "is_protected": false } ] } sahara-12.0.0/api-ref/source/v1.1/samples/job-binary-internals/show-data-response0000664000175000017500000000023713656752032027541 0ustar zuulzuul00000000000000A = load '$INPUT' using PigStorage(':') as (fruit: chararray); B = foreach A generate com.hadoopbook.pig.Trim(fruit); store B into '$OUTPUT' USING PigStorage()sahara-12.0.0/api-ref/source/v1.1/samples/job-binary-internals/create-response.json0000664000175000017500000000052713656752032030067 0ustar zuulzuul00000000000000{ "job_binary_internal": { "is_public": false, "name": "script.pig", "tenant_id": "11587919cc534bcbb1027a161c82cf58", "created_at": "2013-10-15 13:17:35.994466", "updated_at": null, "datasize": 160, "id": "4833dc4b-8682-4d5b-8a9f-2036b47a0996", "is_protected": false } } sahara-12.0.0/api-ref/source/v1.1/samples/job-binary-internals/update-response.json0000664000175000017500000000055613656752032030110 0ustar zuulzuul00000000000000{ "job_binary_internal": { "is_public": true, "name": "public-jbi", "tenant_id": "11587919cc534bcbb1027a161c82cf58", "created_at": "2015-09-15 13:21:54.485912", "updated_at": "2015-09-15 13:24:24.590124", "datasize": 200, "id": "2433dc4b-8682-4d5b-8a9f-2036d47a0996", "is_protected": false } } sahara-12.0.0/api-ref/source/v1.1/samples/job-binary-internals/update-request.json0000664000175000017500000000006413656752032027734 0ustar zuulzuul00000000000000{ "name": "public-jbi", "is_public": true } sahara-12.0.0/api-ref/source/v1.1/samples/clusters/0000775000175000017500000000000013656752227021712 5ustar zuulzuul00000000000000sahara-12.0.0/api-ref/source/v1.1/samples/clusters/cluster-create-response.json0000664000175000017500000001307113656752032027357 0ustar zuulzuul00000000000000{ "cluster": { "is_public": false, "tenant_id": "808d5032ea0446889097723bfc8e919d", "shares": null, "domain_name": null, "status_description": "", "plugin_name": "vanilla", "neutron_management_network": "b1610452-2933-46b0-bf31-660cfa5621bd", "info": {}, "user_keypair_id": "test", "management_public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCfe9ARO+t9CybtuC1+cusDTeQL7wos1+U2dKPlCUJvNUn0PcunGefqWI4MUZPY9yGmvRqfINy7/xRQCzL0AwgqzwcCXamcK8JCC80uH7j8Vxa4kJheG1jxMoz/FpDSdRnzNZ+m7H5rjOwAQANhL7KatGLyCPQg9fqOoaIyCZE/A3fztm/XjJMpWnuANpUZubZtISEfu4UZKVk/DPSlBrbTZkTOvEog1LwZCZoTt0rq6a7PJFzJJkq0YecRudu/f3tpXbNe/F84sd9PhOSqcrRbm72WzglyEE8PuS1kuWpEz8G+Y5/0tQxnoh6khj9mgflrdCFuvpdutFLH4eN5MFDh Generated-by-Sahara\n", "id": "e172d86c-906d-418e-a29c-6189f53bfa42", "cluster_template_id": "57c92a7c-5c6a-42ea-9c6f-9f40a5aa4b36", "node_groups": [ { "image_id": null, "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": { "YARN": { "yarn.nodemanager.vmem-check-enabled": "false", "yarn.scheduler.maximum-allocation-mb": 2048, 
"yarn.scheduler.minimum-allocation-mb": 256, "yarn.nodemanager.resource.memory-mb": 2048 }, "MapReduce": { "yarn.app.mapreduce.am.resource.mb": 256, "mapreduce.task.io.sort.mb": 102, "mapreduce.reduce.java.opts": "-Xmx409m", "mapreduce.reduce.memory.mb": 512, "mapreduce.map.memory.mb": 256, "yarn.app.mapreduce.am.command-opts": "-Xmx204m", "mapreduce.map.java.opts": "-Xmx204m" } }, "auto_security_group": false, "availability_zone": null, "count": 1, "flavor_id": "2", "id": "0fe07f2a-0275-4bc0-93b2-c3c1e48e2815", "security_groups": null, "use_autoconfig": true, "instances": [], "volumes_availability_zone": null, "created_at": "2015-09-14T10:57:11", "node_group_template_id": "0bb9f1a4-0c44-4dc5-9452-6741c62ed9ae", "updated_at": "2015-09-14T10:57:12", "volumes_per_node": 0, "is_proxy_gateway": false, "name": "master", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "namenode", "resourcemanager", "oozie", "historyserver" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null }, { "image_id": null, "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": { "YARN": { "yarn.nodemanager.vmem-check-enabled": "false", "yarn.scheduler.maximum-allocation-mb": 2048, "yarn.scheduler.minimum-allocation-mb": 256, "yarn.nodemanager.resource.memory-mb": 2048 }, "MapReduce": { "yarn.app.mapreduce.am.resource.mb": 256, "mapreduce.task.io.sort.mb": 102, "mapreduce.reduce.java.opts": "-Xmx409m", "mapreduce.reduce.memory.mb": 512, "mapreduce.map.memory.mb": 256, "yarn.app.mapreduce.am.command-opts": "-Xmx204m", "mapreduce.map.java.opts": "-Xmx204m" } }, "auto_security_group": false, "availability_zone": null, "count": 3, "flavor_id": "2", "id": "c7a3bea4-c898-446b-8c67-6d378d4c06c4", "security_groups": null, "use_autoconfig": true, "instances": [], "volumes_availability_zone": null, "created_at": "2015-09-14T10:57:11", "node_group_template_id": "846edb31-add5-46e6-a4ee-a4c339f99251", "updated_at": "2015-09-14T10:57:12", "volumes_per_node": 0, "is_proxy_gateway": false, "name": "worker", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "datanode", "nodemanager" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null } ], "provision_progress": [], "hadoop_version": "2.7.1", "use_autoconfig": true, "trust_id": null, "description": null, "created_at": "2015-09-14T10:57:11", "is_protected": false, "updated_at": "2015-09-14T10:57:12", "is_transient": false, "cluster_configs": { "HDFS": { "dfs.replication": 3 } }, "anti_affinity": [], "name": "vanilla-cluster", "default_image_id": "4118a476-dfdc-4b0e-8d5c-463cba08e9ae", "status": "Validating" } } sahara-12.0.0/api-ref/source/v1.1/samples/clusters/cluster-scale-response.json0000664000175000017500000004110313656752032027200 0ustar zuulzuul00000000000000{ "cluster": { "info": { "YARN": { "Web UI": "http://172.18.168.115:8088", "ResourceManager": "http://172.18.168.115:8032" }, "HDFS": { "Web UI": "http://172.18.168.115:50070", "NameNode": "hdfs://vanilla-cluster-master-0:9000" }, "MapReduce JobHistory Server": { "Web UI": "http://172.18.168.115:19888" }, "JobFlow": { "Oozie": "http://172.18.168.115:11000" } }, "plugin_name": "vanilla", "hadoop_version": "2.7.1", "updated_at": "2015-09-14T11:01:15", "name": "vanilla-cluster", "id": "e172d86c-906d-418e-a29c-6189f53bfa42", "management_public_key": "ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABAQCfe9ARO+t9CybtuC1+cusDTeQL7wos1+U2dKPlCUJvNUn0PcunGefqWI4MUZPY9yGmvRqfINy7/xRQCzL0AwgqzwcCXamcK8JCC80uH7j8Vxa4kJheG1jxMoz/FpDSdRnzNZ+m7H5rjOwAQANhL7KatGLyCPQg9fqOoaIyCZE/A3fztm/XjJMpWnuANpUZubZtISEfu4UZKVk/DPSlBrbTZkTOvEog1LwZCZoTt0rq6a7PJFzJJkq0YecRudu/f3tpXbNe/F84sd9PhOSqcrRbm72WzglyEE8PuS1kuWpEz8G+Y5/0tQxnoh6khj9mgflrdCFuvpdutFLH4eN5MFDh Generated-by-Sahara\n", "trust_id": null, "status_description": "", "default_image_id": "4118a476-dfdc-4b0e-8d5c-463cba08e9ae", "cluster_template_id": "57c92a7c-5c6a-42ea-9c6f-9f40a5aa4b36", "is_protected": false, "is_transient": false, "provision_progress": [ { "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42", "total": 1, "successful": true, "step_name": "Create Heat stack", "step_type": "Engine: create cluster", "updated_at": "2015-09-14T10:57:38", "tenant_id": "808d5032ea0446889097723bfc8e919d", "created_at": "2015-09-14T10:57:18", "id": "0a6d95f9-30f4-4434-823a-a38a7999a5af" }, { "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42", "total": 4, "successful": true, "step_name": "Configure instances", "step_type": "Engine: create cluster", "updated_at": "2015-09-14T10:58:22", "tenant_id": "808d5032ea0446889097723bfc8e919d", "created_at": "2015-09-14T10:58:16", "id": "29f2b587-c34c-4871-9ed9-9235b411cd9a" }, { "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42", "total": 1, "successful": true, "step_name": "Start the following process(es): Oozie", "step_type": "Plugin: start cluster", "updated_at": "2015-09-14T11:01:15", "tenant_id": "808d5032ea0446889097723bfc8e919d", "created_at": "2015-09-14T11:00:27", "id": "36f1efde-90f9-41c1-b409-aa1cf9623e3e" }, { "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42", "total": 4, "successful": true, "step_name": "Configure instances", "step_type": "Plugin: configure cluster", "updated_at": "2015-09-14T10:59:21", "tenant_id": "808d5032ea0446889097723bfc8e919d", "created_at": "2015-09-14T10:58:22", "id": "602bcc27-3a2d-42c8-8aca-ebc475319c72" }, { "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42", "total": 1, "successful": true, "step_name": "Configure topology data", "step_type": "Plugin: configure cluster", "updated_at": "2015-09-14T10:59:37", "tenant_id": "808d5032ea0446889097723bfc8e919d", "created_at": "2015-09-14T10:59:21", "id": "7e291df1-2d32-410d-ae89-33ab6f83cf17" }, { "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42", "total": 3, "successful": true, "step_name": "Start the following process(es): DataNodes, NodeManagers", "step_type": "Plugin: start cluster", "updated_at": "2015-09-14T11:00:11", "tenant_id": "808d5032ea0446889097723bfc8e919d", "created_at": "2015-09-14T11:00:01", "id": "8ab7933c-ad61-4a4f-88db-23ce78ee10f6" }, { "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42", "total": 1, "successful": true, "step_name": "Await DataNodes start up", "step_type": "Plugin: start cluster", "updated_at": "2015-09-14T11:00:21", "tenant_id": "808d5032ea0446889097723bfc8e919d", "created_at": "2015-09-14T11:00:11", "id": "9c8dc016-8c5b-4e80-9857-80c41f6bd971" }, { "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42", "total": 1, "successful": true, "step_name": "Start the following process(es): HistoryServer", "step_type": "Plugin: start cluster", "updated_at": "2015-09-14T11:00:27", "tenant_id": "808d5032ea0446889097723bfc8e919d", "created_at": "2015-09-14T11:00:21", "id": "c6327532-222b-416c-858f-73dbb32b8e97" }, { "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42", "total": 4, "successful": true, "step_name": "Wait for instance accessibility", "step_type": 
"Engine: create cluster", "updated_at": "2015-09-14T10:58:14", "tenant_id": "808d5032ea0446889097723bfc8e919d", "created_at": "2015-09-14T10:57:41", "id": "d3eca726-8b44-473a-ac29-fba45a893725" }, { "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42", "total": 0, "successful": true, "step_name": "Mount volumes to instances", "step_type": "Engine: create cluster", "updated_at": "2015-09-14T10:58:15", "tenant_id": "808d5032ea0446889097723bfc8e919d", "created_at": "2015-09-14T10:58:14", "id": "d7a875ff-64bf-41aa-882d-b5061c8ee152" }, { "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42", "total": 1, "successful": true, "step_name": "Start the following process(es): ResourceManager", "step_type": "Plugin: start cluster", "updated_at": "2015-09-14T11:00:00", "tenant_id": "808d5032ea0446889097723bfc8e919d", "created_at": "2015-09-14T10:59:55", "id": "ded7d227-10b8-4cb0-ab6c-25da1462bb7a" }, { "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42", "total": 1, "successful": true, "step_name": "Start the following process(es): NameNode", "step_type": "Plugin: start cluster", "updated_at": "2015-09-14T10:59:54", "tenant_id": "808d5032ea0446889097723bfc8e919d", "created_at": "2015-09-14T10:59:38", "id": "e1701ff5-930a-4212-945a-43515dfe24d1" }, { "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42", "total": 4, "successful": true, "step_name": "Assign IPs", "step_type": "Engine: create cluster", "updated_at": "2015-09-14T10:57:41", "tenant_id": "808d5032ea0446889097723bfc8e919d", "created_at": "2015-09-14T10:57:38", "id": "eaf0ab1b-bf8f-48f0-8f2c-fa4f82f539b9" } ], "status": "Active", "description": null, "use_autoconfig": true, "shares": null, "domain_name": null, "neutron_management_network": "b1610452-2933-46b0-bf31-660cfa5621bd", "is_public": false, "tenant_id": "808d5032ea0446889097723bfc8e919d", "node_groups": [ { "volumes_per_node": 0, "volume_type": null, "updated_at": "2015-09-14T10:57:37", "name": "b-worker", "id": "b7a6dea4-c898-446b-8c67-4f378d4c06c4", "node_group_template_id": "bc270ffe-a086-4eeb-9baa-2f5a73504622", "node_configs": { "YARN": { "yarn.nodemanager.vmem-check-enabled": "false", "yarn.scheduler.minimum-allocation-mb": 256, "yarn.nodemanager.resource.memory-mb": 2048, "yarn.scheduler.maximum-allocation-mb": 2048 }, "MapReduce": { "mapreduce.map.memory.mb": 256, "yarn.app.mapreduce.am.command-opts": "-Xmx204m", "mapreduce.map.java.opts": "-Xmx204m", "mapreduce.reduce.memory.mb": 512, "mapreduce.task.io.sort.mb": 102, "mapreduce.reduce.java.opts": "-Xmx409m", "yarn.app.mapreduce.am.resource.mb": 256 } }, "auto_security_group": false, "volumes_availability_zone": null, "use_autoconfig": true, "security_groups": null, "shares": null, "node_processes": [ "datanode", "nodemanager" ], "availability_zone": null, "flavor_id": "2", "image_id": null, "volume_local_to_instance": false, "count": 1, "volumes_size": 0, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "volume_mount_prefix": "/volumes/disk", "instances": [], "is_proxy_gateway": false, "created_at": "2015-09-14T10:57:11" }, { "volumes_per_node": 0, "volume_type": null, "updated_at": "2015-09-14T10:57:36", "name": "master", "id": "0fe07f2a-0275-4bc0-93b2-c3c1e48e2815", "node_group_template_id": "0bb9f1a4-0c44-4dc5-9452-6741c62ed9ae", "node_configs": { "YARN": { "yarn.nodemanager.vmem-check-enabled": "false", "yarn.scheduler.minimum-allocation-mb": 256, "yarn.nodemanager.resource.memory-mb": 2048, "yarn.scheduler.maximum-allocation-mb": 2048 }, "MapReduce": { "mapreduce.map.memory.mb": 256, 
"yarn.app.mapreduce.am.command-opts": "-Xmx204m", "mapreduce.map.java.opts": "-Xmx204m", "mapreduce.reduce.memory.mb": 512, "mapreduce.task.io.sort.mb": 102, "mapreduce.reduce.java.opts": "-Xmx409m", "yarn.app.mapreduce.am.resource.mb": 256 } }, "auto_security_group": false, "volumes_availability_zone": null, "use_autoconfig": true, "security_groups": null, "shares": null, "node_processes": [ "namenode", "resourcemanager", "oozie", "historyserver" ], "availability_zone": null, "flavor_id": "2", "image_id": null, "volume_local_to_instance": false, "count": 1, "volumes_size": 0, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "volume_mount_prefix": "/volumes/disk", "instances": [ { "instance_id": "b9f16a07-88fc-423e-83a3-489598fe6737", "internal_ip": "10.50.0.60", "instance_name": "vanilla-cluster-master-0", "updated_at": "2015-09-14T10:57:39", "management_ip": "172.18.168.115", "created_at": "2015-09-14T10:57:36", "id": "4867d92e-cc7b-4cde-9a1a-149e91caa491" } ], "is_proxy_gateway": false, "created_at": "2015-09-14T10:57:11" }, { "volumes_per_node": 0, "volume_type": null, "updated_at": "2015-09-14T10:57:37", "name": "worker", "id": "c7a3bea4-c898-446b-8c67-6d378d4c06c4", "node_group_template_id": "846edb31-add5-46e6-a4ee-a4c339f99251", "node_configs": { "YARN": { "yarn.nodemanager.vmem-check-enabled": "false", "yarn.scheduler.minimum-allocation-mb": 256, "yarn.nodemanager.resource.memory-mb": 2048, "yarn.scheduler.maximum-allocation-mb": 2048 }, "MapReduce": { "mapreduce.map.memory.mb": 256, "yarn.app.mapreduce.am.command-opts": "-Xmx204m", "mapreduce.map.java.opts": "-Xmx204m", "mapreduce.reduce.memory.mb": 512, "mapreduce.task.io.sort.mb": 102, "mapreduce.reduce.java.opts": "-Xmx409m", "yarn.app.mapreduce.am.resource.mb": 256 } }, "auto_security_group": false, "volumes_availability_zone": null, "use_autoconfig": true, "security_groups": null, "shares": null, "node_processes": [ "datanode", "nodemanager" ], "availability_zone": null, "flavor_id": "2", "image_id": null, "volume_local_to_instance": false, "count": 4, "volumes_size": 0, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "volume_mount_prefix": "/volumes/disk", "instances": [ { "instance_id": "0cf1ee81-aa72-48da-be2c-65bc2fa51f8f", "internal_ip": "10.50.0.63", "instance_name": "vanilla-cluster-worker-0", "updated_at": "2015-09-14T10:57:39", "management_ip": "172.18.168.118", "created_at": "2015-09-14T10:57:37", "id": "f3633b30-c1e4-4144-930b-ab5b780b87be" }, { "instance_id": "4a937391-b594-4ad0-9a53-00a99a691383", "internal_ip": "10.50.0.62", "instance_name": "vanilla-cluster-worker-1", "updated_at": "2015-09-14T10:57:40", "management_ip": "172.18.168.117", "created_at": "2015-09-14T10:57:37", "id": "0d66fd93-f277-4a94-b46a-f5866aa0c38f" }, { "instance_id": "839b1d56-6d0d-4aa4-9d05-30e029c276f8", "internal_ip": "10.50.0.61", "instance_name": "vanilla-cluster-worker-2", "updated_at": "2015-09-14T10:57:40", "management_ip": "172.18.168.116", "created_at": "2015-09-14T10:57:37", "id": "0982cefd-5c58-436e-8f1e-c1d0830f18a7" } ], "is_proxy_gateway": false, "created_at": "2015-09-14T10:57:11" } ], "cluster_configs": { "HDFS": { "dfs.replication": 3 } }, "user_keypair_id": "apavlov", "anti_affinity": [], "created_at": "2015-09-14T10:57:11" } } sahara-12.0.0/api-ref/source/v1.1/samples/clusters/multiple-clusters-create-request.json0000664000175000017500000000056213656752032031226 0ustar zuulzuul00000000000000{ "plugin_name": "vanilla", "hadoop_version": "2.6.0", "cluster_template_id": 
"9951f86d-57ba-43d6-9cb0-14ed2ec7a6cf", "default_image_id": "bc3c3d3c-2684-4bf8-a9fa-388fb71288a9", "user_keypair_id": "test", "name": "def-cluster", "count": 2, "cluster_configs": {}, "neutron_management_network": "7e31648b-4b2e-4f32-9b0a-113581c27076" } sahara-12.0.0/api-ref/source/v1.1/samples/clusters/cluster-update-request.json0000664000175000017500000000010013656752032027215 0ustar zuulzuul00000000000000{ "name": "public-vanilla-cluster", "is_public": true } sahara-12.0.0/api-ref/source/v1.1/samples/clusters/cluster-scale-request.json0000664000175000017500000000045013656752032027032 0ustar zuulzuul00000000000000{ "add_node_groups": [ { "count": 1, "name": "b-worker", "node_group_template_id": "bc270ffe-a086-4eeb-9baa-2f5a73504622" } ], "resize_node_groups": [ { "count": 4, "name": "worker" } ] } sahara-12.0.0/api-ref/source/v1.1/samples/clusters/cluster-show-response.json0000664000175000017500000001307113656752032027074 0ustar zuulzuul00000000000000{ "cluster": { "is_public": false, "tenant_id": "808d5032ea0446889097723bfc8e919d", "shares": null, "domain_name": null, "status_description": "", "plugin_name": "vanilla", "neutron_management_network": "b1610452-2933-46b0-bf31-660cfa5621bd", "info": {}, "user_keypair_id": "test", "management_public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCfe9ARO+t9CybtuC1+cusDTeQL7wos1+U2dKPlCUJvNUn0PcunGefqWI4MUZPY9yGmvRqfINy7/xRQCzL0AwgqzwcCXamcK8JCC80uH7j8Vxa4kJheG1jxMoz/FpDSdRnzNZ+m7H5rjOwAQANhL7KatGLyCPQg9fqOoaIyCZE/A3fztm/XjJMpWnuANpUZubZtISEfu4UZKVk/DPSlBrbTZkTOvEog1LwZCZoTt0rq6a7PJFzJJkq0YecRudu/f3tpXbNe/F84sd9PhOSqcrRbm72WzglyEE8PuS1kuWpEz8G+Y5/0tQxnoh6khj9mgflrdCFuvpdutFLH4eN5MFDh Generated-by-Sahara\n", "id": "e172d86c-906d-418e-a29c-6189f53bfa42", "cluster_template_id": "57c92a7c-5c6a-42ea-9c6f-9f40a5aa4b36", "node_groups": [ { "image_id": null, "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": { "YARN": { "yarn.nodemanager.vmem-check-enabled": "false", "yarn.scheduler.maximum-allocation-mb": 2048, "yarn.scheduler.minimum-allocation-mb": 256, "yarn.nodemanager.resource.memory-mb": 2048 }, "MapReduce": { "yarn.app.mapreduce.am.resource.mb": 256, "mapreduce.task.io.sort.mb": 102, "mapreduce.reduce.java.opts": "-Xmx409m", "mapreduce.reduce.memory.mb": 512, "mapreduce.map.memory.mb": 256, "yarn.app.mapreduce.am.command-opts": "-Xmx204m", "mapreduce.map.java.opts": "-Xmx204m" } }, "auto_security_group": false, "availability_zone": null, "count": 1, "flavor_id": "2", "id": "0fe07f2a-0275-4bc0-93b2-c3c1e48e2815", "security_groups": null, "use_autoconfig": true, "instances": [], "volumes_availability_zone": null, "created_at": "2015-09-14T10:57:11", "node_group_template_id": "0bb9f1a4-0c44-4dc5-9452-6741c62ed9ae", "updated_at": "2015-09-14T10:57:12", "volumes_per_node": 0, "is_proxy_gateway": false, "name": "master", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "namenode", "resourcemanager", "oozie", "historyserver" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null }, { "image_id": null, "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": { "YARN": { "yarn.nodemanager.vmem-check-enabled": "false", "yarn.scheduler.maximum-allocation-mb": 2048, "yarn.scheduler.minimum-allocation-mb": 256, "yarn.nodemanager.resource.memory-mb": 2048 }, "MapReduce": { "yarn.app.mapreduce.am.resource.mb": 256, "mapreduce.task.io.sort.mb": 102, "mapreduce.reduce.java.opts": "-Xmx409m", "mapreduce.reduce.memory.mb": 512, "mapreduce.map.memory.mb": 
256, "yarn.app.mapreduce.am.command-opts": "-Xmx204m", "mapreduce.map.java.opts": "-Xmx204m" } }, "auto_security_group": false, "availability_zone": null, "count": 3, "flavor_id": "2", "id": "c7a3bea4-c898-446b-8c67-6d378d4c06c4", "security_groups": null, "use_autoconfig": true, "instances": [], "volumes_availability_zone": null, "created_at": "2015-09-14T10:57:11", "node_group_template_id": "846edb31-add5-46e6-a4ee-a4c339f99251", "updated_at": "2015-09-14T10:57:12", "volumes_per_node": 0, "is_proxy_gateway": false, "name": "worker", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "datanode", "nodemanager" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null } ], "provision_progress": [], "hadoop_version": "2.7.1", "use_autoconfig": true, "trust_id": null, "description": null, "created_at": "2015-09-14T10:57:11", "is_protected": false, "updated_at": "2015-09-14T10:57:12", "is_transient": false, "cluster_configs": { "HDFS": { "dfs.replication": 3 } }, "anti_affinity": [], "name": "vanilla-cluster", "default_image_id": "4118a476-dfdc-4b0e-8d5c-463cba08e9ae", "status": "Validating" } } sahara-12.0.0/api-ref/source/v1.1/samples/clusters/clusters-list-response.json0000664000175000017500000003754713656752032027270 0ustar zuulzuul00000000000000{ "clusters": [ { "is_public": false, "tenant_id": "808d5032ea0446889097723bfc8e919d", "shares": null, "domain_name": null, "status_description": "", "plugin_name": "vanilla", "neutron_management_network": "b1610452-2933-46b0-bf31-660cfa5621bd", "info": { "YARN": { "Web UI": "http://172.18.168.115:8088", "ResourceManager": "http://172.18.168.115:8032" }, "HDFS": { "Web UI": "http://172.18.168.115:50070", "NameNode": "hdfs://vanilla-cluster-master-0:9000" }, "JobFlow": { "Oozie": "http://172.18.168.115:11000" }, "MapReduce JobHistory Server": { "Web UI": "http://172.18.168.115:19888" } }, "user_keypair_id": "apavlov", "management_public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCfe9ARO+t9CybtuC1+cusDTeQL7wos1+U2dKPlCUJvNUn0PcunGefqWI4MUZPY9yGmvRqfINy7/xRQCzL0AwgqzwcCXamcK8JCC80uH7j8Vxa4kJheG1jxMoz/FpDSdRnzNZ+m7H5rjOwAQANhL7KatGLyCPQg9fqOoaIyCZE/A3fztm/XjJMpWnuANpUZubZtISEfu4UZKVk/DPSlBrbTZkTOvEog1LwZCZoTt0rq6a7PJFzJJkq0YecRudu/f3tpXbNe/F84sd9PhOSqcrRbm72WzglyEE8PuS1kuWpEz8G+Y5/0tQxnoh6khj9mgflrdCFuvpdutFLH4eN5MFDh Generated-by-Sahara\n", "id": "e172d86c-906d-418e-a29c-6189f53bfa42", "cluster_template_id": "57c92a7c-5c6a-42ea-9c6f-9f40a5aa4b36", "node_groups": [ { "image_id": null, "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": { "YARN": { "yarn.nodemanager.vmem-check-enabled": "false", "yarn.scheduler.maximum-allocation-mb": 2048, "yarn.scheduler.minimum-allocation-mb": 256, "yarn.nodemanager.resource.memory-mb": 2048 }, "MapReduce": { "yarn.app.mapreduce.am.resource.mb": 256, "mapreduce.task.io.sort.mb": 102, "mapreduce.reduce.java.opts": "-Xmx409m", "mapreduce.reduce.memory.mb": 512, "mapreduce.map.memory.mb": 256, "yarn.app.mapreduce.am.command-opts": "-Xmx204m", "mapreduce.map.java.opts": "-Xmx204m" } }, "auto_security_group": false, "availability_zone": null, "count": 1, "flavor_id": "2", "id": "0fe07f2a-0275-4bc0-93b2-c3c1e48e2815", "security_groups": null, "use_autoconfig": true, "instances": [ { "created_at": "2015-09-14T10:57:36", "id": "4867d92e-cc7b-4cde-9a1a-149e91caa491", "management_ip": "172.18.168.115", "updated_at": "2015-09-14T10:57:39", "instance_id": "b9f16a07-88fc-423e-83a3-489598fe6737", "internal_ip": "10.50.0.60", "instance_name": 
"vanilla-cluster-master-0" } ], "volumes_availability_zone": null, "created_at": "2015-09-14T10:57:11", "node_group_template_id": "0bb9f1a4-0c44-4dc5-9452-6741c62ed9ae", "updated_at": "2015-09-14T10:57:36", "volumes_per_node": 0, "is_proxy_gateway": false, "name": "master", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "namenode", "resourcemanager", "oozie", "historyserver" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null }, { "image_id": null, "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": { "YARN": { "yarn.nodemanager.vmem-check-enabled": "false", "yarn.scheduler.maximum-allocation-mb": 2048, "yarn.scheduler.minimum-allocation-mb": 256, "yarn.nodemanager.resource.memory-mb": 2048 }, "MapReduce": { "yarn.app.mapreduce.am.resource.mb": 256, "mapreduce.task.io.sort.mb": 102, "mapreduce.reduce.java.opts": "-Xmx409m", "mapreduce.reduce.memory.mb": 512, "mapreduce.map.memory.mb": 256, "yarn.app.mapreduce.am.command-opts": "-Xmx204m", "mapreduce.map.java.opts": "-Xmx204m" } }, "auto_security_group": false, "availability_zone": null, "count": 3, "flavor_id": "2", "id": "c7a3bea4-c898-446b-8c67-6d378d4c06c4", "security_groups": null, "use_autoconfig": true, "instances": [ { "created_at": "2015-09-14T10:57:37", "id": "f3633b30-c1e4-4144-930b-ab5b780b87be", "management_ip": "172.18.168.118", "updated_at": "2015-09-14T10:57:39", "instance_id": "0cf1ee81-aa72-48da-be2c-65bc2fa51f8f", "internal_ip": "10.50.0.63", "instance_name": "vanilla-cluster-worker-0" }, { "created_at": "2015-09-14T10:57:37", "id": "0d66fd93-f277-4a94-b46a-f5866aa0c38f", "management_ip": "172.18.168.117", "updated_at": "2015-09-14T10:57:40", "instance_id": "4a937391-b594-4ad0-9a53-00a99a691383", "internal_ip": "10.50.0.62", "instance_name": "vanilla-cluster-worker-1" }, { "created_at": "2015-09-14T10:57:37", "id": "0982cefd-5c58-436e-8f1e-c1d0830f18a7", "management_ip": "172.18.168.116", "updated_at": "2015-09-14T10:57:40", "instance_id": "839b1d56-6d0d-4aa4-9d05-30e029c276f8", "internal_ip": "10.50.0.61", "instance_name": "vanilla-cluster-worker-2" } ], "volumes_availability_zone": null, "created_at": "2015-09-14T10:57:11", "node_group_template_id": "846edb31-add5-46e6-a4ee-a4c339f99251", "updated_at": "2015-09-14T10:57:37", "volumes_per_node": 0, "is_proxy_gateway": false, "name": "worker", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "datanode", "nodemanager" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null } ], "provision_progress": [ { "created_at": "2015-09-14T10:57:18", "tenant_id": "808d5032ea0446889097723bfc8e919d", "id": "0a6d95f9-30f4-4434-823a-a38a7999a5af", "step_type": "Engine: create cluster", "step_name": "Create Heat stack", "updated_at": "2015-09-14T10:57:38", "successful": true, "total": 1, "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42" }, { "created_at": "2015-09-14T10:58:16", "tenant_id": "808d5032ea0446889097723bfc8e919d", "id": "29f2b587-c34c-4871-9ed9-9235b411cd9a", "step_type": "Engine: create cluster", "step_name": "Configure instances", "updated_at": "2015-09-14T10:58:22", "successful": true, "total": 4, "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42" }, { "created_at": "2015-09-14T11:00:27", "tenant_id": "808d5032ea0446889097723bfc8e919d", "id": "36f1efde-90f9-41c1-b409-aa1cf9623e3e", "step_type": "Plugin: start cluster", "step_name": "Start the following process(es): Oozie", "updated_at": "2015-09-14T11:01:15", "successful": true, "total": 1, "cluster_id": 
"e172d86c-906d-418e-a29c-6189f53bfa42" }, { "created_at": "2015-09-14T10:58:22", "tenant_id": "808d5032ea0446889097723bfc8e919d", "id": "602bcc27-3a2d-42c8-8aca-ebc475319c72", "step_type": "Plugin: configure cluster", "step_name": "Configure instances", "updated_at": "2015-09-14T10:59:21", "successful": true, "total": 4, "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42" }, { "created_at": "2015-09-14T10:59:21", "tenant_id": "808d5032ea0446889097723bfc8e919d", "id": "7e291df1-2d32-410d-ae89-33ab6f83cf17", "step_type": "Plugin: configure cluster", "step_name": "Configure topology data", "updated_at": "2015-09-14T10:59:37", "successful": true, "total": 1, "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42" }, { "created_at": "2015-09-14T11:00:01", "tenant_id": "808d5032ea0446889097723bfc8e919d", "id": "8ab7933c-ad61-4a4f-88db-23ce78ee10f6", "step_type": "Plugin: start cluster", "step_name": "Start the following process(es): DataNodes, NodeManagers", "updated_at": "2015-09-14T11:00:11", "successful": true, "total": 3, "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42" }, { "created_at": "2015-09-14T11:00:11", "tenant_id": "808d5032ea0446889097723bfc8e919d", "id": "9c8dc016-8c5b-4e80-9857-80c41f6bd971", "step_type": "Plugin: start cluster", "step_name": "Await DataNodes start up", "updated_at": "2015-09-14T11:00:21", "successful": true, "total": 1, "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42" }, { "created_at": "2015-09-14T11:00:21", "tenant_id": "808d5032ea0446889097723bfc8e919d", "id": "c6327532-222b-416c-858f-73dbb32b8e97", "step_type": "Plugin: start cluster", "step_name": "Start the following process(es): HistoryServer", "updated_at": "2015-09-14T11:00:27", "successful": true, "total": 1, "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42" }, { "created_at": "2015-09-14T10:57:41", "tenant_id": "808d5032ea0446889097723bfc8e919d", "id": "d3eca726-8b44-473a-ac29-fba45a893725", "step_type": "Engine: create cluster", "step_name": "Wait for instance accessibility", "updated_at": "2015-09-14T10:58:14", "successful": true, "total": 4, "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42" }, { "created_at": "2015-09-14T10:58:14", "tenant_id": "808d5032ea0446889097723bfc8e919d", "id": "d7a875ff-64bf-41aa-882d-b5061c8ee152", "step_type": "Engine: create cluster", "step_name": "Mount volumes to instances", "updated_at": "2015-09-14T10:58:15", "successful": true, "total": 0, "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42" }, { "created_at": "2015-09-14T10:59:55", "tenant_id": "808d5032ea0446889097723bfc8e919d", "id": "ded7d227-10b8-4cb0-ab6c-25da1462bb7a", "step_type": "Plugin: start cluster", "step_name": "Start the following process(es): ResourceManager", "updated_at": "2015-09-14T11:00:00", "successful": true, "total": 1, "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42" }, { "created_at": "2015-09-14T10:59:38", "tenant_id": "808d5032ea0446889097723bfc8e919d", "id": "e1701ff5-930a-4212-945a-43515dfe24d1", "step_type": "Plugin: start cluster", "step_name": "Start the following process(es): NameNode", "updated_at": "2015-09-14T10:59:54", "successful": true, "total": 1, "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42" }, { "created_at": "2015-09-14T10:57:38", "tenant_id": "808d5032ea0446889097723bfc8e919d", "id": "eaf0ab1b-bf8f-48f0-8f2c-fa4f82f539b9", "step_type": "Engine: create cluster", "step_name": "Assign IPs", "updated_at": "2015-09-14T10:57:41", "successful": true, "total": 4, "cluster_id": "e172d86c-906d-418e-a29c-6189f53bfa42" } ], "hadoop_version": "2.7.1", 
"use_autoconfig": true, "trust_id": null, "description": null, "created_at": "2015-09-14T10:57:11", "is_protected": false, "updated_at": "2015-09-14T11:01:15", "is_transient": false, "cluster_configs": { "HDFS": { "dfs.replication": 3 } }, "anti_affinity": [], "name": "vanilla-cluster", "default_image_id": "4118a476-dfdc-4b0e-8d5c-463cba08e9ae", "status": "Active" } ] } sahara-12.0.0/api-ref/source/v1.1/samples/clusters/cluster-create-request.json0000664000175000017500000000051313656752032027206 0ustar zuulzuul00000000000000{ "plugin_name": "vanilla", "hadoop_version": "2.7.1", "cluster_template_id": "57c92a7c-5c6a-42ea-9c6f-9f40a5aa4b36", "default_image_id": "4118a476-dfdc-4b0e-8d5c-463cba08e9ae", "user_keypair_id": "test", "name": "vanilla-cluster", "neutron_management_network": "b1610452-2933-46b0-bf31-660cfa5621bd" } sahara-12.0.0/api-ref/source/v1.1/samples/clusters/cluster-update-response.json0000664000175000017500000001307713656752032027404 0ustar zuulzuul00000000000000{ "cluster": { "is_public": true, "tenant_id": "808d5032ea0446889097723bfc8e919d", "shares": null, "domain_name": null, "status_description": "", "plugin_name": "vanilla", "neutron_management_network": "b1610452-2933-46b0-bf31-660cfa5621bd", "info": {}, "user_keypair_id": "test", "management_public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCfe9ARO+t9CybtuC1+cusDTeQL7wos1+U2dKPlCUJvNUn0PcunGefqWI4MUZPY9yGmvRqfINy7/xRQCzL0AwgqzwcCXamcK8JCC80uH7j8Vxa4kJheG1jxMoz/FpDSdRnzNZ+m7H5rjOwAQANhL7KatGLyCPQg9fqOoaIyCZE/A3fztm/XjJMpWnuANpUZubZtISEfu4UZKVk/DPSlBrbTZkTOvEog1LwZCZoTt0rq6a7PJFzJJkq0YecRudu/f3tpXbNe/F84sd9PhOSqcrRbm72WzglyEE8PuS1kuWpEz8G+Y5/0tQxnoh6khj9mgflrdCFuvpdutFLH4eN5MFDh Generated-by-Sahara\n", "id": "e172d86c-906d-418e-a29c-6189f53bfa42", "cluster_template_id": "57c92a7c-5c6a-42ea-9c6f-9f40a5aa4b36", "node_groups": [ { "image_id": null, "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": { "YARN": { "yarn.nodemanager.vmem-check-enabled": "false", "yarn.scheduler.maximum-allocation-mb": 2048, "yarn.scheduler.minimum-allocation-mb": 256, "yarn.nodemanager.resource.memory-mb": 2048 }, "MapReduce": { "yarn.app.mapreduce.am.resource.mb": 256, "mapreduce.task.io.sort.mb": 102, "mapreduce.reduce.java.opts": "-Xmx409m", "mapreduce.reduce.memory.mb": 512, "mapreduce.map.memory.mb": 256, "yarn.app.mapreduce.am.command-opts": "-Xmx204m", "mapreduce.map.java.opts": "-Xmx204m" } }, "auto_security_group": false, "availability_zone": null, "count": 1, "flavor_id": "2", "id": "0fe07f2a-0275-4bc0-93b2-c3c1e48e2815", "security_groups": null, "use_autoconfig": true, "instances": [], "volumes_availability_zone": null, "created_at": "2015-09-14T10:57:11", "node_group_template_id": "0bb9f1a4-0c44-4dc5-9452-6741c62ed9ae", "updated_at": "2015-09-14T10:57:12", "volumes_per_node": 0, "is_proxy_gateway": false, "name": "master", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "namenode", "resourcemanager", "oozie", "historyserver" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null }, { "image_id": null, "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": { "YARN": { "yarn.nodemanager.vmem-check-enabled": "false", "yarn.scheduler.maximum-allocation-mb": 2048, "yarn.scheduler.minimum-allocation-mb": 256, "yarn.nodemanager.resource.memory-mb": 2048 }, "MapReduce": { "yarn.app.mapreduce.am.resource.mb": 256, "mapreduce.task.io.sort.mb": 102, "mapreduce.reduce.java.opts": "-Xmx409m", "mapreduce.reduce.memory.mb": 512, 
"mapreduce.map.memory.mb": 256, "yarn.app.mapreduce.am.command-opts": "-Xmx204m", "mapreduce.map.java.opts": "-Xmx204m" } }, "auto_security_group": false, "availability_zone": null, "count": 3, "flavor_id": "2", "id": "c7a3bea4-c898-446b-8c67-6d378d4c06c4", "security_groups": null, "use_autoconfig": true, "instances": [], "volumes_availability_zone": null, "created_at": "2015-09-14T10:57:11", "node_group_template_id": "846edb31-add5-46e6-a4ee-a4c339f99251", "updated_at": "2015-09-14T10:57:12", "volumes_per_node": 0, "is_proxy_gateway": false, "name": "worker", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "datanode", "nodemanager" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null } ], "provision_progress": [], "hadoop_version": "2.7.1", "use_autoconfig": true, "trust_id": null, "description": null, "created_at": "2015-09-14T10:57:11", "is_protected": false, "updated_at": "2015-09-14T10:57:12", "is_transient": false, "cluster_configs": { "HDFS": { "dfs.replication": 3 } }, "anti_affinity": [], "name": "public-vanilla-cluster", "default_image_id": "4118a476-dfdc-4b0e-8d5c-463cba08e9ae", "status": "Validating" } } sahara-12.0.0/api-ref/source/v1.1/samples/clusters/multiple-clusters-create-response.json0000664000175000017500000000017313656752032031372 0ustar zuulzuul00000000000000{ "clusters": [ "a007a3e7-658f-4568-b0f2-fe2fd5efc554", "b012a6et-65hf-4566-b0f2-fe3fd7efc567" ] } sahara-12.0.0/api-ref/source/v1.1/samples/jobs/0000775000175000017500000000000013656752227021003 5ustar zuulzuul00000000000000sahara-12.0.0/api-ref/source/v1.1/samples/jobs/job-execute-request.json0000664000175000017500000000072713656752032025576 0ustar zuulzuul00000000000000{ "cluster_id": "811e1134-666f-4c48-bc92-afb5b10c9d8c", "input_id": "3e1bc8e6-8c69-4749-8e52-90d9341d15bc", "output_id": "52146b52-6540-4aac-a024-fee253cf52a9", "job_configs": { "configs": { "mapred.map.tasks": "1", "mapred.reduce.tasks": "1" }, "args": [ "arg1", "arg2" ], "params": { "param2": "value2", "param1": "value1" } } } sahara-12.0.0/api-ref/source/v1.1/samples/jobs/job-execute-response.json0000664000175000017500000000160613656752032025741 0ustar zuulzuul00000000000000{ "job_execution": { "input_id": "3e1bc8e6-8c69-4749-8e52-90d9341d15bc", "is_protected": false, "job_id": "310b0fc6-e1db-408e-8798-312e7500f3ac", "cluster_id": "811e1134-666f-4c48-bc92-afb5b10c9d8c", "output_id": "52146b52-6540-4aac-a024-fee253cf52a9", "created_at": "2015-09-15T09:49:24", "is_public": false, "id": "20da9edb-12ce-4b45-a473-41baeefef997", "tenant_id": "808d5032ea0446889097723bfc8e919d", "job_configs": { "configs": { "mapred.reduce.tasks": "1", "mapred.map.tasks": "1" }, "args": [ "arg1", "arg2" ], "params": { "param2": "value2", "param1": "value1" } }, "info": { "status": "PENDING" } } } sahara-12.0.0/api-ref/source/v1.1/samples/jobs/jobs-list-response.json0000664000175000017500000000462113656752032025435 0ustar zuulzuul00000000000000{ "jobs": [ { "is_public": false, "tenant_id": "9cd1314a0a31493282b6712b76a8fcda", "created_at": "2015-02-10 14:25:48", "id": "1a674c31-9aaa-4d07-b844-2bf200a1b836", "name": "Edp-test-job-3d60854e", "updated_at": null, "description": "", "interface": [], "libs": [ { "tenant_id": "9cd1314a0a31493282b6712b76a8fcda", "created_at": "2015-02-10 14:25:48", "id": "0ff4ac10-94a4-4e25-9ac9-603afe27b100", "name": "binary-job-339c2d1a.jar", "updated_at": null, "description": "", "url": "swift://Edp-test-c71e6bce.sahara/binary-job-339c2d1a.jar" } ], "type": "MapReduce", "mains": [], "is_protected": false 
}, { "is_public": false, "tenant_id": "9cd1314a0a31493282b6712b76a8fcda", "created_at": "2015-02-10 14:25:44", "id": "4d1f3759-3497-4927-8352-910bacf24e62", "name": "Edp-test-job-6b6953c8", "updated_at": null, "description": "", "interface": [], "libs": [ { "tenant_id": "9cd1314a0a31493282b6712b76a8fcda", "created_at": "2015-02-10 14:25:44", "id": "e0d47800-4ac1-4d63-a2e1-c92d669a44e2", "name": "binary-job-6f21a2f8.jar", "updated_at": null, "description": "", "url": "swift://Edp-test-b409ec68.sahara/binary-job-6f21a2f8.jar" } ], "type": "Pig", "mains": [ { "tenant_id": "9cd1314a0a31493282b6712b76a8fcda", "created_at": "2015-02-10 14:25:44", "id": "e073e896-f123-4b76-995f-901d786262df", "name": "binary-job-d4f8bd75.pig", "updated_at": null, "description": "", "url": "swift://Edp-test-b409ec68.sahara/binary-job-d4f8bd75.pig" } ], "is_protected": false } ], "markers": { "prev": null, "next": "c53832da-6e7b-449e-a166-9f9ce1718d03" } } sahara-12.0.0/api-ref/source/v1.1/samples/jobs/job-update-request.json0000664000175000017500000000013613656752032025410 0ustar zuulzuul00000000000000{ "description": "This is public pig job example", "name": "public-pig-job-example" } sahara-12.0.0/api-ref/source/v1.1/samples/jobs/job-create-request.json0000664000175000017500000000035413656752032025373 0ustar zuulzuul00000000000000{ "description": "This is pig job example", "mains": [ "90d9d5ec-11aa-48bd-bc8c-34936ce0db6e" ], "libs": [ "320a2ca7-25fd-4b48-9bc3-4fb1b6c4ff27" ], "type": "Pig", "name": "pig-job-example" } sahara-12.0.0/api-ref/source/v1.1/samples/jobs/job-update-response.json0000664000175000017500000000153613656752032025563 0ustar zuulzuul00000000000000{ "job": { "is_public": false, "tenant_id": "9cd1314a0a31493282b6712b76a8fcda", "created_at": "2015-02-10 14:25:48", "id": "1a674c31-9aaa-4d07-b844-2bf200a1b836", "name": "public-pig-job-example", "updated_at": null, "description": "This is public pig job example", "interface": [], "libs": [ { "tenant_id": "9cd1314a0a31493282b6712b76a8fcda", "created_at": "2015-02-10 14:25:48", "id": "0ff4ac10-94a4-4e25-9ac9-603afe27b100", "name": "binary-job.jar", "updated_at": null, "description": "", "url": "swift://Edp-test-c71e6bce.sahara/binary-job.jar" } ], "type": "MapReduce", "mains": [], "is_protected": false } } sahara-12.0.0/api-ref/source/v1.1/samples/jobs/job-create-response.json0000664000175000017500000000227713656752032025547 0ustar zuulzuul00000000000000{ "job": { "is_public": false, "tenant_id": "9cd1314a0a31493282b6712b76a8fcda", "created_at": "2015-03-27 08:48:38.630827", "id": "71defc8f-d005-484f-9d86-1aedf644d1ef", "name": "pig-job-example", "description": "This is pig job example", "interface": [], "libs": [ { "tenant_id": "9cd1314a0a31493282b6712b76a8fcda", "created_at": "2015-02-10 14:25:53", "id": "320a2ca7-25fd-4b48-9bc3-4fb1b6c4ff27", "name": "binary-job", "updated_at": null, "description": "", "url": "internal-db://c6a925fa-ac1d-4b2e-b88a-7054e1927521" } ], "type": "Pig", "is_protected": false, "mains": [ { "tenant_id": "9cd1314a0a31493282b6712b76a8fcda", "created_at": "2015-02-03 10:47:51", "id": "90d9d5ec-11aa-48bd-bc8c-34936ce0db6e", "name": "pig", "updated_at": null, "description": "", "url": "internal-db://872878f6-72ea-44db-8d1d-e6a6396d2df0" } ] } } sahara-12.0.0/api-ref/source/v1.1/samples/jobs/job-show-response.json0000664000175000017500000000146613656752032025263 0ustar zuulzuul00000000000000{ "job": { "is_public": false, "tenant_id": "9cd1314a0a31493282b6712b76a8fcda", "created_at": "2015-02-10 14:25:48", "id": 
"1a674c31-9aaa-4d07-b844-2bf200a1b836", "name": "Edp-test-job", "updated_at": null, "description": "", "interface": [], "libs": [ { "tenant_id": "9cd1314a0a31493282b6712b76a8fcda", "created_at": "2015-02-10 14:25:48", "id": "0ff4ac10-94a4-4e25-9ac9-603afe27b100", "name": "binary-job.jar", "updated_at": null, "description": "", "url": "swift://Edp-test-c71e6bce.sahara/binary-job.jar" } ], "type": "MapReduce", "mains": [], "is_protected": false } } sahara-12.0.0/api-ref/source/v1.1/samples/plugins/0000775000175000017500000000000013656752227021527 5ustar zuulzuul00000000000000sahara-12.0.0/api-ref/source/v1.1/samples/plugins/plugin-update-response.json0000664000175000017500000000172213656752032027030 0ustar zuulzuul00000000000000{ "plugin": { "plugin_labels": { "hidden": { "status": true, "mutable": true, "description": "Existence of plugin or its version is hidden, but still can be used for cluster creation by CLI and directly by client." }, "enabled": { "status": false, "mutable": true, "description": "Plugin or its version is enabled and can be used by user." } }, "description": "It's a fake plugin that aimed to work on the CirrOS images. It doesn't install Hadoop. It's needed to be able to test provisioning part of Sahara codebase itself.", "versions": [ "0.1" ], "tenant_id": "993f53c1f51845e48e013aeb632358d8", "title": "Fake Plugin", "version_labels": { "0.1": { "enabled": { "status": true, "mutable": true, "description": "Plugin or its version is enabled and can be used by user." } } }, "name": "fake" } } sahara-12.0.0/api-ref/source/v1.1/samples/plugins/plugin-show-response.json0000664000175000017500000000060013656752032026520 0ustar zuulzuul00000000000000{ "plugin": { "name": "vanilla", "versions": [ "1.2.1", "2.4.1", "2.6.0" ], "title": "Vanilla Apache Hadoop", "description": "The Apache Vanilla plugin provides the ability to launch upstream Vanilla Apache Hadoop cluster without any management consoles. It can also deploy the Oozie component." } } sahara-12.0.0/api-ref/source/v1.1/samples/plugins/plugin-update-request.json0000664000175000017500000000013413656752032026656 0ustar zuulzuul00000000000000{ "plugin_labels": { "enabled": { "status": false } } } sahara-12.0.0/api-ref/source/v1.1/samples/plugins/plugin-version-show-response.json0000664000175000017500000000552713656752032030220 0ustar zuulzuul00000000000000{ "plugin": { "name": "vanilla", "versions": [ "1.2.1", "2.4.1", "2.6.0" ], "description": "The Apache Vanilla plugin provides the ability to launch upstream Vanilla Apache Hadoop cluster without any management consoles. It can also deploy the Oozie component.", "required_image_tags": [ "vanilla", "2.6.0" ], "node_processes": { "JobFlow": [ "oozie" ], "HDFS": [ "namenode", "datanode", "secondarynamenode" ], "YARN": [ "resourcemanager", "nodemanager" ], "MapReduce": [ "historyserver" ], "Hadoop": [], "Hive": [ "hiveserver" ] }, "configs": [ { "default_value": "/tmp/hadoop-${user.name}", "name": "hadoop.tmp.dir", "priority": 2, "config_type": "string", "applicable_target": "HDFS", "is_optional": true, "scope": "node", "description": "A base for other temporary directories." }, { "default_value": true, "name": "hadoop.native.lib", "priority": 2, "config_type": "bool", "applicable_target": "HDFS", "is_optional": true, "scope": "node", "description": "Should native hadoop libraries, if present, be used." 
}, { "default_value": 1024, "name": "NodeManager Heap Size", "config_values": null, "priority": 1, "config_type": "int", "applicable_target": "YARN", "is_optional": false, "scope": "node", "description": null }, { "default_value": true, "name": "Enable Swift", "config_values": null, "priority": 1, "config_type": "bool", "applicable_target": "general", "is_optional": false, "scope": "cluster", "description": null }, { "default_value": true, "name": "Enable MySQL", "config_values": null, "priority": 1, "config_type": "bool", "applicable_target": "general", "is_optional": true, "scope": "cluster", "description": null } ], "title": "Vanilla Apache Hadoop" } } sahara-12.0.0/api-ref/source/v1.1/samples/plugins/plugins-list-response.json0000664000175000017500000000261713656752032026710 0ustar zuulzuul00000000000000{ "plugins": [ { "name": "vanilla", "description": "The Apache Vanilla plugin provides the ability to launch upstream Vanilla Apache Hadoop cluster without any management consoles. It can also deploy the Oozie component.", "versions": [ "1.2.1", "2.4.1", "2.6.0" ], "title": "Vanilla Apache Hadoop" }, { "name": "hdp", "description": "The Hortonworks Sahara plugin automates the deployment of the Hortonworks Data Platform (HDP) on OpenStack.", "versions": [ "1.3.2", "2.0.6" ], "title": "Hortonworks Data Platform" }, { "name": "spark", "description": "This plugin provides an ability to launch Spark on Hadoop CDH cluster without any management consoles.", "versions": [ "1.0.0", "0.9.1" ], "title": "Apache Spark" }, { "name": "cdh", "description": "The Cloudera Sahara plugin provides the ability to launch the Cloudera distribution of Apache Hadoop (CDH) with Cloudera Manager management console.", "versions": [ "5", "5.3.0" ], "title": "Cloudera Plugin" } ] } sahara-12.0.0/api-ref/source/v1.1/samples/node-group-templates/0000775000175000017500000000000013656752227024121 5ustar zuulzuul00000000000000././@LongLink0000000000000000000000000000014700000000000011217 Lustar 00000000000000sahara-12.0.0/api-ref/source/v1.1/samples/node-group-templates/node-group-template-update-request.jsonsahara-12.0.0/api-ref/source/v1.1/samples/node-group-templates/node-group-template-update-request.js0000664000175000017500000000033313656752032033306 0ustar zuulzuul00000000000000{ "plugin_name": "vanilla", "hadoop_version": "2.7.1", "node_processes": [ "datanode" ], "name": "new", "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "flavor_id": "2" } ././@LongLink0000000000000000000000000000014700000000000011217 Lustar 00000000000000sahara-12.0.0/api-ref/source/v1.1/samples/node-group-templates/node-group-template-create-request.jsonsahara-12.0.0/api-ref/source/v1.1/samples/node-group-templates/node-group-template-create-request.js0000664000175000017500000000044313656752032033271 0ustar zuulzuul00000000000000{ "plugin_name": "vanilla", "hadoop_version": "2.7.1", "node_processes": [ "namenode", "resourcemanager", "oozie", "historyserver" ], "name": "master", "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "flavor_id": "2" } ././@LongLink0000000000000000000000000000014600000000000011216 Lustar 00000000000000sahara-12.0.0/api-ref/source/v1.1/samples/node-group-templates/node-group-template-show-response.jsonsahara-12.0.0/api-ref/source/v1.1/samples/node-group-templates/node-group-template-show-response.jso0000664000175000017500000000216713656752032033340 0ustar zuulzuul00000000000000{ "node_group_template": { "is_public": false, "image_id": null, "tenant_id": "808d5032ea0446889097723bfc8e919d", 
"shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": {}, "auto_security_group": false, "is_default": false, "availability_zone": null, "plugin_name": "vanilla", "flavor_id": "2", "id": "0bb9f1a4-0c44-4dc5-9452-6741c62ed9ae", "description": null, "hadoop_version": "2.7.1", "use_autoconfig": true, "volumes_availability_zone": null, "created_at": "2015-09-14T10:20:11", "is_protected": false, "updated_at": null, "volumes_per_node": 0, "is_proxy_gateway": false, "name": "master", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "namenode", "resourcemanager", "oozie", "historyserver" ], "volumes_size": 0, "volume_local_to_instance": false, "security_groups": null, "volume_type": null } } ././@LongLink0000000000000000000000000000015000000000000011211 Lustar 00000000000000sahara-12.0.0/api-ref/source/v1.1/samples/node-group-templates/node-group-template-update-response.jsonsahara-12.0.0/api-ref/source/v1.1/samples/node-group-templates/node-group-template-update-response.j0000664000175000017500000000167013656752032033276 0ustar zuulzuul00000000000000{ "node_group_template": { "is_public": false, "tenant_id": "808d5032ea0446889097723bfc8e919d", "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": {}, "auto_security_group": false, "is_default": false, "availability_zone": null, "plugin_name": "vanilla", "is_protected": false, "flavor_id": "2", "id": "0bb9f1a4-0c44-4dc5-9452-6741c62ed9ae", "hadoop_version": "2.7.1", "use_autoconfig": true, "volumes_availability_zone": null, "created_at": "2015-09-14T10:20:11", "security_groups": null, "volumes_per_node": 0, "is_proxy_gateway": false, "name": "new", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "datanode" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null } } ././@LongLink0000000000000000000000000000014700000000000011217 Lustar 00000000000000sahara-12.0.0/api-ref/source/v1.1/samples/node-group-templates/node-group-templates-list-response.jsonsahara-12.0.0/api-ref/source/v1.1/samples/node-group-templates/node-group-templates-list-response.js0000664000175000017500000000510013656752032033325 0ustar zuulzuul00000000000000{ "node_group_templates": [ { "is_public": false, "image_id": null, "tenant_id": "808d5032ea0446889097723bfc8e919d", "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": {}, "auto_security_group": false, "is_default": false, "availability_zone": null, "plugin_name": "vanilla", "flavor_id": "2", "id": "0bb9f1a4-0c44-4dc5-9452-6741c62ed9ae", "description": null, "hadoop_version": "2.7.1", "use_autoconfig": true, "volumes_availability_zone": null, "created_at": "2015-09-14T10:20:11", "is_protected": false, "updated_at": null, "volumes_per_node": 0, "is_proxy_gateway": false, "name": "master", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "namenode", "resourcemanager", "oozie", "historyserver" ], "volumes_size": 0, "volume_local_to_instance": false, "security_groups": null, "volume_type": null }, { "is_public": false, "image_id": null, "tenant_id": "808d5032ea0446889097723bfc8e919d", "shares": null, "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": {}, "auto_security_group": false, "is_default": false, "availability_zone": null, "plugin_name": "vanilla", "flavor_id": "2", "id": "846edb31-add5-46e6-a4ee-a4c339f99251", "description": null, "hadoop_version": "2.7.1", "use_autoconfig": true, "volumes_availability_zone": null, "created_at": 
"2015-09-14T10:27:00", "is_protected": false, "updated_at": null, "volumes_per_node": 0, "is_proxy_gateway": false, "name": "worker", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "datanode", "nodemanager" ], "volumes_size": 0, "volume_local_to_instance": false, "security_groups": null, "volume_type": null } ], "markers": { "prev":"39dfc852-8588-4b61-8d2b-eb08a67ab240", "next":"eaa0bd97-ab54-43df-83ab-77a9774d7358" } } ././@LongLink0000000000000000000000000000015000000000000011211 Lustar 00000000000000sahara-12.0.0/api-ref/source/v1.1/samples/node-group-templates/node-group-template-create-response.jsonsahara-12.0.0/api-ref/source/v1.1/samples/node-group-templates/node-group-template-create-response.j0000664000175000017500000000201413656752032033250 0ustar zuulzuul00000000000000{ "node_group_template": { "is_public": false, "tenant_id": "808d5032ea0446889097723bfc8e919d", "floating_ip_pool": "033debed-aeb8-488c-b7d0-adb74c61faa5", "node_configs": {}, "auto_security_group": false, "is_default": false, "availability_zone": null, "plugin_name": "vanilla", "is_protected": false, "flavor_id": "2", "id": "0bb9f1a4-0c44-4dc5-9452-6741c62ed9ae", "hadoop_version": "2.7.1", "use_autoconfig": true, "volumes_availability_zone": null, "created_at": "2015-09-14T10:20:11", "security_groups": null, "volumes_per_node": 0, "is_proxy_gateway": false, "name": "master", "volume_mount_prefix": "/volumes/disk", "node_processes": [ "namenode", "resourcemanager", "oozie", "historyserver" ], "volumes_size": 0, "volume_local_to_instance": false, "volume_type": null } } sahara-12.0.0/api-ref/source/v1.1/samples/data-sources/0000775000175000017500000000000013656752227022440 5ustar zuulzuul00000000000000sahara-12.0.0/api-ref/source/v1.1/samples/data-sources/data-source-register-swift-response.json0000664000175000017500000000064113656752032032345 0ustar zuulzuul00000000000000{ "data_source": { "is_public": false, "tenant_id": "9cd1314a0a31493282b6712b76a8fcda", "is_protected": false, "created_at": "2015-03-26 11:18:10.691493", "id": "953831f2-0852-49d8-ac71-af5805e25256", "updated_at": null, "name": "swift_input", "description": "This is input", "url": "swift://container/text", "type": "swift" } } sahara-12.0.0/api-ref/source/v1.1/samples/data-sources/data-source-update-response.json0000664000175000017500000000067713656752032030662 0ustar zuulzuul00000000000000{ "data_source": { "is_public": true, "tenant_id": "9cd1314a0a31493282b6712b76a8fcda", "is_protected": false, "created_at": "2015-09-15 12:32:24.847493", "id": "953831f2-0852-49d8-ac71-af5805e25256", "updated_at": "2015-09-15 12:34:42.597435", "name": "swift_input", "description": "This is public input", "url": "swift://container/text", "type": "swift" } } sahara-12.0.0/api-ref/source/v1.1/samples/data-sources/data-source-show-response.json0000664000175000017500000000064113656752032030347 0ustar zuulzuul00000000000000{ "data_source": { "is_public": false, "tenant_id": "9cd1314a0a31493282b6712b76a8fcda", "is_protected": false, "created_at": "2015-03-26 11:18:10.691493", "id": "953831f2-0852-49d8-ac71-af5805e25256", "updated_at": null, "name": "swift_input", "description": "This is input", "url": "swift://container/text", "type": "swift" } } sahara-12.0.0/api-ref/source/v1.1/samples/data-sources/data-source-register-hdfs-request.json0000664000175000017500000000022713656752032031767 0ustar zuulzuul00000000000000{ "description": "This is hdfs input", "url": "hdfs://test-master-node:8020/user/hadoop/input", "type": "hdfs", "name": "hdfs_input" } 
sahara-12.0.0/api-ref/source/v1.1/samples/data-sources/data-sources-list-response.json0000664000175000017500000000165213656752032030530 0ustar zuulzuul00000000000000{ "data_sources": [ { "is_public": false, "tenant_id": "9cd1314a0a31493282b6712b76a8fcda", "is_protected": false, "created_at": "2015-03-26 11:18:10", "id": "953831f2-0852-49d8-ac71-af5805e25256", "name": "swift_input", "updated_at": null, "description": "This is input", "url": "swift://container/text", "type": "swift" }, { "is_public": false, "tenant_id": "9cd1314a0a31493282b6712b76a8fcda", "is_protected": false, "created_at": "2015-03-26 11:09:36", "id": "d7fffe9c-3b42-46a9-8be8-e98f586fa7a9", "name": "hdfs_input", "updated_at": null, "description": "This is hdfs input", "url": "hdfs://test-master-node:8020/user/hadoop/input", "type": "hdfs" } ] } sahara-12.0.0/api-ref/source/v1.1/samples/data-sources/data-source-register-hdfs-response.json0000664000175000017500000000067413656752032032143 0ustar zuulzuul00000000000000{ "data_source": { "is_public": false, "tenant_id": "9cd1314a0a31493282b6712b76a8fcda", "is_protected": false, "created_at": "2015-03-26 11:09:36.148464", "id": "d7fffe9c-3b42-46a9-8be8-e98f586fa7a9", "updated_at": null, "name": "hdfs_input", "description": "This is hdfs input", "url": "hdfs://test-master-node:8020/user/hadoop/input", "type": "hdfs" } } sahara-12.0.0/api-ref/source/v1.1/samples/data-sources/data-source-update-request.json0000664000175000017500000000011013656752032030472 0ustar zuulzuul00000000000000{ "description": "This is public input", "is_protected": true } sahara-12.0.0/api-ref/source/v1.1/samples/data-sources/data-source-register-swift-request.json0000664000175000017500000000031713656752032032177 0ustar zuulzuul00000000000000{ "description": "This is input", "url": "swift://container/text", "credentials": { "password": "swordfish", "user": "dev" }, "type": "swift", "name": "swift_input" } sahara-12.0.0/api-ref/source/v1.1/samples/job-binaries/0000775000175000017500000000000013656752227022412 5ustar zuulzuul00000000000000sahara-12.0.0/api-ref/source/v1.1/samples/job-binaries/show-response.json0000664000175000017500000000063413656752032026116 0ustar zuulzuul00000000000000{ "job_binary": { "is_public": false, "description": "an example jar file", "url": "swift://container/jar-example.jar", "tenant_id": "11587919cc534bcbb1027a161c82cf58", "created_at": "2013-10-15 14:25:04.970513", "updated_at": null, "id": "a716a9cd-9add-4b12-b1b6-cdb71aaef350", "name": "jar-example.jar", "is_protected": false } } sahara-12.0.0/api-ref/source/v1.1/samples/job-binaries/list-response.json0000664000175000017500000000243513656752032026112 0ustar zuulzuul00000000000000{ "binaries": [ { "is_public": false, "description": "", "url": "internal-db://d2498cbf-4589-484a-a814-81436c18beb3", "tenant_id": "11587919cc534bcbb1027a161c82cf58", "created_at": "2013-10-15 12:36:59.375060", "updated_at": null, "id": "84248975-3c82-4206-a58d-6e7fb3a563fd", "name": "example.pig", "is_protected": false }, { "is_public": false, "description": "", "url": "internal-db://22f1d87a-23c8-483e-a0dd-cb4a16dde5f9", "tenant_id": "11587919cc534bcbb1027a161c82cf58", "created_at": "2013-10-15 12:43:52.265899", "updated_at": null, "id": "508fc62d-1d58-4412-b603-bdab307bb926", "name": "udf.jar", "is_protected": false }, { "is_public": false, "description": "", "url": "swift://container/jar-example.jar", "tenant_id": "11587919cc534bcbb1027a161c82cf58", "created_at": "2013-10-15 14:25:04.970513", "updated_at": null, "id": 
"a716a9cd-9add-4b12-b1b6-cdb71aaef350", "name": "jar-example.jar", "is_protected": false } ] } sahara-12.0.0/api-ref/source/v1.1/samples/job-binaries/show-data-response0000664000175000017500000000024013656752032026046 0ustar zuulzuul00000000000000A = load '$INPUT' using PigStorage(':') as (fruit: chararray); B = foreach A generate com.hadoopbook.pig.Trim(fruit); store B into '$OUTPUT' USING PigStorage();sahara-12.0.0/api-ref/source/v1.1/samples/job-binaries/create-response.json0000664000175000017500000000063513656752032026402 0ustar zuulzuul00000000000000{ "job_binary": { "is_public": false, "description": "This is a job binary", "url": "swift://container/jar-example.jar", "tenant_id": "11587919cc534bcbb1027a161c82cf58", "created_at": "2013-10-15 14:49:20.106452", "id": "07f86352-ee8a-4b08-b737-d705ded5ff9c", "updated_at": null, "name": "jar-example.jar", "is_protected": false } } sahara-12.0.0/api-ref/source/v1.1/samples/job-binaries/create-request.json0000664000175000017500000000031413656752032026226 0ustar zuulzuul00000000000000{ "url": "swift://container/jar-example.jar", "name": "jar-example.jar", "description": "This is a job binary", "extra": { "password": "swordfish", "user": "admin" } } sahara-12.0.0/api-ref/source/v1.1/samples/job-binaries/update-response.json0000664000175000017500000000065113656752032026417 0ustar zuulzuul00000000000000{ "job_binary": { "is_public": false, "description": "This is a new job binary", "url": "swift://container/new-jar-example.jar", "tenant_id": "11587919cc534bcbb1027a161c82cf58", "created_at": "2015-09-15 12:42:51.421542", "updated_at": null, "id": "b713d7ad-4add-4f12-g1b6-cdg71aaef350", "name": "new-jar-example.jar", "is_protected": false } } sahara-12.0.0/api-ref/source/v1.1/samples/job-binaries/update-request.json0000664000175000017500000000021113656752032026241 0ustar zuulzuul00000000000000{ "url": "swift://container/new-jar-example.jar", "name": "new-jar-example.jar", "description": "This is a new job binary" } sahara-12.0.0/api-ref/source/v1.1/samples/image-registry/0000775000175000017500000000000013656752227022776 5ustar zuulzuul00000000000000sahara-12.0.0/api-ref/source/v1.1/samples/image-registry/image-tags-add-response.json0000664000175000017500000000145713656752032030272 0ustar zuulzuul00000000000000{ "image": { "updated": "2015-03-24T10:18:33Z", "metadata": { "_sahara_tag_vanilla": true, "_sahara_description": "Ubuntu image for Hadoop 2.7.1", "_sahara_username": "ubuntu", "_sahara_tag_some_other_tag": true, "_sahara_tag_2.7.1": true }, "id": "bb8d12b5-f9bb-49f0-aecb-739b8a9bec89", "minDisk": 0, "status": "ACTIVE", "tags": [ "vanilla", "some_other_tag", "2.7.1" ], "minRam": 0, "progress": 100, "username": "ubuntu", "created": "2015-02-03T10:28:39Z", "name": "sahara-vanilla-2.6.0-ubuntu-14.04", "description": "Ubuntu image for Hadoop 2.7.1", "OS-EXT-IMG-SIZE:size": 1101856768 } } sahara-12.0.0/api-ref/source/v1.1/samples/image-registry/image-tags-delete-request.json0000664000175000017500000000006113656752032030624 0ustar zuulzuul00000000000000{ "tags": [ "some_other_tag" ] } sahara-12.0.0/api-ref/source/v1.1/samples/image-registry/image-tags-add-request.json0000664000175000017500000000012513656752032030113 0ustar zuulzuul00000000000000{ "tags": [ "vanilla", "2.7.1", "some_other_tag" ] } sahara-12.0.0/api-ref/source/v1.1/samples/image-registry/image-show-response.json0000664000175000017500000000120213656752032027552 0ustar zuulzuul00000000000000{ "image": { "updated": "2015-02-03T10:29:32Z", "metadata": { "_sahara_username": "ubuntu", 
"_sahara_tag_vanilla": true, "_sahara_tag_2.6.0": true }, "id": "bb8d12b5-f9bb-49f0-aecb-739b8a9bec89", "minDisk": 0, "status": "ACTIVE", "tags": [ "vanilla", "2.6.0" ], "minRam": 0, "progress": 100, "username": "ubuntu", "created": "2015-02-03T10:28:39Z", "name": "sahara-vanilla-2.6.0-ubuntu-14.04", "description": null, "OS-EXT-IMG-SIZE:size": 1101856768 } } sahara-12.0.0/api-ref/source/v1.1/samples/image-registry/image-register-request.json0000664000175000017500000000012113656752032030247 0ustar zuulzuul00000000000000{ "username": "ubuntu", "description": "Ubuntu image for Hadoop 2.7.1" } sahara-12.0.0/api-ref/source/v1.1/samples/image-registry/images-list-response.json0000664000175000017500000000261013656752032027734 0ustar zuulzuul00000000000000{ "images": [ { "name": "ubuntu-vanilla-2.7.1", "id": "4118a476-dfdc-4b0e-8d5c-463cba08e9ae", "created": "2015-08-06T08:17:14Z", "metadata": { "_sahara_tag_2.7.1": true, "_sahara_username": "ubuntu", "_sahara_tag_vanilla": true }, "username": "ubuntu", "progress": 100, "OS-EXT-IMG-SIZE:size": 998716928, "status": "ACTIVE", "minDisk": 0, "tags": [ "vanilla", "2.7.1" ], "updated": "2015-09-04T09:35:09Z", "minRam": 0, "description": null }, { "name": "cdh-latest", "id": "ff74035b-9da7-4edf-981d-57f270ed337d", "created": "2015-09-04T11:56:44Z", "metadata": { "_sahara_username": "ubuntu", "_sahara_tag_5.4.0": true, "_sahara_tag_cdh": true }, "username": "ubuntu", "progress": 100, "OS-EXT-IMG-SIZE:size": 3281453056, "status": "ACTIVE", "minDisk": 0, "tags": [ "5.4.0", "cdh" ], "updated": "2015-09-04T12:46:42Z", "minRam": 0, "description": null } ] } sahara-12.0.0/api-ref/source/v1.1/samples/image-registry/image-register-response.json0000664000175000017500000000134113656752032030422 0ustar zuulzuul00000000000000{ "image": { "updated": "2015-03-24T10:05:10Z", "metadata": { "_sahara_description": "Ubuntu image for Hadoop 2.7.1", "_sahara_username": "ubuntu", "_sahara_tag_vanilla": true, "_sahara_tag_2.7.1": true }, "id": "bb8d12b5-f9bb-49f0-aecb-739b8a9bec89", "minDisk": 0, "status": "ACTIVE", "tags": [ "vanilla", "2.7.1" ], "minRam": 0, "progress": 100, "username": "ubuntu", "created": "2015-02-03T10:28:39Z", "name": "sahara-vanilla-2.7.1-ubuntu-14.04", "description": "Ubuntu image for Hadoop 2.7.1", "OS-EXT-IMG-SIZE:size": 1101856768 } } sahara-12.0.0/api-ref/source/v1.1/samples/image-registry/image-tags-delete-response.json0000664000175000017500000000134113656752032030774 0ustar zuulzuul00000000000000{ "image": { "updated": "2015-03-24T10:19:28Z", "metadata": { "_sahara_description": "Ubuntu image for Hadoop 2.7.1", "_sahara_username": "ubuntu", "_sahara_tag_vanilla": true, "_sahara_tag_2.7.1": true }, "id": "bb8d12b5-f9bb-49f0-aecb-739b8a9bec89", "minDisk": 0, "status": "ACTIVE", "tags": [ "vanilla", "2.7.1" ], "minRam": 0, "progress": 100, "username": "ubuntu", "created": "2015-02-03T10:28:39Z", "name": "sahara-vanilla-2.7.1-ubuntu-14.04", "description": "Ubuntu image for Hadoop 2.7.1", "OS-EXT-IMG-SIZE:size": 1101856768 } } sahara-12.0.0/api-ref/source/v1.1/index.rst0000664000175000017500000000073213656752032020237 0ustar zuulzuul00000000000000:tocdepth: 3 ------------------------ Data Processing API v1.1 ------------------------ .. rest_expand_all:: .. include:: cluster-templates.inc .. include:: clusters.inc .. include:: data-sources.inc .. include:: event-log.inc .. include:: image-registry.inc .. include:: job-binaries.inc .. include:: job-executions.inc .. include:: job-types.inc .. include:: job-binary-internals.inc .. include:: jobs.inc .. 
include:: node-group-templates.inc .. include:: plugins.inc sahara-12.0.0/api-ref/source/v1.1/job-executions.inc0000664000175000017500000001376413656752032022040 0ustar zuulzuul00000000000000.. -*- rst -*- ============== Job executions ============== A job execution object represents a Hadoop job that runs on a cluster. A job execution polls the status of a running job and reports it to the user. Also a user can cancel a running job. Refresh job execution status ============================ .. rest_method:: GET /v1.1/{project_id}/job-executions/{job_execution_id}/refresh-status Refreshes the status of and shows information for a job execution. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - job_execution_id: url_job_execution_id Response Parameters ------------------- .. rest_parameters:: parameters.yaml - info: info - output_id: output_id - start_time: start_time - job_id: job_id - updated_at: updated_at - tenant_id: tenant_id - created_at: created_at - args: args - data_source_urls: data_source_urls - return_code: return_code - oozie_job_id: oozie_job_id - is_protected: is_protected_3 - cluster_id: cluster_id - end_time: end_time - params: params - is_public: job_execution_is_public - input_id: input_id - configs: configs - job_execution: job_execution - id: job_execution_id Response Example ---------------- .. literalinclude:: samples/job-executions/job-ex-response.json :language: javascript List job executions =================== .. rest_method:: GET /v1.1/{project_id}/job-executions Lists available job executions. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - limit: limit - marker: marker - sort_by: sort_by_job_execution Response Parameters ------------------- .. rest_parameters:: parameters.yaml - markers: markers - prev: prev - next: next - info: info - output_id: output_id - start_time: start_time - job_id: job_id - updated_at: updated_at - tenant_id: tenant_id - created_at: created_at - args: args - data_source_urls: data_source_urls - return_code: return_code - oozie_job_id: oozie_job_id - is_protected: is_protected_3 - cluster_id: cluster_id - end_time: end_time - params: params - is_public: job_execution_is_public - input_id: input_id - configs: configs - job_execution: job_execution - id: job_execution_id - job_executions: job_executions Response Example ---------------- .. rest_method:: /v1.1/{project_id}/job-executions .. literalinclude:: samples/job-executions/list-response.json :language: javascript Show job execution details ========================== .. rest_method:: GET /v1.1/{project_id}/job-executions/{job_execution_id} Shows details for a job execution, by ID. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - job_execution_id: url_job_execution_id Response Parameters ------------------- .. rest_parameters:: parameters.yaml - info: info - output_id: output_id - start_time: start_time - job_id: job_id - updated_at: updated_at - tenant_id: tenant_id - created_at: created_at - args: args - data_source_urls: data_source_urls - return_code: return_code - oozie_job_id: oozie_job_id - is_protected: is_protected_3 - cluster_id: cluster_id - end_time: end_time - params: params - is_public: job_execution_is_public - input_id: input_id - configs: configs - job_execution: job_execution - id: job_execution_id Response Example ---------------- .. 
literalinclude:: samples/job-executions/job-ex-response.json :language: javascript Delete job execution ==================== .. rest_method:: DELETE /v1.1/{project_id}/job-executions/{job_execution_id} Deletes a job execution. Normal response codes:204 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - job_execution_id: url_job_execution_id Update job execution ==================== .. rest_method:: PATCH /v1.1/{project_id}/job-executions/{job_execution_id} Updates a job execution. Normal response codes:202 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - job_execution_id: url_job_execution_id Request Example --------------- .. literalinclude:: samples/job-executions/job-ex-update-request.json :language: javascript Response Parameters ------------------- .. rest_parameters:: parameters.yaml - info: info - output_id: output_id - start_time: start_time - job_id: job_id - updated_at: updated_at - tenant_id: tenant_id - created_at: created_at - args: args - data_source_urls: data_source_urls - return_code: return_code - oozie_job_id: oozie_job_id - is_protected: is_protected_3 - cluster_id: cluster_id - end_time: end_time - params: params - is_public: job_execution_is_public - input_id: input_id - configs: configs - job_execution: job_execution - id: job_execution_id Cancel job execution ==================== .. rest_method:: GET /v1.1/{project_id}/job-executions/{job_execution_id}/cancel Cancels a job execution. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - job_execution_id: url_job_execution_id Response Parameters ------------------- .. rest_parameters:: parameters.yaml - info: info - output_id: output_id - start_time: start_time - job_id: job_id - updated_at: updated_at - tenant_id: tenant_id - created_at: created_at - args: args - data_source_urls: data_source_urls - return_code: return_code - oozie_job_id: oozie_job_id - is_protected: is_protected_3 - cluster_id: cluster_id - end_time: end_time - params: params - is_public: job_execution_is_public - input_id: input_id - configs: configs - job_execution: job_execution - id: job_execution_id Response Example ---------------- .. literalinclude:: samples/job-executions/cancel-response.json :language: javascript sahara-12.0.0/api-ref/source/v1.1/jobs.inc0000664000175000017500000000755013656752032020033 0ustar zuulzuul00000000000000.. -*- rst -*- ==== Jobs ==== A job object lists the binaries that a job needs to run. To run a job, you must specify data sources and job parameters. You can run a job on an existing or new transient cluster. Run job ======= .. rest_method:: POST /v1.1/{project_id}/jobs/{job_id}/execute Runs a job. Normal response codes:202 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - job_id: url_job_id Request Example --------------- .. literalinclude:: samples/jobs/job-execute-request.json :language: javascript List jobs ========= .. rest_method:: GET /v1.1/{project_id}/jobs Lists all jobs. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - limit: limit - marker: marker - sort_by: sort_by_jobs Response Parameters ------------------- .. 
rest_parameters:: parameters.yaml - jobs: jobs - description: job_description - tenant_id: tenant_id - created_at: created_at - mains: mains - updated_at: updated_at - libs: libs - is_protected: object_is_protected - interface: interface - is_public: object_is_public - type: type - id: job_id - name: job_name - markers: markers - prev: prev - next: next Response Example ---------------- ..rest_method:: GET /v1.1/{project_id}/jobs?limit=2 .. literalinclude:: samples/jobs/jobs-list-response.json :language: javascript Create job ========== .. rest_method:: POST /v1.1/{project_id}/jobs Creates a job object. Normal response codes:202 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id Request Example --------------- .. literalinclude:: samples/jobs/job-create-request.json :language: javascript Response Parameters ------------------- .. rest_parameters:: parameters.yaml - description: job_description - tenant_id: tenant_id - created_at: created_at - mains: mains - updated_at: updated_at - libs: libs - is_protected: object_is_protected - interface: interface - is_public: object_is_public - type: type - id: job_id - name: job_name Show job details ================ .. rest_method:: GET /v1.1/{project_id}/jobs/{job_id} Shows details for a job. Normal response codes: 200 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - job_id: url_job_id Response Parameters ------------------- .. rest_parameters:: parameters.yaml - description: job_description - tenant_id: tenant_id - created_at: created_at - mains: mains - updated_at: updated_at - libs: libs - is_protected: object_is_protected - interface: interface - is_public: object_is_public - type: type - id: job_id - name: job_name Response Example ---------------- .. literalinclude:: samples/jobs/job-show-response.json :language: javascript Remove job ========== .. rest_method:: DELETE /v1.1/{project_id}/jobs/{job_id} Removes a job. Normal response codes:204 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - job_id: url_job_id Update job object ================= .. rest_method:: PATCH /v1.1/{project_id}/jobs/{job_id} Updates a job object. Normal response codes:202 Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - job_id: url_job_id Request Example --------------- .. literalinclude:: samples/jobs/job-update-request.json :language: javascript Response Parameters ------------------- .. rest_parameters:: parameters.yaml - description: job_description - tenant_id: tenant_id - created_at: created_at - mains: mains - updated_at: updated_at - libs: libs - is_protected: object_is_protected - interface: interface - is_public: object_is_public - type: type - id: job_id - name: job_name sahara-12.0.0/api-ref/source/v1.1/event-log.inc0000664000175000017500000000116513656752032020772 0ustar zuulzuul00000000000000.. -*- rst -*- ========= Event log ========= The event log feature provides information about cluster provisioning. In the event of errors, the event log shows the reason for the failure. Show progress ============= .. rest_method:: GET /v1.1/{project_id}/clusters/{cluster_id} Shows provisioning progress of cluster. Normal response codes: 200 Error response codes: Request ------- .. rest_parameters:: parameters.yaml - project_id: url_project_id - cluster_id: cluster_id Response Example ---------------- .. 
literalinclude:: samples/event-log/cluster-progress-response.json :language: javascript sahara-12.0.0/api-ref/source/index.rst0000664000175000017500000000027113656752032017550 0ustar zuulzuul00000000000000=================== Data Processing API =================== Contents: API content can be searched using the :ref:`search`. .. toctree:: :maxdepth: 2 v1.1/index v2/index sahara-12.0.0/PKG-INFO0000664000175000017500000000402613656752227014171 0ustar zuulzuul00000000000000Metadata-Version: 1.2 Name: sahara Version: 12.0.0 Summary: Sahara project Home-page: https://docs.openstack.org/sahara/latest/ Author: OpenStack Author-email: openstack-discuss@lists.openstack.org License: Apache Software License Description: ======================== Team and repository tags ======================== .. image:: https://governance.openstack.org/tc/badges/sahara.svg :target: https://governance.openstack.org/tc/reference/tags/index.html .. Change things from this point on OpenStack Data Processing ("Sahara") project ============================================ Sahara at wiki.openstack.org: https://wiki.openstack.org/wiki/Sahara Storyboard project: https://storyboard.openstack.org/#!/project/935 Sahara docs site: https://docs.openstack.org/sahara/latest/ Roadmap: https://wiki.openstack.org/wiki/Sahara/Roadmap Quickstart guide: https://docs.openstack.org/sahara/latest/user/quickstart.html How to participate: https://docs.openstack.org/sahara/latest/contributor/how-to-participate.html Source: https://opendev.org/openstack/sahara Bugs and feature requests: https://storyboard.openstack.org/#!/project/935 Release notes: https://docs.openstack.org/releasenotes/sahara/ License ------- Apache License Version 2.0 http://www.apache.org/licenses/LICENSE-2.0 Platform: UNKNOWN Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.6 Classifier: Programming Language :: Python :: 3.7 Classifier: Environment :: OpenStack Classifier: Intended Audience :: Information Technology Classifier: Intended Audience :: System Administrators Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: POSIX :: Linux Requires-Python: >=3.6 sahara-12.0.0/bindep.txt0000664000175000017500000000157413656752032015075 0ustar zuulzuul00000000000000# This file contains runtime (non-python) dependencies # More info at: https://docs.openstack.org/infra/bindep/readme.html libssl-dev [platform:dpkg] openssl-devel [platform:rpm] # updates of the localized release notes require msgmerge gettext # Define the basic (test) requirements extracted from bindata-fallback.txt # - mysqladmin and psql mariadb [platform:rpm] mariadb-devel [platform:rpm] mariadb-server [platform:rpm] dev-db/mariadb [platform:gentoo] mysql-client [platform:dpkg] mysql-server [platform:dpkg] postgresql postgresql-client [platform:dpkg] libpq-dev [platform:dpkg] postgresql-server [platform:rpm] postgresql-devel [platform:rpm] # The Python binding for libguestfs are used by the sahara-image-pack # command. python-guestfs [platform:dpkg] libguestfs-xfs [platform:dpkg] python3-libguestfs [platform:rpm] libguestfs-xfs [platform:redhat] xfsprogs [platform:suse] sahara-12.0.0/test-requirements.txt0000664000175000017500000000123513656752032017326 0ustar zuulzuul00000000000000# The order of packages is significant, because pip processes them in the order # of appearance. 
Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. hacking>=3.0,<3.1.0 # Apache-2.0 PyMySQL>=0.7.6 # MIT License bandit>=1.1.0,<1.6.0 # Apache-2.0 bashate>=0.5.1 # Apache-2.0 coverage!=4.4,>=4.0 # Apache-2.0 doc8>=0.6.0 # Apache-2.0 fixtures>=3.0.0 # Apache-2.0/BSD oslotest>=3.2.0 # Apache-2.0 stestr>=1.0.0 # Apache-2.0 psycopg2>=2.7.3 # LGPL/ZPL pylint==1.4.5 # GPLv2 testresources>=2.0.0 # Apache-2.0/BSD testscenarios>=0.4 # Apache-2.0/BSD testtools>=2.2.0 # MIT python-saharaclient>=1.4.0 # Apache-2.0 sahara-12.0.0/lower-constraints.txt0000664000175000017500000000556213656752032017332 0ustar zuulzuul00000000000000alabaster==0.7.10 alembic==0.8.10 amqp==2.2.2 appdirs==1.4.3 asn1crypto==0.24.0 astroid==1.3.8 Babel==2.3.4 bandit==1.1.0 bashate==0.5.1 bcrypt==3.1.4 botocore==1.5.1 cachetools==2.0.1 castellan==0.16.0 certifi==2018.1.18 cffi==1.11.5 chardet==3.0.4 click==6.7 cliff==2.11.0 cmd2==0.8.1 contextlib2==0.5.5 coverage==4.0 cryptography==2.1.4 debtcollector==1.19.0 decorator==4.2.1 deprecation==2.0 doc8==0.6.0 docutils==0.14 dogpile.cache==0.6.5 dulwich==0.19.0 enum-compat==0.0.2 eventlet==0.18.2 extras==1.0.0 fasteners==0.14.1 fixtures==3.0.0 flake8==2.6.2 Flask==1.0.2 future==0.16.0 futurist==1.6.0 gitdb2==2.0.3 GitPython==2.1.8 greenlet==0.4.13 hacking==1.1.0 idna==2.6 imagesize==1.0.0 iso8601==0.1.11 itsdangerous==0.24 Jinja2==2.10 jmespath==0.9.3 jsonpatch==1.21 jsonpointer==2.0 jsonschema==2.6.0 keystoneauth1==3.4.0 keystonemiddleware==4.17.0 kombu==4.1.0 linecache2==1.0.0 logilab-common==1.4.1 Mako==1.0.7 MarkupSafe==1.0 mccabe==0.2.1 microversion-parse==0.2.1 mock==2.0.0 monotonic==1.4 mox3==0.25.0 msgpack==0.5.6 munch==2.2.0 netaddr==0.7.19 netifaces==0.10.6 openstackdocstheme==1.20.0 openstacksdk==0.12.0 os-api-ref==1.6.0 os-client-config==1.29.0 os-service-types==1.2.0 osc-lib==1.10.0 oslo.cache==1.29.0 oslo.concurrency==3.26.0 oslo.config==5.2.0 oslo.context==2.19.2 oslo.db==4.27.0 oslo.i18n==3.15.3 oslo.log==3.36.0 oslo.messaging==5.29.0 oslo.middleware==3.31.0 oslo.policy==1.30.0 oslo.rootwrap==5.8.0 oslo.serialization==2.18.0 oslo.service==1.24.0 oslo.upgradecheck==0.1.0 oslo.utils==3.33.0 oslotest==3.2.0 packaging==17.1 paramiko==2.0.0 Paste==2.0.3 PasteDeploy==1.5.2 pbr==2.0.0 pika-pool==0.1.3 pika==0.10.0 prettytable==0.7.2 psycopg2==2.7.3 pyasn1==0.4.2 pycadf==2.7.0 pycparser==2.18 pycodestyle==2.4.0 pyflakes==0.8.1 Pygments==2.2.0 pyinotify==0.9.6 pylint==1.4.5 PyMySQL==0.7.6 PyNaCl==1.2.1 pyOpenSSL==17.5.0 pyparsing==2.2.0 pyperclip==1.6.0 python-barbicanclient==4.6.0 python-cinderclient==3.3.0 python-dateutil==2.7.0 python-editor==1.0.3 python-glanceclient==2.8.0 python-heatclient==1.10.0 python-keystoneclient==3.8.0 python-manilaclient==1.16.0 python-mimeparse==1.6.0 python-neutronclient==6.7.0 python-novaclient==9.1.0 python-openstackclient==3.14.0 python-saharaclient==1.4.0 python-subunit==1.2.0 python-swiftclient==3.2.0 pytz==2018.3 PyYAML==3.12 reno==2.5.0 repoze.lru==0.7 requests==2.14.2 requestsexceptions==1.4.0 restructuredtext-lint==1.1.3 rfc3986==1.1.0 Routes==2.4.1 simplejson==3.13.2 six==1.10.0 smmap2==2.0.3 snowballstemmer==1.2.1 Sphinx==1.6.2 sphinxcontrib-httpdomain==1.3.0 sphinxcontrib-websupport==1.0.1 sqlalchemy-migrate==0.11.0 SQLAlchemy==1.0.10 sqlparse==0.2.4 statsd==3.2.2 stestr==1.0.0 stevedore==1.20.0 Tempita==0.5.2 tenacity==4.9.0 testresources==2.0.0 testscenarios==0.4 testtools==2.2.0 tooz==1.58.0 traceback2==1.4.0 unittest2==1.1.0 urllib3==1.22 vine==1.1.4 voluptuous==0.11.1 
warlock==1.3.0 WebOb==1.7.1 Werkzeug==0.14.1 wrapt==1.10.11 sahara-12.0.0/AUTHORS0000664000175000017500000002625513656752226014153 0ustar zuulzuul00000000000000Abbass Marouni Abhishek Chanda Adrien Vergé Akanksha Agrawal Alberto Planas Alexander Aleksiyants Alexander Ignatov Alexander Kuznetsov Alexandra Settle Alina Nesterova Alok Jani Andreas Jaeger Andreas Jaeger Andrew Lazarev Andrey Pavlov Anh Tran Anusree ArchiFleKs Artem Osadchiy Artem Osadchyi Ashish Billore Atsushi SAKAI Bass T Bo Wang Bob Nettleton Brandon James Cao Xuan Hoang Chad Roberts Chandan Kumar Chang Bo Guo ChangBo Guo(gcb) Charles Short Chris Buccella Chris Buccella Christian Berendt Christian Berendt Colleen Murphy Corey Bryant Daniel Gonzalez Daniele Venzano Dao Cong Tien Davanum Srinivas Deliang Fan Demid Dementev Denis Egorenko DennyZhang Dexter Fryar Dina Belova Dirk Mueller Dmitry Mescheryakov Dong Ma Doug Hellmann Duan Jiong Elise Gafford Emilien Macchi Eohyung Lee Erik Bergenholtz Ernst Sjöstrand Ethan Gafford Evgeny Sikachev Fang Jinxing Fengqian Gao Flavio Percoco Francesco Vollero Francois Deppierraz Ghanshyam Mann Graham Hayes Grigoriy Roghkov Grigoriy Rozhkov Guo Shan Gyorgy Szombathelyi Gökhan IŞIK Ha Van Tu Hareesh Puthalath He Yongli Hironori Shiina Hongbin Lu Hui HX Xiang Ian Wienand Ihar Hrachyshka Ilya Tyaptin Ivoline Ngong Iwona Kotlarska Jacob Bin Wang James E. Blair Jamie Lennox Javeme Javier Pena Jaxon Wang Jeremy Freudberg Jeremy Liu Jeremy Stanley Jesse Pretorius JiHyunSong Jinay Vora Jinxing Fang Joe Gordon John Garbutt John Speidel Jon Maron Jonathan Halterman Jonathan Jozwiak Joseph D Natoli JuPing Julian Sy Julien Danjou Kazuki OIKAWA Kazuki Oikawa Kazuki Oikawa Ken Chen Kevin Vasko Khanh-Toan Tran Konovalov-Nik Lawrence Davison Li, Chen LiuNanke Luigi Toscano Lujin Luo Luong Anh Tuan M V P Nitesh Manishanker Talusani Marc Solanas Maria Malyarova Marianne Linhares Marianne Linhares Monteiro Markus Zoeller Martin Kletzander Mate Lakat Matthew Edmonds Matthew Farrellee Matthew Treinish Maxence Dalmais Michael Ionkin Michael Krotscheck Michael Lelyakin Michael McCune Michael McCune Mikhail Mikhail Lelyakin Mimansa Mohammed Naser Monty Taylor Nadya Privalova Nam Nguyen Hoai Ngo Quoc Cuong Nguyen Hai Nguyen Hai Truong Nguyen Hung Phuong Nicolas Haller Nicolas Haller Nikita Konovalov Nikolay Mahotkin Nikolay Starodubtsev Nirmal Ranganathan Oleg Borisenko Ondřej Nový OpenStack Release Bot PanFengyun Patrick Amor PavlovAndrey Pedro Navarro Pierre Padrixe Pritesh Kothari Rafik Renat Akhmerov Rich Bowen Robert Levas Ronald Bradford Ruslan Kamaldinov Sarvesh Ranjan Sean Dague Sean McGinnis Sergey Gotliv Sergey Lukjanov Sergey Lukjanov Sergey Lukjanov Sergey Reshetnyak Sergey Reshetnyak Sergey Vilgelm Shail Bhargava Sharan Kumar Monikanta Rajan Shilla Saebi Shu Yingya Shuquan Huang Sofiia Kostiuchenko Steve Kowalik SunAnChen Susanne Balle Tang Chen Telles Nobrega Telles Nobrega Telles Nobrega Tetiana Lashchova Thierry Carrez Thomas Bechtold Thomas Goirand Tim Kelsey Tim Millican Tin Lam Tingting Bao Travis McPeak Trevor McKay Vadim Rovachev Velmurugan Kumar Venkateswarlu Pallamala Victor Sergeyev Vinod Pandarinathan Vitaliy Levitski Vitaly Gridnev Vitaly Gridnev William Stevenson Xi Yang XiaBing Yao Xinyuan Huang Yaroslav Lobankov Yuanbin.Chen Zhao Lei Zhenguo Niu ZhiQiang Fan ZhongShengping Zhongyue Luo Zhuang Changkun akhiljain23 artemosadchiy bhujay caowei caoyue chao liu chenpengzi <1523688226@qq.com> chenxing deepakmourya dmitryme gaobin gecong1973 ghanshyam groghkov huang.zhiping iberezovskiy 
inspurericzhang jacobliberman jeremyfreudberg jiasen.lin kangyufei kavithahr kk kk lcsong lei-zhang-99cloud leiyashuai liuqing llg8212 lu huichun luhuichun makocchi mandar mathspanda melissaml msionkin nizam npraveen35 pangliye pawnesh.kumar pengyuesheng pratik-gadiya qiaomin ricolin shangxiaobj sharat.sharma singinforest skostiuchenkoHDP sun cheng sunyandi sven mark svenmark tanlin taoguo venkatamahesh vrovachev wangdequn wanghuagong wangqi weiting-chen wu.chunyang xiexs xuhaigang yangxurong yangyapeng yangyong yaseminti yatin yingya.shu yrunts yrunts zhang.lei zhanghongtao zhangxuanyuan zhangyanxian zhaorenming zhouyunfeng zhufl zhuli zhulingjie “leiyashuai” <“leiyashuai@inspur.com”> sahara-12.0.0/sahara/0000775000175000017500000000000013656752227014331 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/__init__.py0000664000175000017500000000000013656752032016422 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/version.py0000664000175000017500000000121513656752032016361 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from pbr import version version_info = version.VersionInfo('sahara') sahara-12.0.0/sahara/utils/0000775000175000017500000000000013656752227015471 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/utils/__init__.py0000664000175000017500000000000013656752032017562 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/utils/cluster.py0000664000175000017500000001460713656752032017526 0ustar zuulzuul00000000000000# Copyright (c) 2015 Intel Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
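# This module collects the cluster-level helpers shared by the Sahara
# services: the CLUSTER_STATUS_* constants, the status-transition helpers
# (change_cluster_status, change_cluster_status_description), instance
# enumeration (get_instances, count_instances) and generation of
# /etc/hosts and resolv.conf content for provisioned nodes.
#
# Minimal usage sketch; the helpers are defined below, while the
# surrounding calling code is illustrative only:
#
#     from sahara.utils import cluster as c_u
#
#     cluster = c_u.change_cluster_status(
#         cluster, c_u.CLUSTER_STATUS_CONFIGURING)
#     if cluster is None or not c_u.check_cluster_exists(cluster):
#         return  # the cluster was deleted concurrently
#     hosts = c_u.generate_etc_hosts(cluster)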
import socket from keystoneauth1 import exceptions as keystone_ex from oslo_config import cfg from oslo_log import log as logging from six.moves.urllib import parse from sahara import conductor as c from sahara import context from sahara import exceptions as e from sahara.utils.notification import sender from sahara.utils.openstack import base as auth_base conductor = c.API LOG = logging.getLogger(__name__) CONF = cfg.CONF # cluster status CLUSTER_STATUS_VALIDATING = "Validating" CLUSTER_STATUS_INFRAUPDATING = "InfraUpdating" CLUSTER_STATUS_SPAWNING = "Spawning" CLUSTER_STATUS_WAITING = "Waiting" CLUSTER_STATUS_PREPARING = "Preparing" CLUSTER_STATUS_CONFIGURING = "Configuring" CLUSTER_STATUS_STARTING = "Starting" CLUSTER_STATUS_ACTIVE = "Active" CLUSTER_STATUS_DECOMMISSIONING = "Decommissioning" CLUSTER_STATUS_ERROR = "Error" CLUSTER_STATUS_DELETING = "Deleting" CLUSTER_STATUS_AWAITINGTERMINATION = "AwaitingTermination" # cluster status -- Instances CLUSTER_STATUS_DELETING_INSTANCES = "Deleting Instances" CLUSTER_STATUS_ADDING_INSTANCES = "Adding Instances" # Scaling status CLUSTER_STATUS_SCALING = "Scaling" CLUSTER_STATUS_SCALING_SPAWNING = (CLUSTER_STATUS_SCALING + ": " + CLUSTER_STATUS_SPAWNING) CLUSTER_STATUS_SCALING_WAITING = (CLUSTER_STATUS_SCALING + ": " + CLUSTER_STATUS_WAITING) CLUSTER_STATUS_SCALING_PREPARING = (CLUSTER_STATUS_SCALING + ": " + CLUSTER_STATUS_PREPARING) # Rollback status CLUSTER_STATUS_ROLLBACK = "Rollback" CLUSTER_STATUS_ROLLBACK_SPAWNING = (CLUSTER_STATUS_ROLLBACK + ": " + CLUSTER_STATUS_SPAWNING) CLUSTER_STATUS_ROLLBACK_WAITING = (CLUSTER_STATUS_ROLLBACK + ": " + CLUSTER_STATUS_WAITING) CLUSTER_STATUS_ROLLBACK__PREPARING = (CLUSTER_STATUS_ROLLBACK + ": " + CLUSTER_STATUS_PREPARING) def change_cluster_status_description(cluster, status_description): try: ctx = context.ctx() return conductor.cluster_update( ctx, cluster, {'status_description': status_description}) except e.NotFoundException: return None def change_cluster_status(cluster, status, status_description=None): ctx = context.ctx() # Update cluster status. Race conditions with deletion are still possible, # but this reduces probability at least. cluster = conductor.cluster_get(ctx, cluster) if cluster else None if status_description is not None: change_cluster_status_description(cluster, status_description) # 'Deleting' is final and can't be changed if cluster is None or cluster.status == CLUSTER_STATUS_DELETING: return cluster update_dict = {"status": status} cluster = conductor.cluster_update(ctx, cluster, update_dict) conductor.cluster_provision_progress_update(ctx, cluster.id) LOG.info("Cluster status has been changed. 
New status=" "{status}".format(status=cluster.status)) sender.status_notify(cluster.id, cluster.name, cluster.status, "update") return cluster def count_instances(cluster): return sum([node_group.count for node_group in cluster.node_groups]) def check_cluster_exists(cluster): ctx = context.ctx() # check if cluster still exists (it might have been removed) cluster = conductor.cluster_get(ctx, cluster) return cluster is not None def get_instances(cluster, instances_ids=None): inst_map = {} for node_group in cluster.node_groups: for instance in node_group.instances: inst_map[instance.id] = instance if instances_ids is not None: return [inst_map[id] for id in instances_ids] else: return [v for v in inst_map.values()] def clean_cluster_from_empty_ng(cluster): ctx = context.ctx() for ng in cluster.node_groups: if ng.count == 0: conductor.node_group_remove(ctx, ng) def etc_hosts_entry_for_service(service): result = "" try: hostname = parse.urlparse( auth_base.url_for(service_type=service, endpoint_type="publicURL")).hostname except keystone_ex.EndpointNotFound: LOG.debug("Endpoint not found for service: '{}'".format(service)) return result overridden_ip = ( getattr(CONF, "%s_ip_accessible" % service.replace('-', '_'), None) ) if overridden_ip is not None: return "%s %s\n" % (overridden_ip, hostname) try: result = "%s %s\n" % (socket.gethostbyname(hostname), hostname) except socket.gaierror: LOG.warning("Failed to resolve hostname of service: '{}'" .format(service)) result = "# Failed to resolve {} during deployment\n".format(hostname) return result def _etc_hosts_for_services(hosts): # add alias for keystone and swift for service in ["identity", "object-store"]: hosts += etc_hosts_entry_for_service(service) return hosts def _etc_hosts_for_instances(hosts, cluster): for node_group in cluster.node_groups: for instance in node_group.instances: hosts += "%s %s %s\n" % (instance.internal_ip, instance.fqdn(), instance.hostname()) return hosts def generate_etc_hosts(cluster): hosts = "127.0.0.1 localhost\n" if not cluster.use_designate_feature(): hosts = _etc_hosts_for_instances(hosts, cluster) hosts = _etc_hosts_for_services(hosts) return hosts def generate_resolv_conf_diff(curr_resolv_conf): # returns string that contains nameservers # which are lacked in the 'curr_resolve_conf' resolv_conf = "" for ns in CONF.nameservers: if ns not in curr_resolv_conf: resolv_conf += "nameserver {}\n".format(ns) return resolv_conf sahara-12.0.0/sahara/utils/api.py0000664000175000017500000003444013656752032016613 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
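# Flask plumbing for the Sahara REST API: the Rest/RestV2 blueprint
# classes whose route decorators build the request context and set the
# default status codes, render() for serializing responses, request_data()
# for parsing JSON bodies, and the error helpers (bad_request, not_found,
# internal_error and friends).  RestV2 additionally validates the
# OpenStack API microversion header on every request and adds the Vary
# header to every response.
#
# Illustrative sketch of how an API module can wire a handler through one
# of these blueprints (the handler body is a placeholder; real handlers
# delegate to the service layer):
#
#     from sahara.utils import api as u
#
#     rest = u.RestV2('clusters', __name__)
#
#     @rest.get('/clusters')
#     def clusters_list():
#         return u.render(clusters=[])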
import re import traceback import flask import microversion_parse from oslo_log import log as logging from oslo_middleware import request_id as oslo_req_id import six from werkzeug import datastructures from sahara.api import microversion as mv from sahara import context from sahara import exceptions as ex from sahara.i18n import _ from sahara.utils import types from sahara.utils import wsgi LOG = logging.getLogger(__name__) class Rest(flask.Blueprint): def get(self, rule, status_code=200): return self._mroute('GET', rule, status_code) def post(self, rule, status_code=202): return self._mroute('POST', rule, status_code) def post_file(self, rule, status_code=202): return self._mroute('POST', rule, status_code, file_upload=True) def put(self, rule, status_code=202): return self._mroute('PUT', rule, status_code) def put_file(self, rule, status_code=202): return self._mroute('PUT', rule, status_code, file_upload=True) def delete(self, rule, status_code=204): return self._mroute('DELETE', rule, status_code) def patch(self, rule, status_code=202): return self._mroute('PATCH', rule, status_code) def _mroute(self, methods, rule, status_code=None, **kw): if isinstance(methods, six.string_types): methods = [methods] return self.route(rule, methods=methods, status_code=status_code, **kw) def route(self, rule, **options): status = options.pop('status_code', None) file_upload = options.pop('file_upload', False) def decorator(func): endpoint = options.pop('endpoint', func.__name__) def handler(**kwargs): context.set_ctx(None) LOG.debug("Rest.route.decorator.handler, kwargs={kwargs}" .format(kwargs=kwargs)) _init_resp_type(file_upload) # update status code if status: flask.request.status_code = status kwargs.pop("tenant_id", None) req_id = flask.request.environ.get(oslo_req_id.ENV_REQUEST_ID) auth_plugin = flask.request.environ.get('keystone.token_auth') ctx = context.Context( flask.request.headers['X-User-Id'], flask.request.headers['X-Tenant-Id'], flask.request.headers['X-Auth-Token'], flask.request.headers['X-Service-Catalog'], flask.request.headers['X-User-Name'], flask.request.headers['X-Tenant-Name'], flask.request.headers['X-Roles'].split(','), auth_plugin=auth_plugin, request_id=req_id) context.set_ctx(ctx) try: if flask.request.method in ['POST', 'PUT', 'PATCH']: kwargs['data'] = request_data() return func(**kwargs) except ex.Forbidden as e: return access_denied(e) except ex.SaharaException as e: return bad_request(e) except Exception as e: return internal_error(500, 'Internal Server Error', e) f_rule = "/" + rule self.add_url_rule(rule, endpoint, handler, **options) self.add_url_rule(rule + '.json', endpoint, handler, **options) self.add_url_rule(f_rule, endpoint, handler, **options) self.add_url_rule(f_rule + '.json', endpoint, handler, **options) return func return decorator def check_microversion_header(): requested_version = get_requested_microversion() if not re.match(mv.VERSION_STRING_REGEX, requested_version): bad_request_microversion(requested_version) if requested_version not in mv.API_VERSIONS: not_acceptable_microversion(requested_version) def add_vary_header(response): response.headers[mv.VARY_HEADER] = mv.OPENSTACK_API_VERSION_HEADER response.headers[mv.OPENSTACK_API_VERSION_HEADER] = "{} {}".format( mv.SAHARA_SERVICE_TYPE, get_requested_microversion()) return response class RestV2(Rest): def __init__(self, *args, **kwargs): super(RestV2, self).__init__(*args, **kwargs) self.before_request(check_microversion_header) self.after_request(add_vary_header) def route(self, rule, 
**options): status = options.pop('status_code', None) file_upload = options.pop('file_upload', False) def decorator(func): endpoint = options.pop('endpoint', func.__name__) def handler(**kwargs): context.set_ctx(None) LOG.debug("Rest.route.decorator.handler, kwargs={kwargs}" .format(kwargs=kwargs)) _init_resp_type(file_upload) # update status code if status: flask.request.status_code = status kwargs.pop("tenant_id", None) req_id = flask.request.environ.get(oslo_req_id.ENV_REQUEST_ID) auth_plugin = flask.request.environ.get('keystone.token_auth') ctx = context.Context( flask.request.headers['X-User-Id'], flask.request.headers['X-Tenant-Id'], flask.request.headers['X-Auth-Token'], flask.request.headers['X-Service-Catalog'], flask.request.headers['X-User-Name'], flask.request.headers['X-Tenant-Name'], flask.request.headers['X-Roles'].split(','), auth_plugin=auth_plugin, request_id=req_id) context.set_ctx(ctx) try: if flask.request.method in ['POST', 'PUT', 'PATCH']: kwargs['data'] = request_data() return func(**kwargs) except ex.Forbidden as e: return access_denied(e) except ex.SaharaException as e: return bad_request(e) except Exception as e: return internal_error(500, 'Internal Server Error', e) f_rule = "/" + rule self.add_url_rule(rule, endpoint, handler, **options) self.add_url_rule(rule + '.json', endpoint, handler, **options) self.add_url_rule(f_rule, endpoint, handler, **options) self.add_url_rule(f_rule + '.json', endpoint, handler, **options) return func return decorator RT_JSON = datastructures.MIMEAccept([("application/json", 1)]) def _init_resp_type(file_upload): """Extracts response content type.""" # get content type from Accept header resp_type = flask.request.accept_mimetypes # url /foo.json if flask.request.path.endswith('.json'): resp_type = RT_JSON flask.request.resp_type = resp_type # set file upload flag flask.request.file_upload = file_upload def render(res=None, resp_type=None, status=None, name=None, **kwargs): if not res and type(res) is not types.Page: res = {} if type(res) is dict: res.update(kwargs) elif type(res) is types.Page: result = {name: [item.to_dict() for item in res]} result.update(kwargs) if res.prev or res.next or ('marker' in get_request_args()): result["markers"] = {"prev": res.prev, "next": res.next} res = result elif kwargs: # can't merge kwargs into the non-dict res abort_and_log(500, _("Non-dict and non-empty kwargs passed to render")) status_code = getattr(flask.request, 'status_code', None) if status: status_code = status if not status_code: status_code = 200 if not resp_type: resp_type = getattr(flask.request, 'resp_type', RT_JSON) if not resp_type: resp_type = RT_JSON serializer = None if "application/json" in resp_type: resp_type = RT_JSON serializer = wsgi.JSONDictSerializer() else: raise ex.InvalidDataException( _("Content type '%s' isn't supported") % resp_type) body = serializer.serialize(res) resp_type = str(resp_type) return flask.Response(response=body, status=status_code, mimetype=resp_type) def request_data(): if hasattr(flask.request, 'parsed_data'): return flask.request.parsed_data if (flask.request.content_length is None or not flask.request.content_length > 0): LOG.debug("Empty body provided in request") return dict() if flask.request.file_upload: return flask.request.data deserializer = None content_type = flask.request.mimetype if not content_type or content_type in RT_JSON: deserializer = wsgi.JSONDeserializer() else: raise ex.InvalidDataException( _("Content type '%s' isn't supported") % content_type) # parsed request data 
to avoid unwanted re-parsings parsed_data = deserializer.deserialize(flask.request.data)['body'] flask.request.parsed_data = parsed_data return flask.request.parsed_data def get_request_args(): return flask.request.args def get_requested_microversion(): requested_version = microversion_parse.get_version( flask.request.headers, mv.SAHARA_SERVICE_TYPE ) if requested_version is None: requested_version = mv.MIN_API_VERSION elif requested_version == mv.LATEST: requested_version = mv.MAX_API_VERSION return requested_version def abort_and_log(status_code, descr, exc=None): LOG.error("Request aborted with status code {code} and " "message '{message}'".format(code=status_code, message=descr)) if exc is not None: LOG.error(traceback.format_exc()) flask.abort(status_code, description=descr) def render_error_message(error_code, error_message, error_name, **msg_kwargs): message = { "error_code": error_code, "error_message": error_message, "error_name": error_name } message.update(**msg_kwargs) resp = render(message) resp.status_code = error_code return resp def not_acceptable_microversion(requested_version): message = ("Version {} is not supported by the API. " "Minimum is {} and maximum is {}.".format( requested_version, mv.MIN_API_VERSION, mv.MAX_API_VERSION )) resp = render_error_message( mv.NOT_ACCEPTABLE_STATUS_CODE, message, mv.NOT_ACCEPTABLE_STATUS_NAME, max_version=mv.MAX_API_VERSION, min_version=mv.MIN_API_VERSION ) flask.abort(resp) def bad_request_microversion(requested_version): message = ("API Version String {} is of invalid format. Must be of format" " MajorNum.MinorNum.").format(requested_version) resp = render_error_message( mv.BAD_REQUEST_STATUS_CODE, message, mv.BAD_REQUEST_STATUS_NAME, max_version=mv.MAX_API_VERSION, min_version=mv.MIN_API_VERSION ) flask.abort(resp) def invalid_param_error(status_code, descr, exc=None): LOG.error("Request aborted with status code {code} and " "message '{message}'".format(code=status_code, message=descr)) if exc is not None: LOG.error(traceback.format_exc()) error_code = "INVALID_PARAMS_ON_REQUEST" return render_error_message(status_code, descr, error_code) def internal_error(status_code, descr, exc=None): LOG.error("Request aborted with status code {code} and " "message '{message}'".format(code=status_code, message=descr)) if exc is not None: LOG.error(traceback.format_exc()) error_code = "INTERNAL_SERVER_ERROR" if status_code == 501: error_code = "NOT_IMPLEMENTED_ERROR" return render_error_message(status_code, descr, error_code) def bad_request(error): error_code = 400 LOG.error("Validation Error occurred: " "error_code={code}, error_message={message}, " "error_name={name}".format(code=error_code, message=error.message, name=error.code)) return render_error_message(error_code, error.message, error.code) def access_denied(error): error_code = 403 LOG.error("Access Denied: error_code={code}, error_message={message}, " "error_name={name}".format(code=error_code, message=error.message, name=error.code)) return render_error_message(error_code, error.message, error.code) def not_found(error): error_code = 404 LOG.error("Not Found exception occurred: " "error_code={code}, error_message={message}, " "error_name={name}".format(code=error_code, message=error.message, name=error.code)) return render_error_message(error_code, error.message, error.code) def to_wrapped_dict(func, id, *args, **kwargs): return render(to_wrapped_dict_no_render(func, id, *args, **kwargs)) def to_wrapped_dict_no_render(func, id, *args, **kwargs): obj = func(id, *args, **kwargs) if 
obj is None: e = ex.NotFoundException( {'id': id}, _('Object with %s not found')) return not_found(e) return obj.to_wrapped_dict() def _replace_hadoop_version_plugin_version(obj): dict.update(obj, {'plugin_version': obj['hadoop_version']}) dict.pop(obj, 'hadoop_version') def _replace_tenant_id_project_id(obj): dict.update(obj, {'project_id': obj['tenant_id']}) dict.pop(obj, 'tenant_id') sahara-12.0.0/sahara/utils/patches.py0000664000175000017500000000346213656752032017471 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import eventlet EVENTLET_MONKEY_PATCH_MODULES = dict(os=True, select=True, socket=True, thread=True, time=True) def patch_all(): """Apply all patches. List of patches: * eventlet's monkey patch for all cases; """ eventlet_monkey_patch() def eventlet_monkey_patch(): """Apply eventlet's monkey patch. This call should be the first call in application. It's safe to call monkey_patch multiple times. """ eventlet.monkey_patch(**EVENTLET_MONKEY_PATCH_MODULES) # Monkey patch the original current_thread to use the up-to-date _active # global variable. See https://bugs.launchpad.net/bugs/1863021 and # https://github.com/eventlet/eventlet/issues/592 import __original_module_threading as orig_threading import threading # noqa orig_threading.current_thread.__globals__['_active'] = threading._active def eventlet_import_monkey_patched(module): """Returns module monkey patched by eventlet. It's needed for some tests, for example, context test. """ return eventlet.import_patched(module, **EVENTLET_MONKEY_PATCH_MODULES) sahara-12.0.0/sahara/utils/notification/0000775000175000017500000000000013656752227020157 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/utils/notification/__init__.py0000664000175000017500000000000013656752032022250 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/utils/notification/sender.py0000664000175000017500000000574713656752032022020 0ustar zuulzuul00000000000000# Copyright (c) 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
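# Helpers for emitting oslo.messaging notifications about clusters:
# status_notify() reports create/update/delete status transitions and
# health_notify() reports verification (health check) results.  Both read
# the notification level and publisher id from the
# [oslo_messaging_notifications] options registered below.
#
# Illustrative call, matching the way the cluster status helpers use it
# (the argument values are examples only):
#
#     from sahara.utils.notification import sender
#
#     sender.status_notify(cluster.id, cluster.name, cluster.status,
#                          "update")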
from oslo_config import cfg from oslo_log import log as logging from sahara import context from sahara.utils import rpc as messaging LOG = logging.getLogger(__name__) SERVICE = 'sahara' CLUSTER_EVENT_TEMPLATE = "sahara.cluster.%s" HEALTH_EVENT_TYPE = CLUSTER_EVENT_TEMPLATE % "health" notifier_opts = [ cfg.StrOpt('level', default='INFO', deprecated_name='notification_level', deprecated_group='DEFAULT', help='Notification level for outgoing notifications'), cfg.StrOpt('publisher_id', deprecated_name='notification_publisher_id', deprecated_group='DEFAULT', help='Identifier of the publisher') ] notifier_opts_group = 'oslo_messaging_notifications' CONF = cfg.CONF CONF.register_opts(notifier_opts, group=notifier_opts_group) def _get_publisher(): publisher_id = CONF.oslo_messaging_notifications.publisher_id if publisher_id is None: publisher_id = SERVICE return publisher_id def _notify(event_type, body): LOG.debug("Notification about cluster is going to be sent. Notification " "type={type}".format(type=event_type)) ctx = context.ctx() level = CONF.oslo_messaging_notifications.level body.update({'project_id': ctx.tenant_id, 'user_id': ctx.user_id}) client = messaging.get_notifier(_get_publisher()) method = getattr(client, level.lower()) method(ctx, event_type, body) def _health_notification_body(cluster, health_check): verification = cluster.verification return { 'cluster_id': cluster.id, 'cluster_name': cluster.name, 'verification_id': verification['id'], 'health_check_status': health_check['status'], 'health_check_name': health_check['name'], 'health_check_description': health_check['description'], 'created_at': health_check['created_at'], 'updated_at': health_check['updated_at'] } def status_notify(cluster_id, cluster_name, cluster_status, ev_type): """Sends notification about creating/updating/deleting cluster.""" _notify(CLUSTER_EVENT_TEMPLATE % ev_type, { 'cluster_id': cluster_id, 'cluster_name': cluster_name, 'cluster_status': cluster_status}) def health_notify(cluster, health_check): """Sends notification about current cluster health.""" _notify(HEALTH_EVENT_TYPE, _health_notification_body(cluster, health_check)) sahara-12.0.0/sahara/utils/proxy.py0000664000175000017500000002634313656752032017226 0ustar zuulzuul00000000000000# Copyright (c) 2014 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
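# Helpers for the temporary Keystone "proxy" users that Sahara creates,
# when use_domain_for_proxy_users is enabled, so that job executions and
# clusters can reach Swift through trust-delegated credentials instead of
# the end user's own password.
#
# Rough lifecycle sketch as driven by the EDP service (the job_execution
# object comes from the conductor layer and is illustrative here):
#
#     from sahara.utils import proxy as p
#
#     if p.job_execution_requires_proxy_user(job_execution):
#         p.create_proxy_user_for_job_execution(job_execution)
#     ...
#     update = p.delete_proxy_user_for_job_execution(job_execution)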
from oslo_config import cfg from oslo_log import log as logging from oslo_utils import uuidutils import six from sahara import conductor as c from sahara import context from sahara import exceptions as ex from sahara.i18n import _ from sahara.service.castellan import utils as key_manager from sahara.service.edp import job_utils from sahara.service import trusts as t from sahara.swift import utils as su from sahara.utils.openstack import base as b from sahara.utils.openstack import keystone as k PROXY_DOMAIN = None conductor = c.API LOG = logging.getLogger(__name__) CONF = cfg.CONF opts = [ cfg.BoolOpt('use_domain_for_proxy_users', default=False, help='Enables Sahara to use a domain for creating temporary ' 'proxy users to access Swift. If this is enabled ' 'a domain must be created for Sahara to use.'), cfg.StrOpt('proxy_user_domain_name', default=None, help='The domain Sahara will use to create new proxy users ' 'for Swift object access.'), cfg.ListOpt('proxy_user_role_names', default=['member'], help='A list of the role names that the proxy user should ' 'assume through trust for Swift object access.') ] CONF.register_opts(opts) def create_proxy_user_for_job_execution(job_execution): '''Creates a proxy user and adds the credentials to the job execution :param job_execution: The job execution model to update ''' username = 'job_{0}'.format(job_execution.id) password = key_manager.store_secret(proxy_user_create(username)) current_user = k.auth() proxy_user = k.auth_for_proxy(username, password) trust_id = t.create_trust(trustor=current_user, trustee=proxy_user, role_names=CONF.proxy_user_role_names) update = {'job_configs': job_execution.job_configs.to_dict()} update['job_configs']['proxy_configs'] = { 'proxy_username': username, 'proxy_password': password, 'proxy_trust_id': trust_id } conductor.job_execution_update(context.ctx(), job_execution, update) def delete_proxy_user_for_job_execution(job_execution): '''Delete a proxy user based on a JobExecution :param job_execution: The job execution with proxy user information :returns: An updated job_configs dictionary or None ''' proxy_configs = job_execution.job_configs.get('proxy_configs') if proxy_configs is not None: proxy_username = proxy_configs.get('proxy_username') proxy_trust_id = proxy_configs.get('proxy_trust_id') proxy_user = k.auth_for_proxy(proxy_username, key_manager.get_secret( proxy_configs.get('proxy_password')), proxy_trust_id) t.delete_trust(proxy_user, proxy_trust_id) proxy_user_delete(proxy_username) key_manager.delete_secret(proxy_configs.get('proxy_password')) update = job_execution.job_configs.to_dict() del update['proxy_configs'] return update return None def create_proxy_user_for_cluster(cluster): '''Creates a proxy user and adds the credentials to the cluster :param cluster: The cluster model to update ''' if cluster.cluster_configs.get('proxy_configs'): return cluster username = 'cluster_{0}'.format(cluster.id) password = key_manager.store_secret(proxy_user_create(username)) current_user = k.auth() proxy_user = k.auth_for_proxy(username, password) trust_id = t.create_trust(trustor=current_user, trustee=proxy_user, role_names=CONF.proxy_user_role_names) update = {'cluster_configs': cluster.cluster_configs.to_dict()} update['cluster_configs']['proxy_configs'] = { 'proxy_username': username, 'proxy_password': password, 'proxy_trust_id': trust_id } return conductor.cluster_update(context.ctx(), cluster, update) def delete_proxy_user_for_cluster(cluster): '''Delete a proxy user based on a Cluster :param cluster: The 
cluster model with proxy user information ''' proxy_configs = cluster.cluster_configs.get('proxy_configs') if proxy_configs is not None: proxy_username = proxy_configs.get('proxy_username') proxy_trust_id = proxy_configs.get('proxy_trust_id') proxy_user = k.auth_for_proxy(proxy_username, key_manager.get_secret( proxy_configs.get('proxy_password')), proxy_trust_id) t.delete_trust(proxy_user, proxy_trust_id) proxy_user_delete(proxy_username) key_manager.delete_secret(proxy_configs.get('proxy_password')) update = {'cluster_configs': cluster.cluster_configs.to_dict()} del update['cluster_configs']['proxy_configs'] conductor.cluster_update(context.ctx(), cluster, update) def domain_for_proxy(): '''Return the proxy domain or None If configured to use the proxy domain, this function will return that domain. If not configured to use the proxy domain, this function will return None. If the proxy domain can't be found this will raise an exception. :returns: A Keystone Domain object or None. :raises ConfigurationError: If the domain is requested but not specified. :raises NotFoundException: If the domain name is specified but cannot be found. ''' if CONF.use_domain_for_proxy_users is False: return None if CONF.proxy_user_domain_name is None: raise ex.ConfigurationError(_('Proxy domain requested but not ' 'specified.')) admin = k.client_for_admin() global PROXY_DOMAIN if not PROXY_DOMAIN: domain_list = b.execute_with_retries( admin.domains.list, name=CONF.proxy_user_domain_name) if len(domain_list) == 0: raise ex.NotFoundException( value=CONF.proxy_user_domain_name, message_template=_('Failed to find domain %s')) # the domain name should be globally unique in Keystone if len(domain_list) > 1: raise ex.NotFoundException( value=CONF.proxy_user_domain_name, message_template=_('Unexpected results found when searching ' 'for domain %s')) PROXY_DOMAIN = domain_list[0] return PROXY_DOMAIN def job_execution_requires_proxy_user(job_execution): '''Returns True if the job execution requires a proxy user.''' def _check_values(values): return any(value.startswith( su.SWIFT_INTERNAL_PREFIX) for value in values if ( isinstance(value, six.string_types))) if CONF.use_domain_for_proxy_users is False: return False paths = [conductor.data_source_get(context.ctx(), job_execution.output_id), conductor.data_source_get(context.ctx(), job_execution.input_id)] if _check_values(ds.url for ds in paths if ds): return True if _check_values(six.itervalues( job_execution.job_configs.get('configs', {}))): return True if _check_values(six.itervalues( job_execution.job_configs.get('params', {}))): return True if _check_values(job_execution.job_configs.get('args', [])): return True job = conductor.job_get(context.ctx(), job_execution.job_id) if _check_values(main.url for main in job.mains): return True if _check_values(lib.url for lib in job.libs): return True # We did the simple checks, now if data_source referencing is # enabled and we have values that could be a name or uuid, # query for data_sources that match and contain a swift path by_name, by_uuid = job_utils.may_contain_data_source_refs( job_execution.job_configs) if by_name: names = tuple(job_utils.find_possible_data_source_refs_by_name( job_execution.job_configs)) # do a query here for name in names and path starts with swift-prefix if names and conductor.data_source_count( context.ctx(), name=names, url=su.SWIFT_INTERNAL_PREFIX+'%') > 0: return True if by_uuid: uuids = tuple(job_utils.find_possible_data_source_refs_by_uuid( job_execution.job_configs)) # do a query here for 
id in uuids and path starts with swift-prefix if uuids and conductor.data_source_count( context.ctx(), id=uuids, url=su.SWIFT_INTERNAL_PREFIX+'%') > 0: return True return False def proxy_domain_users_list(): '''Return a list of all users in the proxy domain.''' admin = k.client_for_admin() domain = domain_for_proxy() if domain: return b.execute_with_retries(admin.users.list, domain=domain.id) return [] def proxy_user_create(username): '''Create a new user in the proxy domain Creates the username specified with a random password. :param username: The name of the new user. :returns: The password created for the user. ''' admin = k.client_for_admin() domain = domain_for_proxy() password = uuidutils.generate_uuid() b.execute_with_retries( admin.users.create, name=username, password=password, domain=domain.id) LOG.debug('Created proxy user {username}'.format(username=username)) return password def proxy_user_delete(username=None, user_id=None): '''Delete the user from the proxy domain. :param username: The name of the user to delete. :param user_id: The id of the user to delete, if provided this overrides the username. :raises NotFoundException: If there is an error locating the user in the proxy domain. ''' admin = k.client_for_admin() if not user_id: domain = domain_for_proxy() user_list = b.execute_with_retries( admin.users.list, domain=domain.id, name=username) if len(user_list) == 0: raise ex.NotFoundException( value=username, message_template=_('Failed to find user %s')) if len(user_list) > 1: raise ex.NotFoundException( value=username, message_template=_('Unexpected results found when searching ' 'for user %s')) user_id = user_list[0].id b.execute_with_retries(admin.users.delete, user_id) LOG.debug('Deleted proxy user id {user_id}'.format(user_id=user_id)) sahara-12.0.0/sahara/utils/ssh_remote.py0000664000175000017500000010660313656752032020213 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # Copyright (c) 2013 Hortonworks, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Helper methods for executing commands on nodes via SSH. The main access point is method get_remote(instance), it returns InstanceInteropHelper object which does the actual work. See the class for the list of available methods. It is a context manager, so it could be used with 'with' statement like that: with get_remote(instance) as r: r.execute_command(...) Note that the module offloads the ssh calls to a child process. It was implemented that way because we found no way to run paramiko and eventlet together. The private high-level module methods are implementations which are run in a separate process. 
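Timeouts for the offloaded calls are taken from the ssh_timeout_common,
ssh_timeout_interactive and ssh_timeout_files configuration options, which
cover plain command execution, interactive sessions and file operations
respectively (see ssh_config_options below).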
""" import copy import os import shlex import sys import threading import time from eventlet.green import subprocess as e_subprocess from eventlet import semaphore from eventlet import timeout as e_timeout from oslo_config import cfg from oslo_log import log as logging from oslo_utils import excutils from oslo_utils import uuidutils import paramiko import requests from requests import adapters import six from sahara import context from sahara import exceptions as ex from sahara.i18n import _ from sahara.service import trusts from sahara.utils import crypto from sahara.utils import network as net_utils from sahara.utils.openstack import neutron from sahara.utils import procutils from sahara.utils import remote LOG = logging.getLogger(__name__) CONF = cfg.CONF ssh_config_options = [ cfg.IntOpt( 'ssh_timeout_common', default=300, min=1, help="Overrides timeout for common ssh operations, in seconds"), cfg.IntOpt( 'ssh_timeout_interactive', default=1800, min=1, help="Overrides timeout for interactive ssh operations, in seconds"), cfg.IntOpt( 'ssh_timeout_files', default=600, min=1, help="Overrides timeout for ssh operations with files, in seconds"), ] CONF.register_opts(ssh_config_options) _ssh = None _proxy_ssh = None _sessions = {} INFRA = None SSH_TIMEOUTS_MAPPING = { '_execute_command': 'ssh_timeout_common', '_execute_command_interactive': 'ssh_timeout_interactive' } _global_remote_semaphore = None def _get_access_ip(instance): if CONF.proxy_command and CONF.proxy_command_use_internal_ip: return instance.internal_ip return instance.management_ip def _default_timeout(func): timeout = SSH_TIMEOUTS_MAPPING.get(func.__name__, 'ssh_timeout_files') return getattr(CONF, timeout, CONF.ssh_timeout_common) def _get_ssh_timeout(func, timeout): return timeout if timeout else _default_timeout(func) def _connect(host, username, private_key, proxy_command=None, gateway_host=None, gateway_image_username=None): global _ssh global _proxy_ssh LOG.debug('Creating SSH connection') if isinstance(private_key, six.string_types): private_key = crypto.to_paramiko_private_key(private_key) _ssh = paramiko.SSHClient() _ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) proxy = None if proxy_command: LOG.debug('Creating proxy using command: {command}'.format( command=proxy_command)) proxy = paramiko.ProxyCommand(proxy_command) if gateway_host: _proxy_ssh = paramiko.SSHClient() _proxy_ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) LOG.debug('Connecting to proxy gateway at: {gateway}'.format( gateway=gateway_host)) _proxy_ssh.connect(gateway_host, username=gateway_image_username, pkey=private_key, sock=proxy) proxy = _proxy_ssh.get_transport().open_session() proxy.exec_command("nc {0} 22".format(host)) _ssh.connect(host, username=username, pkey=private_key, sock=proxy) def _cleanup(): global _ssh global _proxy_ssh _ssh.close() if _proxy_ssh: _proxy_ssh.close() def _read_paramimko_stream(recv_func): result = b'' buf = recv_func(1024) while buf != b'': result += buf buf = recv_func(1024) return result def _escape_quotes(command): command = command.replace('\\', '\\\\') command = command.replace('"', '\\"') command = command.replace('`', '\\`') return command def _execute_command(cmd, run_as_root=False, get_stderr=False, raise_when_error=True): global _ssh chan = _ssh.get_transport().open_session() if run_as_root: chan.exec_command('sudo bash -c "%s"' % _escape_quotes(cmd)) else: chan.exec_command(cmd) # TODO(dmitryme): that could hang if stderr buffer overflows stdout = _read_paramimko_stream(chan.recv) 
stderr = _read_paramimko_stream(chan.recv_stderr) if type(stdout) == bytes: stdout = stdout.decode('utf-8') if type(stderr) == bytes: stderr = stderr.decode('utf-8') ret_code = chan.recv_exit_status() if ret_code and raise_when_error: raise ex.RemoteCommandException(cmd=cmd, ret_code=ret_code, stdout=stdout, stderr=stderr) if get_stderr: return ret_code, stdout, stderr else: return ret_code, stdout def _execute_command_interactive(cmd, run_as_root=False): global _ssh chan = _ssh.get_transport().open_session() if run_as_root: chan.exec_command('sudo bash -c "%s"' % _escape_quotes(cmd)) else: chan.exec_command(cmd) _proxy_shell(chan) _ssh.close() def _proxy_shell(chan): def readall(): while True: d = sys.stdin.read(1) if not d or chan.exit_status_ready(): break chan.send(d) reader = threading.Thread(target=readall) reader.start() while True: data = chan.recv(256) if not data or chan.exit_status_ready(): break sys.stdout.write(data) sys.stdout.flush() def _get_http_client(host, port, proxy_command=None, gateway_host=None, gateway_username=None, gateway_private_key=None): global _sessions _http_session = _sessions.get((host, port), None) LOG.debug('Cached HTTP session for {host}:{port} is {session}'.format( host=host, port=port, session=_http_session)) if not _http_session: if gateway_host: _http_session = _get_proxy_gateway_http_session( gateway_host, gateway_username, gateway_private_key, host, port, proxy_command) LOG.debug('Created ssh proxied HTTP session for {host}:{port}' .format(host=host, port=port)) elif proxy_command: # can return a new session here because it actually uses # the same adapter (and same connection pools) for a given # host and port tuple _http_session = _get_proxied_http_session( proxy_command, host, port=port) LOG.debug('Created proxied HTTP session for {host}:{port}' .format(host=host, port=port)) else: # need to cache the sessions that are not proxied through # HTTPRemoteWrapper so that a new session with a new HTTPAdapter # and associated pools is not recreated for each HTTP invocation _http_session = requests.Session() LOG.debug('Created standard HTTP session for {host}:{port}' .format(host=host, port=port)) adapter = requests.adapters.HTTPAdapter() for prefix in ['http://', 'https://']: _http_session.mount(prefix + '%s:%s' % (host, port), adapter) LOG.debug('Caching session {session} for {host}:{port}' .format(session=_http_session, host=host, port=port)) _sessions[(host, port)] = _http_session return _http_session def _write_fl(sftp, remote_file, data): try: write_data = paramiko.py3compat.StringIO(data) except TypeError: write_data = paramiko.py3compat.BytesIO(data) sftp.putfo(write_data, remote_file) def _append_fl(sftp, remote_file, data): fl = sftp.file(remote_file, 'a') fl.write(data) fl.close() def _write_file(sftp, remote_file, data, run_as_root): if run_as_root: temp_file = 'temp-file-%s' % uuidutils.generate_uuid() _write_fl(sftp, temp_file, data) _execute_command( 'mv %s %s' % (temp_file, remote_file), run_as_root=True) else: _write_fl(sftp, remote_file, data) def _append_file(sftp, remote_file, data, run_as_root): if run_as_root: temp_file = 'temp-file-%s' % uuidutils.generate_uuid() _write_fl(sftp, temp_file, data) _execute_command( 'cat %s >> %s' % (temp_file, remote_file), run_as_root=True) _execute_command('rm -f %s' % temp_file) else: _append_fl(sftp, remote_file, data) def _prepend_file(sftp, remote_file, data, run_as_root): if run_as_root: temp_file = 'temp-file-%s' % uuidutils.generate_uuid() temp_remote_file = 'temp-remote-file-%s' % 
uuidutils.generate_uuid() _write_fl(sftp, temp_file, data) _execute_command( 'cat %s > %s' % (remote_file, temp_remote_file)) _execute_command( 'cat %s %s > %s' % ( temp_file, temp_remote_file, remote_file), run_as_root=True) _execute_command('rm -f %s %s' % (temp_file, temp_remote_file)) def _write_file_to(remote_file, data, run_as_root=False): global _ssh _write_file(_ssh.open_sftp(), remote_file, data, run_as_root) def _write_files_to(files, run_as_root=False): global _ssh sftp = _ssh.open_sftp() for fl, data in six.iteritems(files): _write_file(sftp, fl, data, run_as_root) def _append_to_file(remote_file, data, run_as_root=False): global _ssh _append_file(_ssh.open_sftp(), remote_file, data, run_as_root) def _append_to_files(files, run_as_root=False): global _ssh sftp = _ssh.open_sftp() for fl, data in six.iteritems(files): _append_file(sftp, fl, data, run_as_root) def _prepend_to_file(remote_file, data, run_as_root=False): global _ssh _prepend_file(_ssh.open_sftp(), remote_file, data, run_as_root) def _prepend_to_files(files, run_as_root=False): global _ssh sftp = _ssh.open_sftp() for fl, data in six.iteritems(files): _prepend_file(sftp, fl, data, run_as_root) def _read_file(sftp, remote_file): fl = sftp.file(remote_file, 'r') data = fl.read() fl.close() try: return data.decode('utf-8') except Exception: return data def _read_file_from(remote_file, run_as_root=False): global _ssh fl = remote_file if run_as_root: fl = 'temp-file-%s' % (uuidutils.generate_uuid()) _execute_command('cp %s %s' % (remote_file, fl), run_as_root=True) try: return _read_file(_ssh.open_sftp(), fl) except IOError: LOG.error("Can't read file {filename}".format(filename=remote_file)) raise finally: if run_as_root: _execute_command( 'rm %s' % fl, run_as_root=True, raise_when_error=False) def _get_python_to_execute(): try: _execute_command('python3 --version') except Exception: _execute_command('python2 --version') return 'python2' return 'python3' def _get_os_distrib(): python_version = _get_python_to_execute() return _execute_command( ('printf "import platform\nprint(platform.linux_distribution(' 'full_distribution_name=0)[0])" | {}'.format(python_version)), run_as_root=False)[1].lower().strip() def _get_os_version(): python_version = _get_python_to_execute() return _execute_command( ('printf "import platform\nprint(platform.linux_distribution()[1])"' ' | {}'.format(python_version)), run_as_root=False)[1].strip() def _install_packages(packages): distrib = _get_os_distrib() if distrib == 'ubuntu': cmd = 'RUNLEVEL=1 apt-get install -y %(pkgs)s' elif distrib == 'fedora': fversion = _get_os_version() if fversion >= 22: cmd = 'dnf install -y %(pkgs)s' else: cmd = 'yum install -y %(pkgs)s' elif distrib in ('redhat', 'centos'): cmd = 'yum install -y %(pkgs)s' else: raise ex.NotImplementedException( _('Package Installation'), _('%(fmt)s is not implemented for OS %(distrib)s') % { 'fmt': '%s', 'distrib': distrib}) cmd = cmd % {'pkgs': ' '.join(packages)} _execute_command(cmd, run_as_root=True) def _update_repository(): distrib = _get_os_distrib() if distrib == 'ubuntu': cmd = 'apt-get update' elif distrib == 'fedora': fversion = _get_os_version() if fversion >= 22: cmd = 'dnf clean all' else: cmd = 'yum clean all' elif distrib in ('redhat', 'centos'): cmd = 'yum clean all' else: raise ex.NotImplementedException( _('Repository Update'), _('%(fmt)s is not implemented for OS %(distrib)s') % { 'fmt': '%s', 'distrib': distrib}) _execute_command(cmd, run_as_root=True) def _replace_remote_string(remote_file, old_str, new_str): 
old_str = old_str.replace("\'", "\''") new_str = new_str.replace("\'", "\''") cmd = "sudo sed -i 's,%s,%s,g' %s" % (old_str, new_str, remote_file) _execute_command(cmd) def _replace_remote_line(remote_file, old_line_with_start_string, new_line): search_string = old_line_with_start_string.replace("\'", "\''") cmd = ("sudo sed -i 's/^%s.*/%s/' %s" % (search_string, new_line, remote_file)) _execute_command(cmd) def _execute_on_vm_interactive(cmd, matcher): global _ssh buf = '' channel = _ssh.invoke_shell() LOG.debug('Channel is {channel}'.format(channel=channel)) try: LOG.debug('Sending cmd {command}'.format(command=cmd)) channel.send(cmd + '\n') while not matcher.is_eof(buf): buf += channel.recv(4096) response = matcher.get_response(buf) if response is not None: channel.send(response + '\n') buf = '' finally: LOG.debug('Closing channel') channel.close() def _acquire_remote_semaphore(): context.current().remote_semaphore.acquire() _global_remote_semaphore.acquire() def _release_remote_semaphore(): _global_remote_semaphore.release() context.current().remote_semaphore.release() def _get_proxied_http_session(proxy_command, host, port=None): session = requests.Session() adapter = ProxiedHTTPAdapter( _simple_exec_func(shlex.split(proxy_command)), host, port) session.mount('http://{0}:{1}'.format(host, adapter.port), adapter) return session def _get_proxy_gateway_http_session(gateway_host, gateway_username, gateway_private_key, host, port=None, proxy_command=None): session = requests.Session() adapter = ProxiedHTTPAdapter( _proxy_gateway_func(gateway_host, gateway_username, gateway_private_key, host, port, proxy_command), host, port) session.mount('http://{0}:{1}'.format(host, port), adapter) return session def _simple_exec_func(cmd): def func(): return e_subprocess.Popen(cmd, stdin=e_subprocess.PIPE, stdout=e_subprocess.PIPE, stderr=e_subprocess.PIPE) return func def _proxy_gateway_func(gateway_host, gateway_username, gateway_private_key, host, port, proxy_command): def func(): proc = procutils.start_subprocess() try: conn_params = (gateway_host, gateway_username, gateway_private_key, proxy_command, None, None) procutils.run_in_subprocess(proc, _connect, conn_params) cmd = "nc {host} {port}".format(host=host, port=port) procutils.run_in_subprocess( proc, _execute_command_interactive, (cmd,), interactive=True) return proc except Exception: with excutils.save_and_reraise_exception(): procutils.shutdown_subprocess(proc, _cleanup) return func class ProxiedHTTPAdapter(adapters.HTTPAdapter): def __init__(self, create_process_func, host, port): super(ProxiedHTTPAdapter, self).__init__() LOG.debug('HTTP adapter created for {host}:{port}'.format(host=host, port=port)) self.create_process_func = create_process_func self.port = port self.host = host def get_connection(self, url, proxies=None): pool_conn = ( super(ProxiedHTTPAdapter, self).get_connection(url, proxies)) if hasattr(pool_conn, '_get_conn'): http_conn = pool_conn._get_conn() if http_conn.sock is None: if hasattr(http_conn, 'connect'): sock = self._connect() LOG.debug('HTTP connection {connection} getting new ' 'netcat socket {socket}'.format( connection=http_conn, socket=sock)) http_conn.sock = sock else: if hasattr(http_conn.sock, 'is_netcat_socket'): LOG.debug('Pooled http connection has existing ' 'netcat socket. 
resetting pipe') http_conn.sock.reset() pool_conn._put_conn(http_conn) return pool_conn def close(self): LOG.debug('Closing HTTP adapter for {host}:{port}' .format(host=self.host, port=self.port)) super(ProxiedHTTPAdapter, self).close() def _connect(self): LOG.debug('Returning netcat socket for {host}:{port}' .format(host=self.host, port=self.port)) rootwrap_command = CONF.rootwrap_command if CONF.use_rootwrap else '' return NetcatSocket(self.create_process_func, rootwrap_command) class NetcatSocket(object): def _create_process(self): self.process = self.create_process_func() def __init__(self, create_process_func, rootwrap_command=None): self.create_process_func = create_process_func self.rootwrap_command = rootwrap_command self._create_process() def send(self, content): try: self.process.stdin.write(content) self.process.stdin.flush() except IOError as e: raise ex.SystemError(e) return len(content) def sendall(self, content): return self.send(content) def makefile(self, mode, *arg): if mode.startswith('r'): return self.process.stdout if mode.startswith('w'): return self.process.stdin raise ex.IncorrectStateError(_("Unknown file mode %s") % mode) def recv(self, size): try: return os.read(self.process.stdout.fileno(), size) except IOError as e: raise ex.SystemError(e) def _terminate(self): if self.rootwrap_command: os.system('{0} kill {1}'.format(self.rootwrap_command, # nosec self.process.pid)) else: self.process.terminate() def close(self): LOG.debug('Socket close called') self._terminate() def settimeout(self, timeout): pass def fileno(self): return self.process.stdin.fileno() def is_netcat_socket(self): return True def reset(self): self._terminate() self._create_process() class InstanceInteropHelper(remote.Remote): def __init__(self, instance): self.instance = instance def __enter__(self): _acquire_remote_semaphore() try: self.bulk = BulkInstanceInteropHelper(self.instance) return self.bulk except Exception: with excutils.save_and_reraise_exception(): _release_remote_semaphore() def __exit__(self, *exc_info): try: self.bulk.close() finally: _release_remote_semaphore() def get_neutron_info(self, instance=None): if not instance: instance = self.instance neutron_info = dict() neutron_info['network'] = instance.cluster.neutron_management_network ctx = context.current() neutron_info['token'] = context.get_auth_token() neutron_info['tenant'] = ctx.tenant_name neutron_info['host'] = _get_access_ip(instance) log_info = copy.deepcopy(neutron_info) del log_info['token'] LOG.debug('Returning neutron info: {info}'.format(info=log_info)) return neutron_info def _build_proxy_command(self, command, instance=None, port=None, info=None, rootwrap_command=None): # Accepted keywords in the proxy command template: # {host}, {port}, {tenant_id}, {network_id}, {router_id} keywords = {} if not info: info = self.get_neutron_info(instance) keywords['tenant_id'] = context.current().tenant_id keywords['network_id'] = info['network'] # Query Neutron only if needed if '{router_id}' in command: auth = trusts.get_os_admin_auth_plugin(instance.cluster) client = neutron.NeutronClient(info['network'], info['token'], info['tenant'], auth=auth) keywords['router_id'] = client.get_router() keywords['host'] = _get_access_ip(instance) keywords['port'] = port try: command = command.format(**keywords) except KeyError as e: LOG.error('Invalid keyword in proxy_command: {result}'.format( result=e)) # Do not give more details to the end-user raise ex.SystemError('Misconfiguration') if rootwrap_command: command = '{0} 
{1}'.format(rootwrap_command, command) return command def _get_conn_params(self): host_ng = self.instance.node_group cluster = host_ng.cluster access_instance = self.instance proxy_gateway_node = cluster.get_proxy_gateway_node() gateway_host = None gateway_image_username = None if proxy_gateway_node and not host_ng.is_proxy_gateway: # tmckay-fp in other words, if we are going to connect # through the proxy instead of the node we are actually # trying to reach # okay, the node group that supplies the proxy gateway # must have fps, but if a proxy is used the other # nodes are not required to have an fp. # so, this instance is assumed not to have a floating # ip and we are going to get to it through the proxy access_instance = proxy_gateway_node gateway_host = proxy_gateway_node.management_ip ng = proxy_gateway_node.node_group gateway_image_username = ng.image_username proxy_command = None if CONF.proxy_command: # Build a session through a user-defined socket proxy_command = CONF.proxy_command # tmckay-fp we have the node_group for the instance right here # okay, this test here whether access_instance.management_ip is an # fp -- just compare to internal? # in the neutron case, we check the node group for the # access_instance and look for fp elif CONF.use_namespaces and not net_utils.has_floating_ip( access_instance): # Build a session through a netcat socket in the Neutron namespace proxy_command = ( 'ip netns exec qrouter-{router_id} nc {host} {port}') # proxy_command is currently a template, turn it into a real command # i.e. dereference {host}, {port}, etc. if proxy_command: rootwrap = CONF.rootwrap_command if CONF.use_rootwrap else '' proxy_command = self._build_proxy_command( proxy_command, instance=access_instance, port=22, info=None, rootwrap_command=rootwrap) host_ip = _get_access_ip(self.instance) return (host_ip, host_ng.image_username, cluster.management_private_key, proxy_command, gateway_host, gateway_image_username) def _run(self, func, *args, **kwargs): proc = procutils.start_subprocess() try: procutils.run_in_subprocess(proc, _connect, self._get_conn_params()) return procutils.run_in_subprocess(proc, func, args, kwargs) except Exception: with excutils.save_and_reraise_exception(): procutils.shutdown_subprocess(proc, _cleanup) finally: procutils.shutdown_subprocess(proc, _cleanup) def _run_with_log(self, func, timeout, description, *args, **kwargs): start_time = time.time() try: with e_timeout.Timeout(timeout, ex.TimeoutException(timeout, op_name=description)): return self._run(func, *args, **kwargs) finally: self._log_command('"%s" took %.1f seconds to complete' % ( description, time.time() - start_time)) def _run_s(self, func, timeout, description, *args, **kwargs): timeout = _get_ssh_timeout(func, timeout) _acquire_remote_semaphore() try: return self._run_with_log(func, timeout, description, *args, **kwargs) finally: _release_remote_semaphore() def get_http_client(self, port, info=None): self._log_command('Retrieving HTTP session for {0}:{1}'.format( _get_access_ip(self.instance), port)) host_ng = self.instance.node_group cluster = host_ng.cluster access_instance = self.instance access_port = port proxy_gateway_node = cluster.get_proxy_gateway_node() gateway_host = None gateway_username = None gateway_private_key = None if proxy_gateway_node and not host_ng.is_proxy_gateway: access_instance = proxy_gateway_node access_port = 22 gateway_host = proxy_gateway_node.management_ip gateway_username = proxy_gateway_node.node_group.image_username gateway_private_key = 
cluster.management_private_key proxy_command = None if CONF.proxy_command: # Build a session through a user-defined socket proxy_command = CONF.proxy_command # tmckay-fp again we can check the node group for the instance # what are the implications for nova here? None. # This is a test on whether access_instance has a floating_ip # in the neutron case, we check the node group for the # access_instance and look for fp elif (CONF.use_namespaces and not net_utils.has_floating_ip( access_instance)): # need neutron info if not info: info = self.get_neutron_info(access_instance) # Build a session through a netcat socket in the Neutron namespace proxy_command = ( 'ip netns exec qrouter-{router_id} nc {host} {port}') # proxy_command is currently a template, turn it into a real command # i.e. dereference {host}, {port}, etc. if proxy_command: rootwrap = CONF.rootwrap_command if CONF.use_rootwrap else '' proxy_command = self._build_proxy_command( proxy_command, instance=access_instance, port=access_port, info=info, rootwrap_command=rootwrap) return _get_http_client(_get_access_ip(self.instance), port, proxy_command, gateway_host, gateway_username, gateway_private_key) def close_http_session(self, port): global _sessions host = _get_access_ip(self.instance) self._log_command(_("Closing HTTP session for %(host)s:%(port)s") % { 'host': host, 'port': port}) session = _sessions.get((host, port), None) if session is None: raise ex.NotFoundException( {'host': host, 'port': port}, _('Session for %(host)s:%(port)s not cached')) session.close() del _sessions[(host, port)] def execute_command(self, cmd, run_as_root=False, get_stderr=False, raise_when_error=True, timeout=None): description = _('Executing "%s"') % cmd self._log_command(description) return self._run_s(_execute_command, timeout, description, cmd, run_as_root, get_stderr, raise_when_error) def write_file_to(self, remote_file, data, run_as_root=False, timeout=None): description = _('Writing file "%s"') % remote_file self._log_command(description) self._run_s(_write_file_to, timeout, description, remote_file, data, run_as_root) def write_files_to(self, files, run_as_root=False, timeout=None): description = _('Writing files "%s"') % list(files) self._log_command(description) self._run_s(_write_files_to, timeout, description, files, run_as_root) def append_to_file(self, r_file, data, run_as_root=False, timeout=None): description = _('Appending to file "%s"') % r_file self._log_command(description) self._run_s(_append_to_file, timeout, description, r_file, data, run_as_root) def append_to_files(self, files, run_as_root=False, timeout=None): description = _('Appending to files "%s"') % list(files) self._log_command(description) self._run_s(_append_to_files, timeout, description, files, run_as_root) def prepend_to_file(self, r_file, data, run_as_root=False, timeout=None): description = _('Prepending to file "%s"') % r_file self._log_command(description) self._run_s(_prepend_to_file, timeout, description, r_file, data, run_as_root) def read_file_from(self, remote_file, run_as_root=False, timeout=None): description = _('Reading file "%s"') % remote_file self._log_command(description) return self._run_s(_read_file_from, timeout, description, remote_file, run_as_root) def get_python_version(self, timeout=None): return self._run_s( _get_python_to_execute, timeout, "get_python_version") def get_os_distrib(self, timeout=None): return self._run_s(_get_os_distrib, timeout, "get_os_distrib") def get_os_version(self, timeout=None): return 
self._run_s(_get_os_version, timeout, "get_os_version") def install_packages(self, packages, timeout=None): description = _('Installing packages "%s"') % list(packages) self._log_command(description) self._run_s(_install_packages, timeout, description, packages) def update_repository(self, timeout=None): description = _('Updating repository') self._log_command(description) self._run_s(_update_repository, timeout, description) def replace_remote_string(self, remote_file, old_str, new_str, timeout=None): description = _('In file "%(file)s" replacing string ' '"%(old_string)s" with "%(new_string)s"') % { "file": remote_file, "old_string": old_str, "new_string": new_str} self._log_command(description) self._run_s(_replace_remote_string, timeout, description, remote_file, old_str, new_str) def replace_remote_line(self, remote_file, old_line_with_start_string, new_line, timeout=None): description = _('In file "%(file)s" replacing line' ' begining with string ' '"%(old_line_with_start_string)s"' ' with "%(new_line)s"') % { "file": remote_file, "old_line_with_start_string": old_line_with_start_string, "new_line": new_line} self._log_command(description) self._run_s(_replace_remote_line, timeout, description, remote_file, old_line_with_start_string, new_line) def execute_on_vm_interactive(self, cmd, matcher, timeout=None): """Runs given command and responds to prompts. 'cmd' is a command to execute. 'matcher' is an object which provides responses on command's prompts. It should have two methods implemented: * get_response(buf) - returns response on prompt if it is found in 'buf' string, which is a part of command output. If no prompt is found, the method should return None. * is_eof(buf) - returns True if current 'buf' indicates that the command is finished. False should be returned otherwise. """ description = _('Executing interactively "%s"') % cmd self._log_command(description) self._run_s(_execute_on_vm_interactive, timeout, description, cmd, matcher) def _log_command(self, str): with context.set_current_instance_id(self.instance.instance_id): LOG.debug(str) class BulkInstanceInteropHelper(InstanceInteropHelper): def __init__(self, instance): super(BulkInstanceInteropHelper, self).__init__(instance) self.proc = procutils.start_subprocess() try: procutils.run_in_subprocess(self.proc, _connect, self._get_conn_params()) except Exception: with excutils.save_and_reraise_exception(): procutils.shutdown_subprocess(self.proc, _cleanup) def close(self): procutils.shutdown_subprocess(self.proc, _cleanup) def _run(self, func, *args, **kwargs): return procutils.run_in_subprocess(self.proc, func, args, kwargs) def _run_s(self, func, timeout, description, *args, **kwargs): timeout = _get_ssh_timeout(func, timeout) return self._run_with_log(func, timeout, description, *args, **kwargs) class SshRemoteDriver(remote.RemoteDriver): def get_type_and_version(self): return "ssh.1.0" def setup_remote(self, engine): global _global_remote_semaphore global INFRA _global_remote_semaphore = semaphore.Semaphore( CONF.global_remote_threshold) INFRA = engine def get_remote(self, instance): return InstanceInteropHelper(instance) def get_userdata_template(self): # SSH does not need any instance customization return "" sahara-12.0.0/sahara/utils/files.py0000664000175000017500000000224613656752032017143 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from os import path import pkg_resources as pkg from sahara import version def get_file_text(file_name, package='sahara'): full_name = pkg.resource_filename( package, file_name) return open(full_name).read() def get_file_binary(file_name): full_name = pkg.resource_filename( version.version_info.package, file_name) return open(full_name, "rb").read() def try_get_file_text(file_name, package='sahara'): full_name = pkg.resource_filename( package, file_name) return ( open(full_name, "rb").read() if path.isfile(full_name) else False) sahara-12.0.0/sahara/utils/cluster_progress_ops.py0000664000175000017500000001352313656752032022327 0ustar zuulzuul00000000000000# Copyright (c) 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import functools from oslo_config import cfg from oslo_utils import excutils from oslo_utils import timeutils import six from sahara import conductor as c from sahara.conductor import resource from sahara import context from sahara.utils import cluster as cluster_utils conductor = c.API CONF = cfg.CONF event_log_opts = [ cfg.BoolOpt('disable_event_log', default=False, help="Disables event log feature.") ] CONF.register_opts(event_log_opts) def add_successful_event(instance): if CONF.disable_event_log: return cluster_id = instance.cluster_id step_id = get_current_provisioning_step(cluster_id) if step_id: conductor.cluster_event_add(context.ctx(), step_id, { 'successful': True, 'node_group_id': instance.node_group_id, 'instance_id': instance.instance_id, 'instance_name': instance.instance_name, 'event_info': None, }) def add_fail_event(instance, exception): if CONF.disable_event_log: return cluster_id = instance.cluster_id step_id = get_current_provisioning_step(cluster_id) event_info = six.text_type(exception) if step_id: conductor.cluster_event_add(context.ctx(), step_id, { 'successful': False, 'node_group_id': instance.node_group_id, 'instance_id': instance.instance_id, 'instance_name': instance.instance_name, 'event_info': event_info, }) def add_provisioning_step(cluster_id, step_name, total): if (CONF.disable_event_log or not cluster_utils.check_cluster_exists(cluster_id)): return prev_step = get_current_provisioning_step(cluster_id) if prev_step: conductor.cluster_provision_step_update(context.ctx(), prev_step) step_type = context.ctx().current_instance_info.step_type new_step = conductor.cluster_provision_step_add( context.ctx(), cluster_id, { 'step_name': step_name, 'step_type': step_type, 'total': total, 'started_at': timeutils.utcnow(), }) context.current().current_instance_info.step_id = new_step return new_step def get_current_provisioning_step(cluster_id): if 
(CONF.disable_event_log or not cluster_utils.check_cluster_exists(cluster_id)): return None current_instance_info = context.ctx().current_instance_info return current_instance_info.step_id def event_wrapper(mark_successful_on_exit, **spec): """"General event-log wrapper :param mark_successful_on_exit: should we send success event after execution of function :param spec: extra specification :parameter step: provisioning step name (only for provisioning steps with only one event) :parameter param: tuple (name, pos) with parameter specification, where 'name' is the name of the parameter of function, 'pos' is the position of the parameter of function. This parameter is used to extract info about Instance or Cluster. """ def decorator(func): @functools.wraps(func) def handler(*args, **kwargs): if CONF.disable_event_log: return func(*args, **kwargs) step_name = spec.get('step', None) instance = _find_in_args(spec, *args, **kwargs) cluster_id = instance.cluster_id if not cluster_utils.check_cluster_exists(cluster_id): return func(*args, **kwargs) if step_name: # It's single process, let's add provisioning step here add_provisioning_step(cluster_id, step_name, 1) try: value = func(*args, **kwargs) except Exception as e: with excutils.save_and_reraise_exception(): add_fail_event(instance, e) if mark_successful_on_exit: add_successful_event(instance) return value return handler return decorator def _get_info_from_instance(arg): if isinstance(arg, resource.InstanceResource): return arg return None def _get_info_from_cluster(arg): if isinstance(arg, resource.ClusterResource): return context.InstanceInfo(arg.id) return None def _get_event_info(arg): try: return arg.get_event_info() except AttributeError: return None def _get_info_from_obj(arg): functions = [_get_info_from_instance, _get_info_from_cluster, _get_event_info] for func in functions: value = func(arg) if value: return value return None def _find_in_args(spec, *args, **kwargs): param_values = spec.get('param', None) if param_values: p_name, p_pos = param_values obj = kwargs.get(p_name, None) if obj: return _get_info_from_obj(obj) return _get_info_from_obj(args[p_pos]) # If param is not specified, let's search instance in args for arg in args: val = _get_info_from_instance(arg) if val: return val for arg in kwargs.values(): val = _get_info_from_instance(arg) if val: return val # If instance not found in args, let's get instance info from context return context.ctx().current_instance_info sahara-12.0.0/sahara/utils/hacking/0000775000175000017500000000000013656752227017075 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/utils/hacking/__init__.py0000664000175000017500000000000013656752032021166 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/utils/hacking/logging_checks.py0000664000175000017500000000427713656752032022421 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
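# NOTE: hypothetical usage sketch for the event_wrapper decorator defined in
# sahara/utils/cluster_progress_ops.py above. The function name
# _configure_instance and its argument are placeholders; the step/param
# keywords and the (name, position) tuple are the real specification
# documented in that module.
#
#     from sahara.utils import cluster_progress_ops as cpo
#
#     @cpo.event_wrapper(mark_successful_on_exit=True,
#                        step='Configure instances', param=('instance', 0))
#     def _configure_instance(instance):
#         ...  # per-instance work; success/failure events are recorded for it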
import re from hacking import core ALL_LOG_LEVELS = "info|exception|warning|critical|error|debug" RE_ACCEPTED_LOG_LEVELS = re.compile( r"(.)*LOG\.(%(levels)s)\(" % {'levels': ALL_LOG_LEVELS}) # Since _Lx() have been removed, we just need to check _() RE_TRANSLATED_LOG = re.compile( r"(.)*LOG\.(%(levels)s)\(\s*_\(" % {'levels': ALL_LOG_LEVELS}) @core.flake8ext def no_translate_logs(logical_line, filename): """Check for 'LOG.*(_(' Translators don't provide translations for log messages, and operators asked not to translate them. * This check assumes that 'LOG' is a logger. * Use filename so we can start enforcing this in specific folders instead of needing to do so all at once. S373 """ msg = "S373 Don't translate logs" if RE_TRANSLATED_LOG.match(logical_line): yield (0, msg) @core.flake8ext def accepted_log_levels(logical_line, filename): """In Sahara we use only 5 log levels. This check is needed because we don't want new contributors to use deprecated log levels. S374 """ # NOTE(Kezar): sahara/tests included because we don't require translations # in tests. sahara/db/templates provide separate cli interface so we don't # want to translate it. ignore_dirs = ["sahara/db/templates", "sahara/tests"] for directory in ignore_dirs: if directory in filename: return msg = ("S374 You used deprecated log level. Accepted log levels are " "%(levels)s" % {'levels': ALL_LOG_LEVELS}) if logical_line.startswith("LOG."): if not RE_ACCEPTED_LOG_LEVELS.search(logical_line): yield(0, msg) sahara-12.0.0/sahara/utils/hacking/commit_message.py0000664000175000017500000000615213656752032022441 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import re import subprocess # nosec from hacking import core class GitCheck(core.GlobalCheck): """Base-class for Git related checks.""" def _get_commit_title(self): # Check if we're inside a git checkout try: subp = subprocess.Popen( # nosec ['git', 'rev-parse', '--show-toplevel'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) gitdir = subp.communicate()[0].rstrip() except OSError: # "git" was not found return None if not os.path.exists(gitdir): return None # Get title of most recent commit subp = subprocess.Popen( # nosec ['git', 'log', '--no-merges', '--pretty=%s', '-1'], stdout=subprocess.PIPE) title = subp.communicate()[0] if subp.returncode: raise Exception("git log failed with code %s" % subp.returncode) return title.decode('utf-8') class OnceGitCheckCommitTitleBug(GitCheck): """Check git commit messages for bugs. OpenStack HACKING recommends not referencing a bug or blueprint in first line. 
It should provide an accurate description of the change S364 """ name = "GitCheckCommitTitleBug" # From https://github.com/openstack/openstack-ci-puppet # /blob/master/modules/gerrit/manifests/init.pp#L74 # Changeid|bug|blueprint GIT_REGEX = re.compile( r'(I[0-9a-f]{8,40})|' '([Bb]ug|[Ll][Pp])[\s\#:]*(\d+)|' '([Bb]lue[Pp]rint|[Bb][Pp])[\s\#:]*([A-Za-z0-9\\-]+)') def run_once(self): title = self._get_commit_title() # NOTE(jogo) if match regex but over 3 words, acceptable title if (title and self.GIT_REGEX.search(title) is not None and len(title.split()) <= 3): return (1, 0, "S364: git commit title ('%s') should provide an accurate " "description of the change, not just a reference to a bug " "or blueprint" % title.strip(), self.name) class OnceGitCheckCommitTitleLength(GitCheck): """Check git commit message length. HACKING recommends commit titles 50 chars or less, but enforces a 72 character limit S365 Title limited to 72 chars """ name = "GitCheckCommitTitleLength" def run_once(self): title = self._get_commit_title() if title and len(title) > 72: return ( 1, 0, "S365: git commit title ('%s') should be under 50 chars" % title.strip(), self.name) sahara-12.0.0/sahara/utils/hacking/checks.py0000664000175000017500000000750613656752032020711 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import pycodestyle import re import tokenize from hacking import core RE_OSLO_IMPORTS = (re.compile(r"(((from)|(import))\s+oslo\.)"), re.compile(r"(from\s+oslo\s+import)")) RE_DICT_CONSTRUCTOR_WITH_LIST_COPY = re.compile(r".*\bdict\((\[)?(\(|\[)") RE_USE_JSONUTILS_INVALID_LINE = re.compile(r"(import\s+json)") RE_USE_JSONUTILS_VALID_LINE = re.compile(r"(import\s+jsonschema)") RE_MUTABLE_DEFAULT_ARGS = re.compile(r"^\s*def .+\((.+=\{\}|.+=\[\])") def _starts_with_any(line, *prefixes): for prefix in prefixes: if line.startswith(prefix): return True return False def _any_in(line, *sublines): for subline in sublines: if subline in line: return True return False @core.flake8ext def import_db_only_in_conductor(logical_line, filename): """Check that db calls are only in conductor, plugins module and in tests. S361 """ if _any_in(filename, "sahara/conductor", "sahara/plugins", "sahara/tests", "sahara/db"): return if _starts_with_any(logical_line, "from sahara import db", "from sahara.db", "import sahara.db"): yield (0, "S361: sahara.db import only allowed in " "sahara/conductor/*") @core.flake8ext def hacking_no_author_attr(logical_line, tokens): """__author__ should not be used. S362: __author__ = slukjanov """ for token_type, text, start_index, _, _ in tokens: if token_type == tokenize.NAME and text == "__author__": yield (start_index[1], "S362: __author__ should not be used") @core.flake8ext def check_oslo_namespace_imports(logical_line): """Check to prevent old oslo namespace usage. S363 """ if re.match(RE_OSLO_IMPORTS[0], logical_line): yield(0, "S363: '%s' must be used instead of '%s'." 
% ( logical_line.replace('oslo.', 'oslo_'), logical_line)) if re.match(RE_OSLO_IMPORTS[1], logical_line): yield(0, "S363: '%s' must be used instead of '%s'" % ( 'import oslo_%s' % logical_line.split()[-1], logical_line)) @core.flake8ext def dict_constructor_with_list_copy(logical_line): """Check to prevent dict constructor with a sequence of key-value pairs. S368 """ if RE_DICT_CONSTRUCTOR_WITH_LIST_COPY.match(logical_line): yield (0, 'S368: Must use a dict comprehension instead of a dict ' 'constructor with a sequence of key-value pairs.') @core.flake8ext def use_jsonutils(logical_line, filename): """Check to prevent importing json in sahara code. S375 """ if pycodestyle.noqa(logical_line): return if (RE_USE_JSONUTILS_INVALID_LINE.match(logical_line) and not RE_USE_JSONUTILS_VALID_LINE.match(logical_line)): yield(0, "S375: Use jsonutils from oslo_serialization instead" " of json") @core.flake8ext def no_mutable_default_args(logical_line): """Check to prevent mutable default argument in sahara code. S360 """ msg = "S360: Method's default argument shouldn't be mutable!" if RE_MUTABLE_DEFAULT_ARGS.match(logical_line): yield (0, msg) sahara-12.0.0/sahara/utils/rpc.py0000664000175000017500000000715413656752032016630 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # Copyright (c) 2013 Julien Danjou # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
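# NOTE: illustrative (commented-out) examples of patterns that the custom
# hacking checks in sahara/utils/hacking/checks.py above would flag; the
# variable names are placeholders, the S-codes are the ones defined there:
#
#     import json                           # S375: use oslo_serialization jsonutils
#     from oslo.config import cfg           # S363: use the oslo_config namespace
#     def f(x, cache={}): ...               # S360: mutable default argument
#     d = dict([(k, v) for k, v in items])  # S368: use a dict comprehension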
from oslo_config import cfg from oslo_log import log as logging import oslo_messaging as messaging from oslo_messaging.rpc import dispatcher from oslo_serialization import jsonutils from sahara import context MESSAGING_TRANSPORT = None NOTIFICATION_TRANSPORT = None NOTIFIER = None CONF = cfg.CONF LOG = logging.getLogger(__name__) class ContextSerializer(messaging.Serializer): def __init__(self, base): self._base = base def serialize_entity(self, ctxt, entity): return self._base.serialize_entity(ctxt, entity) def deserialize_entity(self, ctxt, entity): return self._base.deserialize_entity(ctxt, entity) @staticmethod def serialize_context(ctxt): return ctxt.to_dict() @staticmethod def deserialize_context(ctxt): pass class JsonPayloadSerializer(messaging.NoOpSerializer): @classmethod def serialize_entity(cls, context, entity): return jsonutils.to_primitive(entity, convert_instances=True) class RPCClient(object): def __init__(self, target): global MESSAGING_TRANSPORT self.__client = messaging.RPCClient( target=target, transport=MESSAGING_TRANSPORT, ) def cast(self, name, **kwargs): ctx = context.current() self.__client.cast(ctx.to_dict(), name, **kwargs) def call(self, name, **kwargs): ctx = context.current() return self.__client.call(ctx.to_dict(), name, **kwargs) class RPCServer(object): def __init__(self, target): global MESSAGING_TRANSPORT access_policy = dispatcher.DefaultRPCAccessPolicy self.__server = messaging.get_rpc_server( target=target, transport=MESSAGING_TRANSPORT, endpoints=[self], executor='eventlet', access_policy=access_policy) def get_service(self): return self.__server def setup_service_messaging(): global MESSAGING_TRANSPORT if MESSAGING_TRANSPORT: # Already is up return MESSAGING_TRANSPORT = messaging.get_rpc_transport(cfg.CONF) def setup_notifications(): global NOTIFICATION_TRANSPORT, NOTIFIER, MESSAGING_TRANSPORT try: NOTIFICATION_TRANSPORT = messaging.get_notification_transport(cfg.CONF) except Exception: LOG.error("Unable to setup notification transport. Reusing " "service transport for that.") setup_service_messaging() NOTIFICATION_TRANSPORT = MESSAGING_TRANSPORT serializer = ContextSerializer(JsonPayloadSerializer()) NOTIFIER = messaging.Notifier(NOTIFICATION_TRANSPORT, serializer=serializer) def setup(service_name): """Initialise the oslo_messaging layer.""" messaging.set_transport_defaults('sahara') setup_notifications() if service_name != 'all-in-one': setup_service_messaging() def get_notifier(publisher_id): """Return a configured oslo_messaging notifier.""" return NOTIFIER.prepare(publisher_id=publisher_id) sahara-12.0.0/sahara/utils/poll_utils.py0000664000175000017500000001253013656752032020224 0ustar zuulzuul00000000000000# Copyright (c) 2015 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
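# NOTE: hypothetical sketch of how the RPC helpers in sahara/utils/rpc.py
# above are wired up; the topic, service name and 'do_thing' method are
# placeholders, while setup(), RPCClient and get_notifier() are the real
# entry points defined in that module (a sahara context must be current,
# since cast/call and the notifier serializer read it).
#
#     import oslo_messaging as messaging
#     from sahara import context
#     from sahara.utils import rpc
#
#     rpc.setup('sahara-engine')
#     client = rpc.RPCClient(messaging.Target(topic='sahara-ops',
#                                             version='1.0'))
#     client.cast('do_thing', cluster_id='...')
#     rpc.get_notifier('sahara-engine').info(
#         context.ctx(), 'sahara.example.event', {'key': 'value'})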
import functools from oslo_config import cfg from oslo_log import log as logging from oslo_utils import timeutils from sahara import context from sahara import exceptions as ex from sahara.utils import cluster as cluster_utils LOG = logging.getLogger(__name__) # set 3 hours timeout by default DEFAULT_TIMEOUT = 10800 DEFAULT_SLEEP_TIME = 5 timeouts_opts = [ # engine opts cfg.IntOpt('ips_assign_timeout', default=DEFAULT_TIMEOUT, help="Assign IPs timeout, in seconds"), cfg.IntOpt('wait_until_accessible', default=DEFAULT_TIMEOUT, help="Wait for instance accessibility, in seconds"), # direct engine opts cfg.IntOpt('delete_instances_timeout', default=DEFAULT_TIMEOUT, help="Wait for instances to be deleted, in seconds"), # volumes opts cfg.IntOpt( 'detach_volume_timeout', default=300, help='Timeout for detaching volumes from instance, in seconds'), ] timeouts = cfg.OptGroup(name='timeouts', title='Sahara timeouts') CONF = cfg.CONF CONF.register_group(timeouts) CONF.register_opts(timeouts_opts, group=timeouts) def _get_consumed(started_at): return timeutils.delta_seconds(started_at, timeutils.utcnow()) def _get_current_value(cluster, option): option_target = option.applicable_target conf = cluster.cluster_configs if option_target in conf and option.name in conf[option_target]: return conf[option_target][option.name] return option.default_value def poll(get_status, kwargs=None, args=None, operation_name=None, timeout_name=None, timeout=DEFAULT_TIMEOUT, sleep=DEFAULT_SLEEP_TIME, exception_strategy='raise'): """This util poll status of object obj during some timeout. :param get_status: function, which return current status of polling as Boolean :param kwargs: keyword arguments of function get_status :param operation_name: name of polling process :param timeout_name: name of timeout option :param timeout: value of timeout in seconds. By default, it equals to 3 hours :param sleep: duration between two consecutive executions of get_status function :param exception_strategy: possible values ('raise', 'mark_as_true', 'mark_as_false'). If exception_strategy is 'raise' exception would be raised. If exception_strategy is 'mark_as_true', return value of get_status would marked as True, and in case of 'mark_as_false' - False. By default it's 'raise'. """ start_time = timeutils.utcnow() # We shouldn't raise TimeoutException if incorrect timeout specified and # status is ok now. In such way we should execute get_status at least once. 
at_least_once = True if not kwargs: kwargs = {} if not args: args = () while at_least_once or _get_consumed(start_time) < timeout: at_least_once = False try: status = get_status(*args, **kwargs) except BaseException: if exception_strategy == 'raise': raise elif exception_strategy == 'mark_as_true': status = True else: status = False if status: operation = "Operation" if operation_name: operation = "Operation with name {op_name}".format( op_name=operation_name) LOG.debug( '{operation_desc} was executed successfully in timeout ' '{timeout}' .format(operation_desc=operation, timeout=timeout)) return context.sleep(sleep) raise ex.TimeoutException(timeout, operation_name, timeout_name) def plugin_option_poll(cluster, get_status, option, operation_name, sleep_time, kwargs): def _get(n_cluster, n_kwargs): if not cluster_utils.check_cluster_exists(n_cluster): return True return get_status(**n_kwargs) poll_description = { 'get_status': _get, 'kwargs': {'n_cluster': cluster, 'n_kwargs': kwargs}, 'timeout': _get_current_value(cluster, option), 'operation_name': operation_name, 'sleep': sleep_time, 'timeout_name': option.name } poll(**poll_description) def poll_status(option, operation_name, sleep): def decorator(f): @functools.wraps(f) def handler(*args, **kwargs): poll_description = { 'get_status': f, 'kwargs': kwargs, 'args': args, 'timeout': getattr(CONF.timeouts, option), 'operation_name': operation_name, 'timeout_name': option, 'sleep': sleep, } poll(**poll_description) return handler return decorator sahara-12.0.0/sahara/utils/wsgi.py0000664000175000017500000000502613656752032017011 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # Only (de)serialization utils hasn't been removed to decrease requirements # number. 
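# NOTE: hypothetical sketch of the generic poll helper from
# sahara/utils/poll_utils.py above; _volume_detached and volume_id are
# placeholders for a real status predicate and its argument.
#
#     from sahara.utils import poll_utils
#
#     def _volume_detached(volume_id):
#         return get_volume_status(volume_id) == 'available'  # placeholder
#
#     poll_utils.poll(_volume_detached, args=(volume_id,),
#                     operation_name='detach volume', timeout=300, sleep=5)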
"""Utility methods for working with WSGI servers.""" import datetime from oslo_serialization import jsonutils import six from sahara import exceptions from sahara.i18n import _ class ActionDispatcher(object): """Maps method name to local methods through action name.""" def dispatch(self, *args, **kwargs): """Find and call local method.""" action = kwargs.pop('action', 'default') action_method = getattr(self, str(action), self.default) return action_method(*args, **kwargs) def default(self, data): raise NotImplementedError() class DictSerializer(ActionDispatcher): """Default request body serialization.""" def serialize(self, data, action='default'): return self.dispatch(data, action=action) def default(self, data): return "" class JSONDictSerializer(DictSerializer): """Default JSON request body serialization.""" def default(self, data): def sanitizer(obj): if isinstance(obj, datetime.datetime): _dtime = obj - datetime.timedelta(microseconds=obj.microsecond) return _dtime.isoformat() return six.text_type(obj) return jsonutils.dumps(data, default=sanitizer) class TextDeserializer(ActionDispatcher): """Default request body deserialization.""" def deserialize(self, datastring, action='default'): return self.dispatch(datastring, action=action) def default(self, datastring): return {} class JSONDeserializer(TextDeserializer): def _from_json(self, datastring): try: return jsonutils.loads(datastring) except ValueError: msg = _("cannot understand JSON") raise exceptions.MalformedRequestBody(msg) def default(self, datastring): return {'body': self._from_json(datastring)} sahara-12.0.0/sahara/utils/xmlutils.py0000664000175000017500000001356313656752032017726 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
import re import xml.dom.minidom as xml import pkg_resources as pkg # hadoop.xml related utils def load_hadoop_xml_defaults(file_name, package='sahara'): doc = load_xml_document(file_name, package=package) configs = [] prop = doc.getElementsByTagName('property') for elements in prop: configs.append({ "name": get_text_from_node(elements, 'name'), "value": _adjust_field(get_text_from_node(elements, 'value')), "description": _adjust_field( get_text_from_node(elements, 'description')) }) return configs def parse_hadoop_xml_with_name_and_value(data): doc = xml.parseString(data) configs = [] prop = doc.getElementsByTagName('property') for elements in prop: configs.append({ 'name': get_text_from_node(elements, 'name'), 'value': get_text_from_node(elements, 'value') }) return configs def _get_node_element(element, name): element = element.getElementsByTagName(name) return element[0] if element and element[0].hasChildNodes() else None def create_hadoop_xml(configs, config_filter=None): doc = xml.Document() pi = doc.createProcessingInstruction('xml-stylesheet', 'type="text/xsl" ' 'href="configuration.xsl"') doc.insertBefore(pi, doc.firstChild) # Create the base element configuration = doc.createElement('configuration') doc.appendChild(configuration) default_configs = [] if config_filter is not None: default_configs = [cfg['name'] for cfg in config_filter] for name in sorted(configs): if name in default_configs or config_filter is None: add_property_to_configuration(doc, name, configs[name]) # Return newly created XML return doc.toprettyxml(indent=" ") def create_elements_xml(configs): doc = xml.Document() text = '' for name in sorted(configs): element = doc.createElement('property') add_text_element_to_element(doc, element, 'name', name) add_text_element_to_element(doc, element, 'value', configs[name]) text += element.toprettyxml(indent=" ") return text # basic utils def load_xml_document(file_name, strip=False, package='sahara'): fname = pkg.resource_filename(package, file_name) if strip: with open(fname, "r") as f: doc = "".join(line.strip() for line in f) return xml.parseString(doc) else: return xml.parse(fname) def get_text_from_node(element, name): element = element.getElementsByTagName(name) if element else None return element[0].firstChild.nodeValue if ( element and element[0].hasChildNodes()) else '' def _adjust_field(text): return re.sub(r"\n *|\t", "", str(text)) def add_property_to_configuration(doc, name, value): prop = add_child(doc, 'configuration', 'property') add_text_element_to_element(doc, prop, 'name', name) add_text_element_to_element(doc, prop, 'value', value) def add_properties_to_configuration(doc, parent_for_conf, configs): get_and_create_if_not_exist(doc, parent_for_conf, 'configuration') for n in sorted(filter(lambda x: x, configs)): add_property_to_configuration(doc, n, configs[n]) def add_child(doc, parent, tag_to_add): actions = doc.getElementsByTagName(parent) actions[0].appendChild(doc.createElement(tag_to_add)) return actions[0].lastChild def add_element(doc, parent, element): actions = doc.getElementsByTagName(parent) actions[0].appendChild(element) return actions[0].lastChild def get_and_create_if_not_exist(doc, parent, element): prop = doc.getElementsByTagName(element) if len(prop) != 0: return prop[0] return add_child(doc, parent, element) def add_text_element_to_tag(doc, parent_tag, element, value): prop = add_child(doc, parent_tag, element) prop.appendChild(doc.createTextNode(str(value))) def add_text_element_to_element(doc, parent, element, value): 
parent.appendChild(doc.createElement(element)) try: parent.lastChild.appendChild(doc.createTextNode(str(value))) except UnicodeEncodeError: parent.lastChild.appendChild(doc.createTextNode( str(value.encode('utf8')))) def add_equal_separated_dict(doc, parent_tag, each_elem_tag, value): for k in sorted(filter(lambda x: x, value)): if k: add_text_element_to_tag(doc, parent_tag, each_elem_tag, "%s=%s" % (k, value[k])) def add_attributes_to_element(doc, tag, attributes): element = doc.getElementsByTagName(tag)[0] for name, value in attributes.items(): element.setAttribute(name, value) def add_tagged_list(doc, parent_tag, each_elem_tag, values): for v in values: add_text_element_to_tag(doc, parent_tag, each_elem_tag, v) def get_property_dict(elem): res = {} properties = elem.getElementsByTagName('property') for prop in properties: k = get_text_from_node(prop, 'name') v = get_text_from_node(prop, 'value') res[k] = v return res def get_param_dict(elem): res = {} params = elem.getElementsByTagName('param') for param in params: k, v = param.firstChild.nodeValue.split('=') res[k] = v return res sahara-12.0.0/sahara/utils/edp.py0000664000175000017500000001110313656752032016601 0ustar zuulzuul00000000000000# Copyright (c) 2014 Red Hat Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_utils import uuidutils from sahara.utils import files # job execution status JOB_STATUS_DONEWITHERROR = 'DONEWITHERROR' JOB_STATUS_FAILED = 'FAILED' JOB_STATUS_KILLED = 'KILLED' JOB_STATUS_PENDING = 'PENDING' JOB_STATUS_READYTORUN = 'READYTORUN' JOB_STATUS_RUNNING = 'RUNNING' JOB_STATUS_SUCCEEDED = 'SUCCEEDED' JOB_STATUS_TOBEKILLED = 'TOBEKILLED' JOB_STATUS_TOBESUSPENDED = 'TOBESUSPENDED' JOB_STATUS_PREP = 'PREP' JOB_STATUS_PREPSUSPENDED = 'PREPSUSPENDED' JOB_STATUS_SUSPENDED = 'SUSPENDED' JOB_STATUS_SUSPEND_FAILED = 'SUSPENDFAILED' # statuses for suspended jobs JOB_STATUSES_SUSPENDIBLE = [ JOB_STATUS_PREP, JOB_STATUS_RUNNING ] # statuses for terminated jobs JOB_STATUSES_TERMINATED = [ JOB_STATUS_DONEWITHERROR, JOB_STATUS_FAILED, JOB_STATUS_KILLED, JOB_STATUS_SUCCEEDED, JOB_STATUS_SUSPEND_FAILED ] # job type separator character JOB_TYPE_SEP = '.' 
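# NOTE: illustrative sketch only (not part of the upstream module); it shows
# how the job-type helpers defined further below interpret this separator.
#
#     from sahara.utils import edp
#
#     edp.split_job_type('MapReduce.Streaming')
#     # -> ['MapReduce', 'Streaming']
#     edp.compare_job_type('MapReduce.Streaming', 'MapReduce')
#     # -> True (subtype is ignored unless strict=True)
#     edp.compare_job_type('MapReduce.Streaming', 'MapReduce', strict=True)
#     # -> False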
# job sub types available JOB_SUBTYPE_STREAMING = 'Streaming' JOB_SUBTYPE_NONE = '' # job types available JOB_TYPE_HIVE = 'Hive' JOB_TYPE_JAVA = 'Java' JOB_TYPE_MAPREDUCE = 'MapReduce' JOB_TYPE_SPARK = 'Spark' JOB_TYPE_STORM = 'Storm' JOB_TYPE_PYLEUS = 'Storm.Pyleus' JOB_TYPE_MAPREDUCE_STREAMING = (JOB_TYPE_MAPREDUCE + JOB_TYPE_SEP + JOB_SUBTYPE_STREAMING) JOB_TYPE_PIG = 'Pig' JOB_TYPE_SHELL = 'Shell' # job type groupings available JOB_TYPES_ALL = [ JOB_TYPE_HIVE, JOB_TYPE_JAVA, JOB_TYPE_MAPREDUCE, JOB_TYPE_MAPREDUCE_STREAMING, JOB_TYPE_PIG, JOB_TYPE_SHELL, JOB_TYPE_SPARK, JOB_TYPE_STORM, JOB_TYPE_PYLEUS ] JOB_TYPES_ACCEPTABLE_CONFIGS = { JOB_TYPE_HIVE: {"configs", "params"}, JOB_TYPE_PIG: {"configs", "params", "args"}, JOB_TYPE_MAPREDUCE: {"configs"}, JOB_TYPE_MAPREDUCE_STREAMING: {"configs"}, JOB_TYPE_JAVA: {"configs", "args"}, JOB_TYPE_SHELL: {"configs", "params", "args"}, JOB_TYPE_SPARK: {"configs", "args"}, JOB_TYPE_STORM: {"args"}, JOB_TYPE_PYLEUS: {} } # job actions JOB_ACTION_SUSPEND = 'suspend' JOB_ACTION_CANCEL = 'cancel' JOB_ACTION_TYPES_ACCEPTABLE = [ JOB_ACTION_SUSPEND, JOB_ACTION_CANCEL ] ADAPT_FOR_OOZIE = 'edp.java.adapt_for_oozie' SPARK_DRIVER_CLASSPATH = 'edp.spark.driver.classpath' ADAPT_SPARK_FOR_SWIFT = 'edp.spark.adapt_for_swift' def split_job_type(job_type): '''Split a job type string into a type and subtype The split is done on the first '.'. A subtype will always be returned, even if it is empty. ''' type_info = job_type.split(JOB_TYPE_SEP, 1) if len(type_info) == 1: type_info.append('') return type_info def compare_job_type(job_type, *args, **kwargs): '''Compare a job type against a list of job types :param job_type: The job type being compared :param *args: A list of types to compare against :param strict: Passed as a keyword arg. Default is False. If strict is False, job_type will be compared with and without its subtype indicator. :returns: True if job_type is present in the list, False otherwise ''' strict = kwargs.get('strict', False) res = job_type in args if res or strict or JOB_TYPE_SEP not in job_type: return res jtype, jsubtype = split_job_type(job_type) return jtype in args def get_hive_shared_conf_path(hdfs_user): return "/user/%s/conf/hive-site.xml" % hdfs_user def is_adapt_for_oozie_enabled(configs): return configs.get(ADAPT_FOR_OOZIE, False) def is_adapt_spark_for_swift_enabled(configs): return configs.get(ADAPT_SPARK_FOR_SWIFT, False) def spark_driver_classpath(configs): # Return None in case when you need to use default value return configs.get(SPARK_DRIVER_CLASSPATH) def get_builtin_binaries(job, configs): if job.type == JOB_TYPE_JAVA: if is_adapt_for_oozie_enabled(configs): path = 'service/edp/resources/edp-main-wrapper.jar' name = 'builtin-%s.jar' % uuidutils.generate_uuid() return [{'raw': files.get_file_binary(path), 'name': name}] return [] sahara-12.0.0/sahara/utils/resources.py0000664000175000017500000000412513656752032020051 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
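# NOTE: illustrative sketch only (not part of the upstream module). It shows
# the intended behaviour of the ``Resource`` wrapper defined below; the field
# values are made up.
#
#     from sahara.utils import resources
#
#     res = resources.Resource('cluster', {'id': '42', 'name': 'demo'})
#     res.name            # -> 'demo' (missing keys resolve to None)
#     res.wrapped_dict    # -> {'cluster': {'id': '42', 'name': 'demo'}}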
import inspect import six class BaseResource(object): __resource_name__ = 'base' __filter_cols__ = [] @property def dict(self): return self.to_dict() @property def wrapped_dict(self): return {self.__resource_name__: self.dict} @property def __all_filter_cols__(self): cls = self.__class__ if not hasattr(cls, '__mro_filter_cols__'): filter_cols = [] for base_cls in inspect.getmro(cls): filter_cols += getattr(base_cls, '__filter_cols__', []) cls.__mro_filter_cols__ = set(filter_cols) return cls.__mro_filter_cols__ def _filter_field(self, k): return k == '_sa_instance_state' or k in self.__all_filter_cols__ def to_dict(self): dictionary = self.__dict__.copy() return {k: v for k, v in six.iteritems(dictionary) if not self._filter_field(k)} def as_resource(self): return Resource(self.__resource_name__, self.to_dict()) class Resource(BaseResource): def __init__(self, _name, _info): self._name = _name self.__resource_name__ = _name self._info = _info def __getattr__(self, k): if k not in self.__dict__: return self._info.get(k) return self.__dict__[k] def __repr__(self): return '<%s %s>' % (self._name, self._info) def __eq__(self, other): return self._name == other._name and self._info == other._info def to_dict(self): return self._info.copy() sahara-12.0.0/sahara/utils/network.py0000664000175000017500000000310113656752032017521 0ustar zuulzuul00000000000000# Copyright (c) 2016 Red Hat Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_config import cfg CONF = cfg.CONF def has_floating_ip(instance): # Alternatively in each of these cases # we could use the nova client to look up the # ips for the instance and check the attributes # to ensure that the management_ip is a floating # ip, but a simple comparison with the internal_ip # corresponds with the logic in # sahara.service.networks.init_instances_ips if not instance.node_group.floating_ip_pool: return False # in the neutron case comparing ips is an extra simple check ... # maybe allocation of a floating ip failed for some reason # (Alternatively in each of these cases # we could use the nova client to look up the # ips for the instance and check the attributes # to ensure that the management_ip is a floating # ip, but a simple comparison with the internal_ip # corresponds with the logic in # sahara.service.networks.init_instances_ips) return instance.management_ip != instance.internal_ip sahara-12.0.0/sahara/utils/procutils.py0000664000175000017500000000570613656752032020071 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. 
# See the License for the specific language governing permissions and # limitations under the License. import os import pickle # nosec import sys from eventlet.green import subprocess from eventlet import timeout as e_timeout from sahara import context from sahara import exceptions def _get_sub_executable(): return '%s/_sahara-subprocess' % os.path.dirname(sys.argv[0]) def start_subprocess(): return subprocess.Popen((sys.executable, _get_sub_executable()), close_fds=True, bufsize=0, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) def run_in_subprocess(proc, func, args=None, kwargs=None, interactive=False): args = args or () kwargs = kwargs or {} try: # TODO(elmiko) these pickle usages should be reinvestigated to # determine a more secure manner to deploy remote commands. pickle.dump(func, proc.stdin, protocol=2) # nosec pickle.dump(args, proc.stdin, protocol=2) # nosec pickle.dump(kwargs, proc.stdin, protocol=2) # nosec proc.stdin.flush() if not interactive: result = pickle.load(proc.stdout) # nosec if 'exception' in result: raise exceptions.SubprocessException(result['exception']) return result['output'] finally: # NOTE(dmitryme): in oslo.concurrency's file processutils.py it # is suggested to sleep a little between calls to multiprocessing. # That should allow it make some necessary cleanup context.sleep(0) def _finish(cleanup_func): cleanup_func() sys.stdin.close() sys.stdout.close() sys.stderr.close() sys.exit(0) def shutdown_subprocess(proc, cleanup_func): try: with e_timeout.Timeout(5): # timeout would mean that our single-threaded subprocess # is hung on previous task which blocks _finish to complete run_in_subprocess(proc, _finish, (cleanup_func,)) except BaseException: # exception could be caused by either timeout, or # successful shutdown, ignoring anyway pass finally: kill_subprocess(proc) def kill_subprocess(proc): proc.stdin.close() proc.stdout.close() proc.stderr.close() try: proc.kill() proc.wait() except OSError: # could be caused by process already dead, so ignoring pass sahara-12.0.0/sahara/utils/remote.py0000664000175000017500000001260113656752032017330 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # Copyright (c) 2013 Hortonworks, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import abc from oslo_config import cfg import six from sahara import exceptions as ex from sahara.i18n import _ # These options are for SSH remote only ssh_opts = [ cfg.IntOpt('global_remote_threshold', default=100, help='Maximum number of remote operations that will ' 'be running at the same time. Note that each ' 'remote operation requires its own process to ' 'run.'), cfg.IntOpt('cluster_remote_threshold', default=70, help='The same as global_remote_threshold, but for ' 'a single cluster.'), cfg.StrOpt('proxy_command', default='', help='Proxy command used to connect to instances. If set, this ' 'command should open a netcat socket, that Sahara will use for ' 'SSH and HTTP connections. Use {host} and {port} to describe ' 'the destination. 
Other available keywords: {tenant_id}, ' '{network_id}, {router_id}.'), cfg.BoolOpt('proxy_command_use_internal_ip', default=False, help='Force proxy_command usage to be consuming internal IP ' 'always, instead of management IP. Ignored if proxy_command ' 'is not set.') ] CONF = cfg.CONF CONF.register_opts(ssh_opts) DRIVER = None @six.add_metaclass(abc.ABCMeta) class RemoteDriver(object): @abc.abstractmethod def setup_remote(self, engine): """Performs driver initialization.""" @abc.abstractmethod def get_remote(self, instance): """Returns driver specific Remote.""" @abc.abstractmethod def get_userdata_template(self): """Returns userdata template preparing instance to work with driver.""" @abc.abstractmethod def get_type_and_version(self): """Returns engine type and version Result should be in the form 'type.major.minor'. """ @six.add_metaclass(abc.ABCMeta) class TerminalOnlyRemote(object): @abc.abstractmethod def execute_command(self, cmd, run_as_root=False, get_stderr=False, raise_when_error=True, timeout=300): """Execute specified command remotely using existing ssh connection. Return exit code, stdout data and stderr data of the executed command. """ @abc.abstractmethod def get_os_distrib(self): """Returns the OS distribution running on the target machine.""" @six.add_metaclass(abc.ABCMeta) class Remote(TerminalOnlyRemote): @abc.abstractmethod def get_neutron_info(self): """Returns dict which later could be passed to get_http_client.""" @abc.abstractmethod def get_http_client(self, port, info=None): """Returns HTTP client for a given instance's port.""" @abc.abstractmethod def close_http_session(self, port): """Closes cached HTTP session for a given instance's port.""" @abc.abstractmethod def write_file_to(self, remote_file, data, run_as_root=False, timeout=120): """Create remote file and write the given data to it. Uses existing ssh connection. """ @abc.abstractmethod def append_to_file(self, r_file, data, run_as_root=False, timeout=120): """Append the given data to remote file. Uses existing ssh connection. """ @abc.abstractmethod def write_files_to(self, files, run_as_root=False, timeout=120): """Copy file->data dictionary in a single ssh connection.""" @abc.abstractmethod def append_to_files(self, files, run_as_root=False, timeout=120): """Copy file->data dictionary in a single ssh connection.""" @abc.abstractmethod def read_file_from(self, remote_file, run_as_root=False, timeout=120): """Read remote file from the specified host and return given data.""" @abc.abstractmethod def replace_remote_string(self, remote_file, old_str, new_str, timeout=120): """Replaces strings in remote file using sed command.""" def setup_remote(driver, engine): global DRIVER DRIVER = driver DRIVER.setup_remote(engine) def get_remote_type_and_version(): return DRIVER.get_type_and_version() def _check_driver_is_loaded(): if not DRIVER: raise ex.SystemError(_('Remote driver is not loaded. Most probably ' 'you see this error because you are running ' 'Sahara in distributed mode and it is broken.' 'Try running sahara-all instead.')) def get_remote(instance): """Returns Remote for a given instance.""" _check_driver_is_loaded() return DRIVER.get_remote(instance) def get_userdata_template(): """Returns userdata template as a string.""" _check_driver_is_loaded() return DRIVER.get_userdata_template() sahara-12.0.0/sahara/utils/tempfiles.py0000664000175000017500000000220213656752032020021 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import contextlib import shutil import tempfile from sahara import exceptions as ex from sahara.i18n import _ @contextlib.contextmanager def tempdir(**kwargs): argdict = kwargs.copy() if 'dir' not in argdict: argdict['dir'] = '/tmp/' tmpdir = tempfile.mkdtemp(**argdict) try: yield tmpdir finally: try: shutil.rmtree(tmpdir) except OSError as e: raise ex.SystemError( _("Failed to delete temp dir %(dir)s (reason: %(reason)s)") % {'dir': tmpdir, 'reason': e}) sahara-12.0.0/sahara/utils/crypto.py0000664000175000017500000000444513656752032017364 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import os from oslo_concurrency import processutils import paramiko import six from sahara import exceptions as ex from sahara.i18n import _ from sahara.utils import tempfiles def to_paramiko_private_key(pkey): """Convert private key (str) to paramiko-specific RSAKey object.""" return paramiko.RSAKey(file_obj=six.StringIO(pkey)) def generate_key_pair(key_length=2048): """Create RSA key pair with specified number of bits in key. Returns tuple of private and public keys. """ with tempfiles.tempdir() as tmpdir: keyfile = os.path.join(tmpdir, 'tempkey') # The key is generated in the old PEM format, instead of the native # format of OpenSSH >=6.5, because paramiko does not support it: # https://github.com/paramiko/paramiko/issues/602 args = [ 'ssh-keygen', '-q', # quiet '-N', '', # w/o passphrase '-m', 'PEM', # old PEM format '-t', 'rsa', # create key of rsa type '-f', keyfile, # filename of the key file '-C', 'Generated-by-Sahara' # key comment ] if key_length is not None: args.extend(['-b', key_length]) processutils.execute(*args) if not os.path.exists(keyfile): raise ex.SystemError(_("Private key file hasn't been created")) with open(keyfile) as keyfile_fd: private_key = keyfile_fd.read() public_key_path = keyfile + '.pub' if not os.path.exists(public_key_path): raise ex.SystemError(_("Public key file hasn't been created")) with open(public_key_path) as public_key_path_fd: public_key = public_key_path_fd.read() return private_key, public_key sahara-12.0.0/sahara/utils/general.py0000664000175000017500000000404313656752032017453 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import re import six NATURAL_SORT_RE = re.compile('([0-9]+)') def find_dict(iterable, **rules): """Search for dict in iterable of dicts using specified key-value rules.""" for item in iterable: # assert all key-value pairs from rules dict ok = True for k, v in six.iteritems(rules): ok = ok and k in item and item[k] == v if ok: return item return None def find(lst, **kwargs): for obj in lst: match = True for attr, value in kwargs.items(): if getattr(obj, attr) != value: match = False if match: return obj return None def get_by_id(lst, id): for obj in lst: if obj.id == id: return obj return None # Taken from http://stackoverflow.com/questions/4836710/does- # python-have-a-built-in-function-for-string-natural-sort def natural_sort_key(s): return [int(text) if text.isdigit() else text.lower() for text in re.split(NATURAL_SORT_RE, s)] def generate_instance_name(cluster_name, node_group_name, index): return ("%s-%s-%03d" % (cluster_name, node_group_name, index)).lower() def generate_auto_security_group_name(node_group): return ("%s-%s-%s" % (node_group.cluster.name, node_group.name, node_group.id[:8])).lower() def generate_aa_group_name(cluster_name, server_group_index): return ("%s-aa-group-%d" % (cluster_name, server_group_index)).lower() sahara-12.0.0/sahara/utils/types.py0000664000175000017500000000530613656752032017205 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
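# NOTE: illustrative sketch only (not part of the upstream module); it shows
# the intended behaviour of the frozen collections and numeric helpers
# defined below.
#
#     from sahara import exceptions as ex
#     from sahara.utils import types
#
#     fd = types.FrozenDict(a=1)
#     try:
#         fd['b'] = 2                  # any mutation is rejected
#     except ex.FrozenClassError:
#         pass
#
#     types.transform_to_num('5')      # -> 5 (int)
#     types.transform_to_num('1.5')    # -> 1.5 (float)
#     types.transform_to_num('abc')    # -> 'abc' (returned unchanged)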
from sahara import exceptions as ex class FrozenList(list): def append(self, p_object): raise ex.FrozenClassError(self) def extend(self, iterable): raise ex.FrozenClassError(self) def insert(self, index, p_object): raise ex.FrozenClassError(self) def pop(self, index=None): raise ex.FrozenClassError(self) def remove(self, value): raise ex.FrozenClassError(self) def reverse(self): raise ex.FrozenClassError(self) def sort(self, cmp=None, key=None, reverse=False): raise ex.FrozenClassError(self) def __add__(self, y): raise ex.FrozenClassError(self) def __delitem__(self, y): raise ex.FrozenClassError(self) def __delslice__(self, i, j): raise ex.FrozenClassError(self) def __iadd__(self, y): raise ex.FrozenClassError(self) def __imul__(self, y): raise ex.FrozenClassError(self) def __setitem__(self, i, y): raise ex.FrozenClassError(self) def __setslice__(self, i, j, y): raise ex.FrozenClassError(self) class FrozenDict(dict): def clear(self): raise ex.FrozenClassError(self) def pop(self, k, d=None, force=False): if force: return super(FrozenDict, self).pop(k, d) raise ex.FrozenClassError(self) def popitem(self): raise ex.FrozenClassError(self) def setdefault(self, k, d=None): raise ex.FrozenClassError(self) def update(self, E=None, **F): raise ex.FrozenClassError(self) def __delitem__(self, y): raise ex.FrozenClassError(self) def __setitem__(self, i, y): raise ex.FrozenClassError(self) def is_int(s): try: int(s) return True except Exception: return False def transform_to_num(s): # s can be a string or non-string. try: return int(str(s)) except ValueError: try: return float(str(s)) except ValueError: return s class Page(list): def __init__(self, l, prev=None, next=None): super(Page, self).__init__(l) self.prev = prev self.next = next sahara-12.0.0/sahara/utils/configs.py0000664000175000017500000000204413656752032017465 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. def merge_configs(*configs): """Merge configs in special format. It supports merging of configs in the following format: applicable_target -> config_name -> config_value """ result = {} for config in configs: if config: for a_target in config: if a_target not in result or not result[a_target]: result[a_target] = {} result[a_target].update(config[a_target]) return result sahara-12.0.0/sahara/utils/api_validator.py0000664000175000017500000001316713656752032020663 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
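# NOTE: illustrative sketch only (not part of the upstream module); it shows
# how the validator defined below is typically driven. The schema is a
# hypothetical minimal example.
#
#     from sahara.utils import api_validator
#
#     schema = {
#         'type': 'object',
#         'properties': {
#             'name': {'type': 'string', 'format': 'valid_name'},
#         },
#         'required': ['name'],
#     }
#     validator = api_validator.ApiValidator(schema)
#     validator.validate({'name': 'my-cluster-01'})  # raises on invalid input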
import re import jsonschema from oslo_utils import uuidutils import six from sahara.service.edp.job_binaries import manager as jb_manager @jsonschema.FormatChecker.cls_checks('valid_name_hostname') def validate_name_hostname_format(entry): if not isinstance(entry, six.string_types) or not entry: # should fail type or length validation return True res = re.match(r"^(([a-zA-Z]|[a-zA-Z][a-zA-Z0-9\-]" r"*[a-zA-Z0-9])\.)*([A-Za-z]|[A-Za-z]" r"[A-Za-z0-9\-]*[A-Za-z0-9])$", entry) return res is not None @jsonschema.FormatChecker.cls_checks('valid_name') def validate_name_format(entry): if not isinstance(entry, six.string_types): # should fail type validation return True res = re.match(r"^[a-zA-Z0-9][a-zA-Z0-9\-_\.]*$", entry) return res is not None @jsonschema.FormatChecker.cls_checks('valid_keypair_name') def validate_keypair_name_format(entry): if not isinstance(entry, six.string_types): # should fail type validation return True # this follows the validation put forth by nova for keypair names res = re.match(r'^[a-zA-Z0-9\-_ ]+$', entry) return res is not None @jsonschema.FormatChecker.cls_checks('valid_job_location') def validate_job_location_format(entry): if not isinstance(entry, six.string_types): # should fail type validation return True return jb_manager.JOB_BINARIES \ .get_job_binary_by_url(entry) \ .validate_job_location_format(entry) @jsonschema.FormatChecker.cls_checks('valid_tag') def validate_valid_tag_format(entry): if not isinstance(entry, six.string_types): # should fail type validation return True res = re.match(r"^(([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-_]" r"*[a-zA-Z0-9])\.)*([A-Za-z0-9]|[A-Za-z0-9]" r"[A-Za-z0-9\-_]*[A-Za-z0-9])$", entry) return res is not None @jsonschema.FormatChecker.cls_checks('uuid') def validate_uuid_format(entry): if not isinstance(entry, six.string_types): # should fail type validation return True return uuidutils.is_uuid_like(entry) @jsonschema.FormatChecker.cls_checks('posix_path') def validate_posix_path(entry): if not isinstance(entry, six.string_types): # should fail type validation return True res = re.match("^(/([A-Z]|[a-z]|[0-9]|\-|_)+)+$", entry) return res is not None class ConfigTypeMeta(type): def __instancecheck__(cls, instance): # configs should be dict if not isinstance(instance, dict): return False # check dict content for applicable_target, configs in six.iteritems(instance): # upper-level dict keys (applicable targets) should be strings if not isinstance(applicable_target, six.string_types): return False # upper-level dict values should be dicts if not isinstance(configs, dict): return False # check internal dict content for config_name, config_value in six.iteritems(configs): # internal dict keys should be strings if not isinstance(config_name, six.string_types): return False # internal dict values should be strings or integers or bools if not isinstance(config_value, (six.string_types, six.integer_types)): return False return True class SimpleConfigTypeMeta(type): def __instancecheck__(cls, instance): # configs should be dict if not isinstance(instance, dict): return False # check dict content for conf_name, conf_value in six.iteritems(instance): # keys should be strings, values should be int, string or bool if not isinstance(conf_name, six.string_types): return False if not isinstance(conf_value, (six.string_types, six.integer_types)): return False return True @six.add_metaclass(ConfigTypeMeta) class ConfigsType(dict): pass @six.add_metaclass(SimpleConfigTypeMeta) class SimpleConfigsType(dict): pass class FlavorTypeMeta(type): def 
__instancecheck__(cls, instance): try: int(instance) except (ValueError, TypeError): return (isinstance(instance, six.string_types) and uuidutils.is_uuid_like(instance)) return (isinstance(instance, six.integer_types + six.string_types) and type(instance) != bool) @six.add_metaclass(FlavorTypeMeta) class FlavorType(object): pass class ApiValidator(jsonschema.Draft4Validator): def __init__(self, schema): format_checker = jsonschema.FormatChecker() super(ApiValidator, self).__init__( schema, format_checker=format_checker, types={ "configs": ConfigsType, "flavor": FlavorType, "simple_config": SimpleConfigsType, }) sahara-12.0.0/sahara/utils/openstack/0000775000175000017500000000000013656752227017460 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/utils/openstack/__init__.py0000664000175000017500000000000013656752032021551 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/utils/openstack/keystone.py0000664000175000017500000002353113656752032021671 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import re from keystoneauth1 import identity as keystone_identity from keystoneclient.v2_0 import client as keystone_client from keystoneclient.v3 import client as keystone_client_v3 from oslo_config import cfg from oslo_log import log as logging from sahara import context from sahara.service import sessions from sahara.utils.openstack import base LOG = logging.getLogger(__name__) def _get_keystoneauth_cfg(name): """get the keystone auth cfg Fetch value of keystone_authtoken group from config file when not available as part of GroupAttr. :rtype: String :param name: property name to be retrieved """ try: value_list = CONF._namespace._get_file_value([('keystone_authtoken', name)]) if isinstance(value_list, tuple): value_list = value_list[0] cfg_val = value_list[0] if name == "auth_url" and not re.findall(r'\/v[2-3].*', cfg_val): cfg_val += "/v3" return cfg_val except KeyError: if name in ["user_domain_name", "project_domain_name"]: return "Default" else: raise def validate_config(): if any(map(lambda o: getattr(CONF.trustee, o) is None, CONF.trustee)): for replace_opt in CONF.trustee: CONF.set_override(replace_opt, _get_keystoneauth_cfg(replace_opt), group="trustee") LOG.warning(""" __ __ _ \ \ / /_ _ _ __ _ __ (_)_ __ __ _ \ \ /\ / / _` | '__| '_ \| | '_ \ / _` | \ V V / (_| | | | | | | | | | | (_| | \_/\_/ \__,_|_| |_| |_|_|_| |_|\__, | |___/ Using the [keystone_authtoken] user as the Sahara trustee user directly is deprecated. Please add the trustee credentials you need to the [trustee] section of your sahara.conf file. """) opts = [ # TODO(alazarev) Move to [keystone] section cfg.BoolOpt('use_identity_api_v3', default=True, help='Enables Sahara to use Keystone API v3. 
' 'If that flag is disabled, ' 'per-job clusters will not be terminated ' 'automatically.') ] ssl_opts = [ cfg.BoolOpt('api_insecure', default=False, help='Allow to perform insecure SSL requests to keystone.'), cfg.StrOpt('ca_file', help='Location of ca certificates file to use for keystone ' 'client requests.'), cfg.StrOpt("endpoint_type", default="internalURL", help="Endpoint type for keystone client requests") ] keystone_group = cfg.OptGroup(name='keystone', title='Keystone client options') trustee_opts = [ cfg.StrOpt('username', help='Username for trusts creation'), cfg.StrOpt('password', help='Password for trusts creation'), cfg.StrOpt('project_name', help='Project name for trusts creation'), cfg.StrOpt('user_domain_name', help='User domain name for trusts creation', default="Default"), cfg.StrOpt('project_domain_name', help='Project domain name for trusts creation', default="Default"), cfg.StrOpt('auth_url', help='Auth url for trusts creation'), ] trustee_group = cfg.OptGroup(name='trustee', title="Trustee options") CONF = cfg.CONF CONF.register_group(keystone_group) CONF.register_group(trustee_group) CONF.register_opts(opts) CONF.register_opts(ssl_opts, group=keystone_group) CONF.register_opts(trustee_opts, group=trustee_group) def auth(): '''Return a token auth plugin for the current context.''' ctx = context.current() return ctx.auth_plugin or token_auth(token=context.get_auth_token(), project_id=ctx.tenant_id) def auth_for_admin(project_name=None, trust_id=None): '''Return an auth plugin for the admin. :param project_name: a project to scope the auth with (optional). :param trust_id: a trust to scope the auth with (optional). :returns: an auth plugin object for the admin. ''' # TODO(elmiko) revisit the project_domain_name if we start getting # into federated authentication. it will need to match the domain that # the project_name exists in. auth = _password_auth( username=CONF.trustee.username, password=CONF.trustee.password, project_name=project_name, user_domain_name=CONF.trustee.user_domain_name, project_domain_name=CONF.trustee.project_domain_name, trust_id=trust_id) return auth def auth_for_proxy(username, password, trust_id=None): '''Return an auth plugin for the proxy user. :param username: the name of the proxy user. :param password: the proxy user's password. :param trust_id: a trust to scope the auth with (optional). :returns: an auth plugin object for the proxy user. ''' auth = _password_auth( username=username, password=password, user_domain_name=CONF.proxy_user_domain_name, trust_id=trust_id) return auth def client(): '''Return the current context client.''' return client_from_auth(auth()) def client_for_admin(): '''Return the Sahara admin user client.''' auth = auth_for_admin( project_name=CONF.trustee.project_name) return client_from_auth(auth) def client_from_auth(auth): '''Return a session based client from the auth plugin provided. A session is obtained from the global session cache. :param auth: the auth plugin object to use in client creation. :returns: a keystone client ''' session = sessions.cache().get_session(sessions.SESSION_TYPE_KEYSTONE) if CONF.use_identity_api_v3: client_class = keystone_client_v3.Client else: client_class = keystone_client.Client return client_class(session=session, auth=auth) def project_id_from_auth(auth): '''Return the project id associated with an auth plugin. :param auth: the auth plugin to inspect. :returns: the project id associated with the auth plugin. 
''' return auth.get_project_id( sessions.cache().get_session(sessions.SESSION_TYPE_KEYSTONE)) def service_catalog_from_auth(auth): '''Return the service catalog associated with an auth plugin. :param auth: the auth plugin to inspect. :returns: a list containing the service catalog. ''' access_info = auth.get_access( sessions.cache().get_session(sessions.SESSION_TYPE_KEYSTONE)) if access_info.has_service_catalog(): return access_info.service_catalog.catalog else: return [] def token_auth(token, project_id=None, project_name=None, project_domain_name='Default'): '''Return a token auth plugin object. :param token: the token to use for authentication. :param project_id: the project(ex. tenant) id to scope the auth. :returns: a token auth plugin object. ''' token_kwargs = dict( auth_url=base.retrieve_auth_url(CONF.keystone.endpoint_type), token=token ) if CONF.use_identity_api_v3: token_kwargs.update(dict( project_id=project_id, project_name=project_name, project_domain_name=project_domain_name, )) auth = keystone_identity.v3.Token(**token_kwargs) else: token_kwargs.update(dict( tenant_id=project_id, tenant_name=project_name, )) auth = keystone_identity.v2.Token(**token_kwargs) return auth def token_from_auth(auth): '''Return an authentication token from an auth plugin. :param auth: the auth plugin to acquire a token from. :returns: an auth token in string format. ''' return sessions.cache().token_for_auth(auth) def user_id_from_auth(auth): '''Return a user id associated with an auth plugin. :param auth: the auth plugin to inspect. :returns: a token associated with the auth. ''' return auth.get_user_id(sessions.cache().get_session( sessions.SESSION_TYPE_KEYSTONE)) def _password_auth(username, password, project_name=None, user_domain_name=None, project_domain_name=None, trust_id=None): '''Return a password auth plugin object. :param username: the user to authenticate as. :param password: the user's password. :param project_name: the project(ex. tenant) name to scope the auth. :param user_domain_name: the domain the user belongs to. :param project_domain_name: the domain the project belongs to. :param trust_id: a trust id to scope the auth. :returns: a password auth plugin object. ''' passwd_kwargs = dict( auth_url=CONF.trustee.auth_url, username=username, password=password ) if CONF.use_identity_api_v3: passwd_kwargs.update(dict( project_name=project_name, user_domain_name=user_domain_name, project_domain_name=project_domain_name, trust_id=trust_id )) auth = keystone_identity.v3.Password(**passwd_kwargs) else: passwd_kwargs.update(dict( tenant_name=project_name, trust_id=trust_id )) auth = keystone_identity.v2.Password(**passwd_kwargs) return auth sahara-12.0.0/sahara/utils/openstack/heat.py0000664000175000017500000000775313656752032020761 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
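# NOTE: illustrative sketch only (not part of the upstream module). The heat
# helpers below assume an active Sahara context (so the service catalog and
# auth plugin can be resolved); ``cluster`` is a hypothetical cluster object
# exposing ``stack_name``.
#
#     from sahara.utils.openstack import heat
#
#     stack = heat.get_stack(cluster.stack_name, raise_on_missing=False)
#     if stack is not None:
#         heat.delete_stack(cluster)   # blocks until the stack is gone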
from heatclient import client as heat_client from oslo_config import cfg from sahara import context from sahara import exceptions as ex from sahara.i18n import _ from sahara.service import sessions from sahara.utils.openstack import base from sahara.utils.openstack import keystone opts = [ cfg.BoolOpt('api_insecure', default=False, help='Allow to perform insecure SSL requests to heat.'), cfg.StrOpt('ca_file', help='Location of ca certificates file to use for heat ' 'client requests.'), cfg.StrOpt("endpoint_type", default="internalURL", help="Endpoint type for heat client requests") ] heat_group = cfg.OptGroup(name='heat', title='Heat client options') CONF = cfg.CONF CONF.register_group(heat_group) CONF.register_opts(opts, group=heat_group) def client(): ctx = context.ctx() session = sessions.cache().get_heat_session() heat_url = base.url_for(ctx.service_catalog, 'orchestration', endpoint_type=CONF.heat.endpoint_type) return heat_client.Client( '1', endpoint=heat_url, session=session, auth=keystone.auth(), region_name=CONF.os_region_name) def get_stack(stack_name, raise_on_missing=True): for stack in base.execute_with_retries( client().stacks.list, show_hidden=True, filters={'name': stack_name}): return stack if not raise_on_missing: return None raise ex.NotFoundException({'stack': stack_name}, _('Failed to find stack %(stack)s')) def delete_stack(cluster): stack_name = cluster.stack_name base.execute_with_retries(client().stacks.delete, stack_name) stack = get_stack(stack_name, raise_on_missing=False) while stack is not None: # Valid states: IN_PROGRESS, empty and COMPLETE if stack.status in ['IN_PROGRESS', '', 'COMPLETE']: context.sleep(5) else: raise ex.HeatStackException( message=_( "Cannot delete heat stack {name}, reason: " "stack status: {status}, status reason: {reason}").format( name=stack_name, status=stack.status, reason=stack.stack_status_reason)) stack = get_stack(stack_name, raise_on_missing=False) def lazy_delete_stack(cluster): '''Attempt to delete stack once, but do not await successful deletion''' stack_name = cluster.stack_name base.execute_with_retries(client().stacks.delete, stack_name) def get_stack_outputs(cluster): stack = get_stack(cluster.stack_name) stack.get() return stack.outputs def _verify_completion(stack, is_update=False, last_update_time=None): # NOTE: expected empty status because status of stack # maybe is not set in heat database if stack.status in ['IN_PROGRESS', '']: return False if is_update and stack.status == 'COMPLETE': if stack.updated_time == last_update_time: return False return True def wait_stack_completion(cluster, is_update=False, last_updated_time=None): stack_name = cluster.stack_name stack = get_stack(stack_name) while not _verify_completion(stack, is_update, last_updated_time): context.sleep(1) stack = get_stack(stack_name) if stack.status != 'COMPLETE': raise ex.HeatStackException(stack.stack_status_reason) sahara-12.0.0/sahara/utils/openstack/swift.py0000664000175000017500000000705413656752032021166 0ustar zuulzuul00000000000000# Copyright (c) 2014 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. 
# See the License for the specific language governing permissions and # limitations under the License. from oslo_config import cfg import swiftclient from sahara import context from sahara.swift import swift_helper as sh from sahara.swift import utils as su from sahara.utils.openstack import base from sahara.utils.openstack import keystone as k opts = [ cfg.BoolOpt('api_insecure', default=False, help='Allow to perform insecure SSL requests to swift.'), cfg.StrOpt('ca_file', help='Location of ca certificates file to use for swift ' 'client requests.'), cfg.StrOpt("endpoint_type", default="internalURL", help="Endpoint type for swift client requests") ] swift_group = cfg.OptGroup(name='swift', title='Swift client options') CONF = cfg.CONF CONF.register_group(swift_group) CONF.register_opts(opts, group=swift_group) def client(username, password, trust_id=None): '''return a Swift client This will return a Swift client for the specified username scoped to the current context project, unless a trust identifier is specified. If a trust identifier is present then the Swift client will be created based on a preauthorized token generated by the username scoped to the trust identifier. :param username: The username for the Swift client :param password: The password associated with the username :param trust_id: A trust identifier for scoping the username (optional) :returns: A Swift client object ''' if trust_id: proxyauth = k.auth_for_proxy(username, password, trust_id) return client_from_token(k.token_from_auth(proxyauth)) else: return swiftclient.Connection( auth_version='3', cacert=CONF.swift.ca_file, insecure=CONF.swift.api_insecure, authurl=su.retrieve_auth_url(CONF.keystone.endpoint_type), user=username, key=password, tenant_name=sh.retrieve_tenant(), retries=CONF.retries.retries_number, retry_on_ratelimit=True, starting_backoff=CONF.retries.retry_after, max_backoff=CONF.retries.retry_after) def client_from_token(token=None): if not token: token = context.get_auth_token() '''return a Swift client authenticated from a token.''' return swiftclient.Connection(auth_version='3', cacert=CONF.swift.ca_file, insecure=CONF.swift.api_insecure, preauthurl=base.url_for( service_type="object-store", endpoint_type=CONF.swift.endpoint_type), preauthtoken=token, retries=CONF.retries.retries_number, retry_on_ratelimit=True, starting_backoff=CONF.retries.retry_after, max_backoff=CONF.retries.retry_after) sahara-12.0.0/sahara/utils/openstack/images.py0000664000175000017500000001363313656752032021277 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
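# NOTE: illustrative sketch only (not part of the upstream module). It shows
# the typical flow of the image registry helpers defined below; the
# ``image_id`` value, tag names and description are hypothetical, and an
# active Sahara context is assumed.
#
#     from sahara.utils.openstack import images
#
#     manager = images.image_manager()
#     manager.set_image_info(image_id, username='ubuntu',
#                            description='Ubuntu 16.04 with Hadoop 2.7')
#     manager.tag(image_id, ['vanilla', '2.7.1'])
#     registered = manager.list_registered(tags=['vanilla'])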
import functools import six from sahara.conductor import resource from sahara import exceptions as exc from sahara.utils.openstack import glance PROP_DESCR = '_sahara_description' PROP_USERNAME = '_sahara_username' PROP_TAG = '_sahara_tag_' PROP_ALL_TAGS = '_all_tags' def image_manager(): return SaharaImageManager() def wrap_entity(func): @functools.wraps(func) def handle(*args, **kwargs): res = func(*args, **kwargs) if isinstance(res, list): images = [] for image in res: image = _transform_image_props(image) images.append(resource.ImageResource(image)) return images else: res = _transform_image_props(res) return resource.ImageResource(res) return handle def _get_all_tags(image_props): tags = [] for key, value in image_props.iteritems(): if key.startswith(PROP_TAG) and value: tags.append(key) return tags def _get_meta_prop(image_props, prop, default=None): if PROP_ALL_TAGS == prop: return _get_all_tags(image_props) return image_props.get(prop, default) def _parse_tags(image_props): tags = _get_meta_prop(image_props, PROP_ALL_TAGS) return [t.replace(PROP_TAG, "") for t in tags] def _serialize_metadata(image): data = {} for key, value in image.iteritems(): if key.startswith('_sahara') and value: data[key] = value return data def _get_compat_values(image): data = {} # TODO(vgridnev): Drop these values from APIv2 data["OS-EXT-IMG-SIZE:size"] = image.size data['metadata'] = _serialize_metadata(image) data["minDisk"] = getattr(image, 'min_disk', 0) data["minRam"] = getattr(image, 'min_ram', 0) data["progress"] = getattr(image, 'progress', 100) data["status"] = image.status.upper() data['created'] = image.created_at data['updated'] = image.updated_at return data def _transform_image_props(image): data = _get_compat_values(image) data['username'] = _get_meta_prop(image, PROP_USERNAME, "") data['description'] = _get_meta_prop(image, PROP_DESCR, "") data['tags'] = _parse_tags(image) data['id'] = image.id data["name"] = image.name return data def _ensure_tags(tags): if not tags: return [] return [tags] if isinstance(tags, six.string_types) else tags class SaharaImageManager(object): """SaharaImageManager This class is intermediate layer between sahara and glanceclient.v2.images. It provides additional sahara properties for image such as description, image tags and image username. """ def __init__(self): self.client = glance.client().images @wrap_entity def get(self, image_id): image = self.client.get(image_id) return image @wrap_entity def find(self, **kwargs): images = self.client.list(**kwargs) num_matches = len(images) if num_matches == 0: raise exc.NotFoundException(kwargs, "No images matching %s.") elif num_matches > 1: raise exc.NoUniqueMatchException(response=images, query=kwargs) else: return images[0] @wrap_entity def list(self): return list(self.client.list()) def set_meta(self, image_id, meta): self.client.update(image_id, remove_props=None, **meta) def delete_meta(self, image_id, meta_list): self.client.update(image_id, remove_props=meta_list) def set_image_info(self, image_id, username, description=None): """Sets human-readable information for image. For example: Ubuntu 15 x64 with Java 1.7 and Apache Hadoop 2.1, ubuntu """ meta = {PROP_USERNAME: username} if description: meta[PROP_DESCR] = description self.set_meta(image_id, meta) def unset_image_info(self, image_id): """Unsets all Sahara-related information. It removes username, description and tags from the specified image. 
""" image = self.get(image_id) meta = [PROP_TAG + tag for tag in image.tags] if image.description is not None: meta += [PROP_DESCR] if image.username is not None: meta += [PROP_USERNAME] self.delete_meta(image_id, meta) def tag(self, image_id, tags): """Adds tags to the specified image.""" tags = _ensure_tags(tags) self.set_meta(image_id, {PROP_TAG + tag: 'True' for tag in tags}) def untag(self, image_id, tags): """Removes tags from the specified image.""" tags = _ensure_tags(tags) self.delete_meta(image_id, [PROP_TAG + tag for tag in tags]) def list_by_tags(self, tags): """Returns images having all of the specified tags.""" tags = _ensure_tags(tags) return [i for i in self.list() if set(tags).issubset(i.tags)] def list_registered(self, name=None, tags=None): tags = _ensure_tags(tags) images_list = [i for i in self.list() if i.username and set(tags).issubset(i.tags)] if name: return [i for i in images_list if i.name == name] else: return images_list def get_registered_image(self, image_id): img = self.get(image_id) if img.username: return img else: raise exc.ImageNotRegistered(image_id) sahara-12.0.0/sahara/utils/openstack/nova.py0000664000175000017500000000376313656752032021000 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from novaclient import client as nova_client from oslo_config import cfg from sahara.service import sessions import sahara.utils.openstack.base as base from sahara.utils.openstack import keystone opts = [ cfg.BoolOpt('api_insecure', default=False, help='Allow to perform insecure SSL requests to nova.'), cfg.StrOpt('ca_file', help='Location of ca certificates file to use for nova ' 'client requests.'), cfg.StrOpt("endpoint_type", default="internalURL", help="Endpoint type for nova client requests") ] nova_group = cfg.OptGroup(name='nova', title='Nova client options') CONF = cfg.CONF CONF.register_group(nova_group) CONF.register_opts(opts, group=nova_group) def client(): session = sessions.cache().get_session(sessions.SESSION_TYPE_NOVA) nova = nova_client.Client('2', session=session, auth=keystone.auth(), endpoint_type=CONF.nova.endpoint_type, region_name=CONF.os_region_name) return nova def get_flavor(**kwargs): return base.execute_with_retries(client().flavors.find, **kwargs) def get_instance_info(instance): return base.execute_with_retries( client().servers.get, instance.instance_id) def get_keypair(keypair_name): return base.execute_with_retries( client().keypairs.get, keypair_name) sahara-12.0.0/sahara/utils/openstack/manila.py0000664000175000017500000000454213656752032021272 0ustar zuulzuul00000000000000# Copyright (c) 2015 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import manilaclient.client as manila_client try: from manilaclient.common.apiclient import exceptions as manila_ex except ImportError: from manilaclient.openstack.common.apiclient import exceptions as manila_ex from oslo_config import cfg from sahara import context from sahara import exceptions as ex from sahara.i18n import _ from sahara.utils.openstack import base opts = [ cfg.StrOpt('api_version', default='1', help='Version of the manila API to use.'), cfg.BoolOpt('api_insecure', default=True, help='Allow to perform insecure SSL requests to manila.'), cfg.StrOpt('ca_file', help='Location of ca certificates file to use for manila ' 'client requests.') ] manila_group = cfg.OptGroup(name='manila', title='Manila client options') CONF = cfg.CONF CONF.register_group(manila_group) CONF.register_opts(opts, group=manila_group) MANILA_PREFIX = "manila://" def client(): ctx = context.ctx() args = { 'username': ctx.username, 'project_name': ctx.tenant_name, 'project_id': ctx.tenant_id, 'input_auth_token': context.get_auth_token(), 'auth_url': base.retrieve_auth_url(), 'service_catalog_url': base.url_for(ctx.service_catalog, 'share'), 'ca_cert': CONF.manila.ca_file, 'insecure': CONF.manila.api_insecure } return manila_client.Client(CONF.manila.api_version, **args) def get_share(client_instance, share_id, raise_on_error=False): try: return client_instance.shares.get(share_id) except manila_ex.NotFound: if raise_on_error: raise ex.NotFoundException( share_id, _("Share with id %s was not found.")) else: return None sahara-12.0.0/sahara/utils/openstack/cinder.py0000664000175000017500000000677113656752032021303 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2013 Mirantis Inc. # Copyright (c) 2014 Adrien Vergé # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
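# NOTE: illustrative sketch only (not part of the upstream module). The
# helpers below require an active Sahara context so the service catalog can
# be inspected; ``volume_id`` is hypothetical.
#
#     from sahara.utils.openstack import cinder
#
#     if cinder.check_cinder_exists():
#         volume = cinder.get_volume(volume_id)
#         print(volume.status)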
from cinderclient.v2 import client as cinder_client_v2 from cinderclient.v3 import client as cinder_client_v3 from keystoneauth1 import exceptions as keystone_exceptions from oslo_config import cfg from oslo_log import log as logging from sahara import context from sahara.service import sessions from sahara.utils.openstack import base from sahara.utils.openstack import keystone LOG = logging.getLogger(__name__) opts = [ cfg.IntOpt('api_version', default=3, help='Version of the Cinder API to use.', deprecated_name='cinder_api_version'), cfg.BoolOpt('api_insecure', default=False, help='Allow to perform insecure SSL requests to cinder.'), cfg.StrOpt('ca_file', help='Location of ca certificates file to use for cinder ' 'client requests.'), cfg.StrOpt("endpoint_type", default="internalURL", help="Endpoint type for cinder client requests") ] cinder_group = cfg.OptGroup(name='cinder', title='Cinder client options') CONF = cfg.CONF CONF.register_group(cinder_group) CONF.register_opts(opts, group=cinder_group) def validate_config(): if CONF.cinder.api_version == 2: LOG.warning('The Cinder v2 API is deprecated. You should set ' 'cinder.api_version=3 in your sahara.conf file.') elif CONF.cinder.api_version != 3: LOG.warning('Unsupported Cinder API version: {bad}. Please set a ' 'correct value for cinder.api_version in your ' 'sahara.conf file (currently supported versions are: ' '{supported}). Falling back to Cinder API version 3.' .format(bad=CONF.cinder.api_version, supported=[2, 3])) CONF.set_override('api_version', 3, group='cinder') def client(): session = sessions.cache().get_session(sessions.SESSION_TYPE_CINDER) auth = keystone.auth() if CONF.cinder.api_version == 2: cinder = cinder_client_v2.Client( session=session, auth=auth, endpoint_type=CONF.cinder.endpoint_type, region_name=CONF.os_region_name) else: cinder = cinder_client_v3.Client( session=session, auth=auth, endpoint_type=CONF.cinder.endpoint_type, region_name=CONF.os_region_name) return cinder def check_cinder_exists(): if CONF.cinder.api_version == 2: service_type = 'volumev2' else: service_type = 'volumev3' try: base.url_for(context.current().service_catalog, service_type, endpoint_type=CONF.cinder.endpoint_type) return True except keystone_exceptions.EndpointNotFound: return False def get_volume(volume_id): return base.execute_with_retries(client().volumes.get, volume_id) sahara-12.0.0/sahara/utils/openstack/glance.py0000664000175000017500000000313013656752032021252 0ustar zuulzuul00000000000000# Copyright (c) 2016 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
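# NOTE: illustrative sketch only (not part of the upstream module). A Sahara
# context must be active so that keystone.auth() can resolve credentials for
# the session-based client created below.
#
#     from sahara.utils.openstack import glance
#
#     for image in glance.client().images.list():
#         print(image.id, image.name)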
from glanceclient import client as glance_client from oslo_config import cfg from sahara.service import sessions from sahara.utils.openstack import keystone opts = [ cfg.BoolOpt('api_insecure', default=False, help='Allow to perform insecure SSL requests to glance.'), cfg.StrOpt('ca_file', help='Location of ca certificates file to use for glance ' 'client requests.'), cfg.StrOpt("endpoint_type", default="internalURL", help="Endpoint type for glance client requests"), ] glance_group = cfg.OptGroup(name='glance', title='Glance client options') CONF = cfg.CONF CONF.register_group(glance_group) CONF.register_opts(opts, group=glance_group) def client(): session = sessions.cache().get_session(sessions.SESSION_TYPE_GLANCE) glance = glance_client.Client('2', session=session, auth=keystone.auth(), interface=CONF.glance.endpoint_type) return glance sahara-12.0.0/sahara/utils/openstack/base.py0000664000175000017500000001025613656752032020742 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import re from keystoneauth1.access import service_catalog as keystone_service_catalog from keystoneauth1 import exceptions as keystone_ex from oslo_config import cfg from oslo_log import log as logging from oslo_serialization import jsonutils as json from six.moves.urllib import parse as urlparse from sahara import context from sahara import exceptions as ex LOG = logging.getLogger(__name__) # List of the errors, that can be retried ERRORS_TO_RETRY = [408, 413, 429, 500, 502, 503, 504] opts = [ cfg.IntOpt('retries_number', default=5, help='Number of times to retry the request to client before ' 'failing'), cfg.IntOpt('retry_after', default=10, help='Time between the retries to client (in seconds).') ] retries = cfg.OptGroup(name='retries', title='OpenStack clients calls retries') CONF = cfg.CONF CONF.register_group(retries) CONF.register_opts(opts, group=retries) def url_for(service_catalog=None, service_type='identity', endpoint_type="internalURL"): if not service_catalog: service_catalog = context.current().service_catalog try: return keystone_service_catalog.ServiceCatalogV2( json.loads(service_catalog)).url_for( service_type=service_type, interface=endpoint_type, region_name=CONF.os_region_name) except keystone_ex.EndpointNotFound: return keystone_service_catalog.ServiceCatalogV3( json.loads(service_catalog)).url_for( service_type=service_type, interface=endpoint_type, region_name=CONF.os_region_name) def prepare_auth_url(auth_url, version): info = urlparse.urlparse(auth_url) url_path = info.path.rstrip("/") # replacing current api version to empty string url_path = re.sub('/(v3/auth|v3|v2\.0)', '', url_path) url_path = (url_path + "/" + version).lstrip("/") return "%s://%s/%s" % (info[:2] + (url_path,)) def retrieve_auth_url(endpoint_type="internalURL", version=None): if not version: version = 'v3' if CONF.use_identity_api_v3 else 'v2.0' ctx = context.current() if ctx.service_catalog: auth_url = url_for(ctx.service_catalog, 'identity', endpoint_type) else: 
auth_url = CONF.trustee.auth_url return prepare_auth_url(auth_url, version) def execute_with_retries(method, *args, **kwargs): attempts = CONF.retries.retries_number + 1 while attempts > 0: try: return method(*args, **kwargs) except Exception as e: error_code = getattr(e, 'http_status', None) or getattr( e, 'status_code', None) or getattr(e, 'code', None) if error_code in ERRORS_TO_RETRY: LOG.warning('Occasional error occurred during "{method}" ' 'execution: {error_msg} ({error_code}). ' 'Operation will be retried.'.format( method=method.__name__, error_msg=e, error_code=error_code)) attempts -= 1 retry_after = getattr(e, 'retry_after', 0) context.sleep(max(retry_after, CONF.retries.retry_after)) else: LOG.debug('Permanent error occurred during "{method}" ' 'execution: {error_msg}.'.format( method=method.__name__, error_msg=e)) raise e else: attempts = CONF.retries.retries_number raise ex.MaxRetriesExceeded(attempts, method.__name__) sahara-12.0.0/sahara/utils/openstack/neutron.py0000664000175000017500000000744113656752032021524 0ustar zuulzuul00000000000000# Copyright (c) 2013 Hortonworks, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from neutronclient.common import exceptions as n_ex from neutronclient.neutron import client as neutron_cli from oslo_config import cfg from oslo_log import log as logging from sahara import exceptions as ex from sahara.i18n import _ from sahara.service import sessions from sahara.utils.openstack import base from sahara.utils.openstack import keystone opts = [ cfg.BoolOpt('api_insecure', default=False, help='Allow to perform insecure SSL requests to neutron.'), cfg.StrOpt('ca_file', help='Location of ca certificates file to use for neutron ' 'client requests.'), cfg.StrOpt("endpoint_type", default="internalURL", help="Endpoint type for neutron client requests") ] neutron_group = cfg.OptGroup(name='neutron', title='Neutron client options') CONF = cfg.CONF CONF.register_group(neutron_group) CONF.register_opts(opts, group=neutron_group) LOG = logging.getLogger(__name__) def client(auth=None): if not auth: auth = keystone.auth() session = sessions.cache().get_session(sessions.SESSION_TYPE_NEUTRON) neutron = neutron_cli.Client('2.0', session=session, auth=auth, endpoint_type=CONF.neutron.endpoint_type, region_name=CONF.os_region_name) return neutron class NeutronClient(object): neutron = None routers = {} def __init__(self, network, token, tenant_name, auth=None): if not auth: auth = keystone.token_auth(token=token, project_name=tenant_name) self.neutron = client(auth) self.network = network def get_router(self): matching_router = NeutronClient.routers.get(self.network, None) if matching_router: LOG.debug('Returning cached qrouter') return matching_router['id'] routers = self.neutron.list_routers()['routers'] for router in routers: device_id = router['id'] ports = base.execute_with_retries( self.neutron.list_ports, device_id=device_id)['ports'] port = next((port for port in ports if port['network_id'] == self.network), None) if port: matching_router = router 
NeutronClient.routers[self.network] = matching_router break if not matching_router: raise ex.SystemError(_('Neutron router corresponding to network ' '%s is not found') % self.network) return matching_router['id'] def get_private_network_cidrs(cluster): neutron_client = client() private_net = base.execute_with_retries(neutron_client.show_network, cluster.neutron_management_network) cidrs = [] for subnet_id in private_net['network']['subnets']: subnet = base.execute_with_retries( neutron_client.show_subnet, subnet_id) cidrs.append(subnet['subnet']['cidr']) return cidrs def get_network(id): try: return base.execute_with_retries( client().find_resource_by_id, 'network', id) except n_ex.NotFound: return None sahara-12.0.0/sahara/cli/0000775000175000017500000000000013656752227015100 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/cli/__init__.py0000664000175000017500000000000013656752032017171 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/cli/sahara_all.py0000664000175000017500000000373113656752032017537 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from sahara.utils import patches patches.patch_all() import os import sys from oslo_log import log LOG = log.getLogger(__name__) # If ../sahara/__init__.py exists, add ../ to Python search path, so that # it will override what happens to be installed in /usr/(local/)lib/python... possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]), os.pardir, os.pardir)) if os.path.exists(os.path.join(possible_topdir, 'sahara', '__init__.py')): sys.path.insert(0, possible_topdir) import sahara.main as server def main(): server.setup_common(possible_topdir, 'all-in-one') app = server.make_app() server.setup_sahara_api('all-in-one') server.setup_sahara_engine() server.setup_auth_policy() launcher = server.get_process_launcher() LOG.warning(""" __ __ _ \ \ / /_ _ _ __ _ __ (_)_ __ __ _ \ \ /\ / / _` | '__| '_ \| | '_ \ / _` | \ V V / (_| | | | | | | | | | | (_| | \_/\_/ \__,_|_| |_| |_|_|_| |_|\__, | |___/ Using the sahara-all entry point is now deprecated. Please use the sahara-api and sahara-engine entry points instead. """) server.launch_api_service( launcher, server.SaharaWSGIService("sahara-all", app)) sahara-12.0.0/sahara/cli/sahara_status.py0000664000175000017500000000277613656752032020322 0ustar zuulzuul00000000000000# Copyright (c) 2018 NEC, Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
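# --- Illustrative example (editor's addition, not part of the Sahara source tree) ---
# The neutron and cinder helpers above route client calls through
# execute_with_retries() from sahara/utils/openstack/base.py, which retries
# transient HTTP errors (408, 413, 429 and 5xx) according to the [retries]
# options.  A minimal sketch of wrapping an arbitrary client call the same
# way; `_list_ports_with_retries` is a hypothetical helper.
def _list_ports_with_retries(neutron_client, network_id):
    from sahara.utils.openstack import base

    # Positional and keyword arguments are passed straight through to the
    # wrapped call on every attempt.
    return base.execute_with_retries(
        neutron_client.list_ports, network_id=network_id)['ports']
# --- End of illustrative example ---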
import sys from oslo_config import cfg from oslo_upgradecheck import upgradecheck from sahara.i18n import _ CONF = cfg.CONF class Checks(upgradecheck.UpgradeCommands): """Contains upgrade checks Various upgrade checks should be added as separate methods in this class and added to _upgrade_checks tuple. """ def _sample_check(self): """This is sample check added to test the upgrade check framework It needs to be removed after adding any real upgrade check """ return upgradecheck.Result(upgradecheck.Code.SUCCESS, 'Sample detail') _upgrade_checks = ( # Sample check added for now. # Whereas in future real checks must be added here in tuple (_('Sample Check'), _sample_check), ) def main(): return upgradecheck.main( CONF, project='sahara', upgrade_command=Checks()) if __name__ == '__main__': sys.exit(main()) sahara-12.0.0/sahara/cli/sahara_engine.py0000664000175000017500000000306013656752032020227 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from sahara.utils import patches patches.patch_all() import os import sys # If ../sahara/__init__.py exists, add ../ to Python search path, so that # it will override what happens to be installed in /usr/(local/)lib/python... possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]), os.pardir, os.pardir)) if os.path.exists(os.path.join(possible_topdir, 'sahara', '__init__.py')): sys.path.insert(0, possible_topdir) import sahara.main as server from sahara.service import ops def main(): server.setup_common(possible_topdir, 'engine') server.setup_sahara_engine() server.setup_sahara_api('distributed') ops_server = ops.OpsServer() launcher = server.get_process_launcher() service = ops_server.get_service() launcher.launch_service(service) service.start() launcher.wait() sahara-12.0.0/sahara/cli/image_pack/0000775000175000017500000000000013656752227017160 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/cli/image_pack/__init__.py0000664000175000017500000000000013656752032021251 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/cli/image_pack/api.py0000664000175000017500000001015613656752032020300 0ustar zuulzuul00000000000000# Copyright 2015 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from sahara import conductor # noqa from sahara.plugins import base as plugins_base from sahara.utils import remote try: import guestfs except ImportError: raise Exception("The image packing API depends on the system package " "python-libguestfs (and libguestfs itself.) 
Please " "install these packages to proceed.") LOG = None CONF = None # This is broken out to support testability def set_logger(log): global LOG LOG = log # This is broken out to support testability def set_conf(conf): global CONF CONF = conf # This is a local exception class that is used to exit routines # in cases where error information has already been logged. # It is caught and suppressed everywhere it is used. class Handled(Exception): pass class Context(object): '''Create a pseudo Context object Since this tool does not use the REST interface, we do not have a request from which to build a Context. ''' def __init__(self, is_admin=False, tenant_id=None): self.is_admin = is_admin self.tenant_id = tenant_id class ImageRemote(remote.TerminalOnlyRemote): def __init__(self, image_path, root_drive): guest = guestfs.GuestFS(python_return_dict=True) guest.add_drive_opts(image_path, format="qcow2") guest.set_network(True) self.guest = guest self.root_drive = root_drive def __enter__(self): self.guest.launch() if not self.root_drive: self.root_drive = self.guest.inspect_os()[0] self.guest.mount(self.root_drive, '/') try: cmd = "echo Testing sudo without tty..." self.execute_command(cmd, run_as_root=True) except RuntimeError: cmd = "sed -i 's/requiretty/!requiretty/' /etc/sudoers" self.guest.execute_command(cmd) return self def __exit__(self, exc_type, exc_value, traceback): self.guest.sync() self.guest.umount_all() self.guest.close() def execute_command(self, cmd, run_as_root=False, get_stderr=False, raise_when_error=True, timeout=300): try: LOG.info("Issuing command: {cmd}".format(cmd=cmd)) stdout = self.guest.sh(cmd) LOG.info("Received response: {stdout}".format(stdout=stdout)) return 0, stdout except RuntimeError as ex: if raise_when_error: raise else: return 1, ex.message def get_os_distrib(self): return self.guest.inspect_get_distro(self.root_drive) def write_file_to(self, path, script, run_as_root): LOG.info("Writing script to : {path}".format(path=path)) stdout = self.guest.write(path, script) return 0, stdout def setup_plugins(): plugins_base.setup_plugins() def get_loaded_plugins(): return plugins_base.PLUGINS.plugins def get_plugin_arguments(plugin_name): """Gets plugin arguments, as a dict of version to argument list.""" plugin = plugins_base.PLUGINS.get_plugin(plugin_name) versions = plugin.get_versions() return {version: plugin.get_image_arguments(version) for version in versions} def pack_image(image_path, plugin_name, plugin_version, image_arguments, root_drive=None, test_only=False): with ImageRemote(image_path, root_drive) as image_remote: plugin = plugins_base.PLUGINS.get_plugin(plugin_name) plugin.pack_image(plugin_version, image_remote, test_only=test_only, image_arguments=image_arguments) sahara-12.0.0/sahara/cli/image_pack/cli.py0000664000175000017500000001036013656752032020273 0ustar zuulzuul00000000000000# Copyright 2015 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
import sys from oslo_config import cfg from oslo_log import log import six from sahara.cli.image_pack import api from sahara.i18n import _ LOG = log.getLogger(__name__) CONF = cfg.CONF CONF.register_cli_opts([ cfg.StrOpt( 'image', required=True, help=_("The path to an image to modify. This image will be modified " "in-place: be sure to target a copy if you wish to maintain a " "clean master image.")), cfg.StrOpt( 'root-filesystem', dest='root_fs', required=False, help=_("The filesystem to mount as the root volume on the image. No " "value is required if only one filesystem is detected.")), cfg.BoolOpt( 'test-only', dest='test_only', default=False, help=_("If this flag is set, no changes will be made to the image; " "instead, the script will fail if discrepancies are found " "between the image and the intended state."))]) def unregister_extra_cli_opt(name): try: for cli in CONF._cli_opts: if cli['opt'].name == name: CONF.unregister_opt(cli['opt']) except Exception: pass for extra_opt in ["log-exchange", "host", "port"]: unregister_extra_cli_opt(extra_opt) def add_plugin_parsers(subparsers): api.setup_plugins() for plugin in api.get_loaded_plugins(): args_by_version = api.get_plugin_arguments(plugin) if all(args is NotImplemented for version, args in six.iteritems(args_by_version)): continue plugin_parser = subparsers.add_parser( plugin, help=_('Image generation for the {plugin} plugin').format( plugin=plugin)) version_parsers = plugin_parser.add_subparsers( title=_("Plugin version"), dest="version", help=_("Available versions")) for version, args in six.iteritems(args_by_version): if args is NotImplemented: continue version_parser = version_parsers.add_parser( version, help=_('{plugin} version {version}').format( plugin=plugin, version=version)) for arg in args: arg_token = ("--%s" % arg.name if len(arg.name) > 1 else "-%s" % arg.name) version_parser.add_argument(arg_token, dest=arg.name, help=arg.description, default=arg.default, required=arg.required, choices=arg.choices) version_parser.set_defaults(args={arg.name for arg in args}) command_opt = cfg.SubCommandOpt('plugin', title=_('Plugin'), help=_('Available plugins'), handler=add_plugin_parsers) CONF.register_cli_opt(command_opt) def main(): CONF(project='sahara') CONF.reload_config_files() log.setup(CONF, "sahara") LOG.info("Command: {command}".format(command=' '.join(sys.argv))) api.set_logger(LOG) api.set_conf(CONF) plugin = CONF.plugin.name version = CONF.plugin.version args = CONF.plugin.args image_arguments = {arg: getattr(CONF.plugin, arg) for arg in args} api.pack_image(CONF.image, plugin, version, image_arguments, CONF.root_fs, CONF.test_only) LOG.info("Finished packing image for {plugin} at version " "{version}".format(plugin=plugin, version=version)) sahara-12.0.0/sahara/cli/sahara_subprocess.py0000664000175000017500000000420013656752032021147 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
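# --- Illustrative example (editor's addition, not part of the Sahara source tree) ---
# The loop implemented just below reads a pickled (function, args, kwargs)
# triple from stdin and writes back a pickled dict holding either 'output' or
# 'exception'/'traceback'.  A simplified parent-side sketch of that protocol;
# the command name '_example-sahara-subprocess' and the helper _run_in_child()
# are assumptions for illustration, and `func` must be an importable
# module-level function for pickling to work.
def _run_in_child(func, *args, **kwargs):
    import pickle
    import subprocess  # nosec

    proc = subprocess.Popen(['_example-sahara-subprocess'],  # nosec
                            stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    try:
        for obj in (func, args, kwargs):
            pickle.dump(obj, proc.stdin, protocol=2)
        proc.stdin.flush()
        result = pickle.load(proc.stdout)  # nosec
        if 'exception' in result:
            raise RuntimeError(result['exception'])
        return result['output']
    finally:
        proc.kill()
# --- End of illustrative example ---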
import _io import pickle # nosec import sys import traceback from oslo_utils import reflection def main(): # NOTE(dmitryme): since we do not read stderr in the main process, # we need to flush it somewhere, otherwise both processes might # hang because of i/o buffer overflow. with open('/dev/null', 'w') as sys.stderr: while True: result = dict() try: # TODO(elmiko) these pickle usages should be # reinvestigated to determine a more secure manner to # deploy remote commands. if isinstance(sys.stdin, _io.TextIOWrapper): func = pickle.load(sys.stdin.buffer) # nosec args = pickle.load(sys.stdin.buffer) # nosec kwargs = pickle.load(sys.stdin.buffer) # nosec else: func = pickle.load(sys.stdin) # nosec args = pickle.load(sys.stdin) # nosec kwargs = pickle.load(sys.stdin) # nosec result['output'] = func(*args, **kwargs) except BaseException as e: cls_name = reflection.get_class_name(e, fully_qualified=False) result['exception'] = cls_name + ': ' + str(e) result['traceback'] = traceback.format_exc() if isinstance(sys.stdin, _io.TextIOWrapper): pickle.dump(result, sys.stdout.buffer, protocol=2) # nosec else: pickle.dump(result, sys.stdout, protocol=2) # nosec sys.stdout.flush() sahara-12.0.0/sahara/cli/sahara_api.py0000664000175000017500000000301113656752032017527 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import os import sys # If ../sahara/__init__.py exists, add ../ to Python search path, so that # it will override what happens to be installed in /usr/(local/)lib/python... possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]), os.pardir, os.pardir)) if os.path.exists(os.path.join(possible_topdir, 'sahara', '__init__.py')): sys.path.insert(0, possible_topdir) import sahara.main as server def setup_api(): server.setup_common(possible_topdir, 'API') app = server.make_app() server.setup_sahara_api('distributed') server.setup_auth_policy() return app def main(): app = setup_api() launcher = server.get_process_launcher() api_service = server.SaharaWSGIService("sahara-api", app) server.launch_api_service(launcher, api_service) sahara-12.0.0/sahara/common/0000775000175000017500000000000013656752227015621 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/common/__init__.py0000664000175000017500000000000013656752032017712 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/common/policies/0000775000175000017500000000000013656752227017430 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/common/policies/job_binary_internals.py0000664000175000017500000000512613656752032024175 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. 
# See the License for the specific language governing permissions and # limitations under the License. from oslo_policy import policy from sahara.common.policies import base job_binary_internals_policies = [ policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB_BINARY_INTERNALS % 'get', check_str=base.UNPROTECTED, description='Show job binary internal details.', operations=[{ 'path': '/v1.1/{project_id}/job-binary-internals/{job_bin_int_id}', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB_BINARY_INTERNALS % 'get_all', check_str=base.UNPROTECTED, description='List job binary internals.', operations=[{'path': '/v1.1/{project_id}/job-binary-internals', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB_BINARY_INTERNALS % 'create', check_str=base.UNPROTECTED, description='Create job binary internals.', operations=[{'path': '/v1.1/{project_id}/job-binary-internals/{name}', 'method': 'PUT'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB_BINARY_INTERNALS % 'get_data', check_str=base.UNPROTECTED, description='Show job binary internal data.', operations=[{ 'path': '/v1.1/{project_id}/job-binary-internals/{job_bin_int_id}/data', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB_BINARY_INTERNALS % 'modify', check_str=base.UNPROTECTED, description='Update job binary internal.', operations=[{ 'path': '/v1.1/{project_id}/job-binary-internals/{job_bin_int_id}', 'method': 'PATCH'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB_BINARY_INTERNALS % 'delete', check_str=base.UNPROTECTED, description='Delete job binary internals.', operations=[{ 'path': '/v1.1/{project_id}/job-binary-internals/{job_bin_int_id}', 'method': 'DELETE'}]), ] def list_rules(): return job_binary_internals_policies sahara-12.0.0/sahara/common/policies/__init__.py0000664000175000017500000000450713656752032021541 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
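# --- Illustrative example (editor's addition, not part of the Sahara source tree) ---
# list_rules(), defined just below, chains the per-resource policy modules into
# a single iterable of DocumentedRuleDefault objects.  A tiny sketch that walks
# the result, for instance to dump the default policy; the helper name
# `_print_default_policies` is hypothetical.
def _print_default_policies():
    from sahara.common import policies

    for rule in policies.list_rules():
        # Each rule carries its name, default check string and description.
        print('%s  [%s]  %s' % (rule.name, rule.check_str, rule.description))
# --- End of illustrative example ---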
import itertools from sahara.common.policies import base from sahara.common.policies import cluster from sahara.common.policies import cluster_template from sahara.common.policies import cluster_templates from sahara.common.policies import clusters from sahara.common.policies import data_source from sahara.common.policies import data_sources from sahara.common.policies import image from sahara.common.policies import images from sahara.common.policies import job from sahara.common.policies import job_binaries from sahara.common.policies import job_binary from sahara.common.policies import job_binary_internals from sahara.common.policies import job_executions from sahara.common.policies import job_template from sahara.common.policies import job_type from sahara.common.policies import job_types from sahara.common.policies import jobs from sahara.common.policies import node_group_template from sahara.common.policies import node_group_templates from sahara.common.policies import plugin from sahara.common.policies import plugins def list_rules(): return itertools.chain( base.list_rules(), clusters.list_rules(), cluster_templates.list_rules(), data_sources.list_rules(), images.list_rules(), job_binaries.list_rules(), job_binary_internals.list_rules(), job_executions.list_rules(), job_types.list_rules(), jobs.list_rules(), node_group_templates.list_rules(), plugins.list_rules(), cluster.list_rules(), cluster_template.list_rules(), data_source.list_rules(), image.list_rules(), job_binary.list_rules(), job_type.list_rules(), job.list_rules(), node_group_template.list_rules(), plugin.list_rules(), job_template.list_rules() ) sahara-12.0.0/sahara/common/policies/cluster.py0000664000175000017500000000432613656752032021462 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from oslo_policy import policy from sahara.common.policies import base clusters_policies = [ policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_CLUSTER % 'scale', check_str=base.UNPROTECTED, description='Scale cluster.', operations=[{'path': '/v2/clusters/{cluster_id}', 'method': 'PUT'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_CLUSTER % 'list', check_str=base.UNPROTECTED, description='List available clusters', operations=[{'path': '/v2/clusters', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_CLUSTER % 'create', check_str=base.UNPROTECTED, description='Create cluster.', operations=[{'path': '/v2/clusters', 'method': 'POST'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_CLUSTER % 'get', check_str=base.UNPROTECTED, description='Show details of a cluster.', operations=[{'path': '/v2/clusters/{cluster_id}', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_CLUSTER % 'update', check_str=base.UNPROTECTED, description='Updates a cluster.', operations=[{'path': '/v2/clusters/{cluster_id}', 'method': 'PATCH'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_CLUSTER % 'delete', check_str=base.UNPROTECTED, description='Delete a cluster.', operations=[{'path': '/v2/clusters/{cluster_id}', 'method': 'DELETE'}]), ] def list_rules(): return clusters_policies sahara-12.0.0/sahara/common/policies/data_source.py0000664000175000017500000000404013656752032022263 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_policy import policy from sahara.common.policies import base data_sources_policies = [ policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_DATA_SOURCE % 'list', check_str=base.UNPROTECTED, description='List data sources.', operations=[{'path': '/v2/data-sources', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_DATA_SOURCE % 'get', check_str=base.UNPROTECTED, description='Show data source details.', operations=[ {'path': '/v2/data-sources/{data_source_id}', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_DATA_SOURCE % 'register', check_str=base.UNPROTECTED, description='Create data source.', operations=[{'path': '/v2/data-sources', 'method': 'POST'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_DATA_SOURCE % 'update', check_str=base.UNPROTECTED, description='Update data source.', operations=[ {'path': '/v2/data-sources/{data_source_id}', 'method': 'PATCH'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_DATA_SOURCE % 'delete', check_str=base.UNPROTECTED, description='Delete data source.', operations=[ {'path': '/v2/data-sources/{data_source_id}', 'method': 'DELETE'}]), ] def list_rules(): return data_sources_policies sahara-12.0.0/sahara/common/policies/images.py0000664000175000017500000000443013656752032021242 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_policy import policy from sahara.common.policies import base images_policies = [ policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_IMAGES % 'add_tags', check_str=base.UNPROTECTED, description='Add tags to image.', operations=[{'path': '/v1.1/{project_id}/images/{image_id}/tag', 'method': 'POST'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_IMAGES % 'register', check_str=base.UNPROTECTED, description='Register image.', operations=[{'path': '/v1.1/{project_id}/images/{image_id}', 'method': 'POST'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_IMAGES % 'get_all', check_str=base.UNPROTECTED, description='List images.', operations=[{'path': '/v1.1/{project_id}/images', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_IMAGES % 'unregister', check_str=base.UNPROTECTED, description='Unregister image.', operations=[{'path': '/v1.1/{project_id}/images/{image_id}', 'method': 'POST'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_IMAGES % 'get', check_str=base.UNPROTECTED, description='Show image details.', operations=[{'path': '/v1.1/{project_id}/images/{image_id}', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_IMAGES % 'remove_tags', check_str=base.UNPROTECTED, description='Remove tags from image.', operations=[{'path': '/v1.1/{project_id}/images/{image_id}/untag', 'method': 'POST'}]), ] def list_rules(): return images_policies sahara-12.0.0/sahara/common/policies/job.py0000664000175000017500000000340313656752032020546 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from oslo_policy import policy from sahara.common.policies import base job_policies = [ policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB % 'execute', check_str=base.UNPROTECTED, description='Run job.', operations=[{'path': '/v2/jobs', 'method': 'POST'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB % 'get', check_str=base.UNPROTECTED, description='Show jobs details.', operations=[{'path': '/v2/jobs/{job_id}', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB % 'update', check_str=base.UNPROTECTED, description='Update job.', operations=[{'path': '/v2/jobs/{job_id}', 'method': 'PATCH'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB % 'list', check_str=base.UNPROTECTED, description='List jobs.', operations=[{'path': '/v2/jobs', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB % 'delete', check_str=base.UNPROTECTED, description='Delete job.', operations=[{'path': '/v2/jobs/{job_id}', 'method': 'DELETE'}]), ] def list_rules(): return job_policies sahara-12.0.0/sahara/common/policies/job_binary.py0000664000175000017500000000445313656752032022120 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_policy import policy from sahara.common.policies import base job_binaries_policies = [ policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB_BINARY % 'list', check_str=base.UNPROTECTED, description='List job binaries.', operations=[{'path': '/v2/job-binaries', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB_BINARY % 'create', check_str=base.UNPROTECTED, description='Create job binary.', operations=[{'path': '/v2/job-binaries', 'method': 'POST'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB_BINARY % 'get-data', check_str=base.UNPROTECTED, description='Show job binary data.', operations=[ {'path': '/v2/job-binaries/{job_binary_id}/data', 'method': 'POST'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB_BINARY % 'update', check_str=base.UNPROTECTED, description='Update job binary.', operations=[ {'path': '/v2/job-binaries/{job_binary_id}', 'method': 'PATCH'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB_BINARY % 'get', check_str=base.UNPROTECTED, description='Show job binary details.', operations=[{'path': '/v2/job-binaries/{job_binary_id}', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB_BINARY % 'delete', check_str=base.UNPROTECTED, description='Delete job binary.', operations=[{'path': '/v2/job-binaries/{job_binary_id}', 'method': 'DELETE'}]), ] def list_rules(): return job_binaries_policies sahara-12.0.0/sahara/common/policies/job_types.py0000664000175000017500000000170313656752032021773 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_policy import policy from sahara.common.policies import base job_types_policies = [ policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB_TYPES % 'get_all', check_str=base.UNPROTECTED, description='List job types.', operations=[{'path': '/v1.1/{project_id}/job-types', 'method': 'GET'}]), ] def list_rules(): return job_types_policies sahara-12.0.0/sahara/common/policies/jobs.py0000664000175000017500000000473613656752032020743 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_policy import policy from sahara.common.policies import base jobs_policies = [ policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOBS % 'execute', check_str=base.UNPROTECTED, description='Run job.', operations=[{'path': '/v1.1/{project_id}/jobs/{job_id}/execute', 'method': 'POST'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOBS % 'get', check_str=base.UNPROTECTED, description='Show job details.', operations=[{'path': '/v1.1/{project_id}/jobs/{job_id}', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOBS % 'create', check_str=base.UNPROTECTED, description='Create job.', operations=[{'path': '/v1.1/{project_id}/jobs', 'method': 'POST'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOBS % 'get_all', check_str=base.UNPROTECTED, description='List jobs.', operations=[{'path': '/v1.1/{project_id}/jobs', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOBS % 'modify', check_str=base.UNPROTECTED, description='Update job object.', operations=[{'path': '/v1.1/{project_id}/jobs/{job_id}', 'method': 'PATCH'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOBS % 'get_config_hints', check_str=base.UNPROTECTED, description='Get job config hints.', operations=[ {'path': '/v1.1/{project_id}/jobs/get_config_hints/{job_type}', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOBS % 'delete', check_str=base.UNPROTECTED, description='Remove job.', operations=[{'path': '/v1.1/{project_id}/jobs/{job_id}', 'method': 'DELETE'}]), ] def list_rules(): return jobs_policies sahara-12.0.0/sahara/common/policies/job_executions.py0000664000175000017500000000477713656752032023033 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_policy import policy from sahara.common.policies import base job_executions_policies = [ policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB_EXECUTIONS % 'get', check_str=base.UNPROTECTED, description='Show job executions details.', operations=[{'path': '/v1.1/{project_id}/job-executions/{job_exec_id}', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB_EXECUTIONS % 'modify', check_str=base.UNPROTECTED, description='Update job execution.', operations=[{'path': '/v1.1/{project_id}/job-executions/{job_exec_id}', 'method': 'PATCH'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB_EXECUTIONS % 'get_all', check_str=base.UNPROTECTED, description='List job executions.', operations=[{'path': '/v1.1/{project_id}/job-executions', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB_EXECUTIONS % 'refresh_status', check_str=base.UNPROTECTED, description='Refresh job execution status.', operations=[ {'path': '/v1.1/{project_id}/job-executions/{job_exec_id}/refresh-status', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB_EXECUTIONS % 'cancel', check_str=base.UNPROTECTED, description='Cancel job execution.', operations=[{'path': '/v1.1/{project_id}/job-executions/{job_exec_id}/cancel', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB_EXECUTIONS % 'delete', check_str=base.UNPROTECTED, description='Delete job execution.', operations=[{'path': '/v1.1/{project_id}/job-executions/{job_exec_id}', 'method': 'DELETE'}]), ] def list_rules(): return job_executions_policies sahara-12.0.0/sahara/common/policies/node_group_template.py0000664000175000017500000000426213656752032024034 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from oslo_policy import policy from sahara.common.policies import base node_group_templates_policies = [ policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_NODE_GROUP_TEMPLATE % 'list', check_str=base.UNPROTECTED, description='List node group templates.', operations=[{'path': '/v2/node-group-templates', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_NODE_GROUP_TEMPLATE % 'create', check_str=base.UNPROTECTED, description='Create node group template.', operations=[{'path': '/v2/node-group-templates', 'method': 'POST'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_NODE_GROUP_TEMPLATE % 'get', check_str=base.UNPROTECTED, description='Show node group template details.', operations=[ {'path': '/v2/node-group-templates/{node_group_temp_id}', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_NODE_GROUP_TEMPLATE % 'update', check_str=base.UNPROTECTED, description='Update node group template.', operations=[ {'path': '/v2/node-group-templates/{node_group_temp_id}', 'method': 'PATCH'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_NODE_GROUP_TEMPLATE % 'delete', check_str=base.UNPROTECTED, description='Delete node group template.', operations=[ {'path': '/v2/node-group-templates/{node_group_temp_id}', 'method': 'DELETE'}]), ] def list_rules(): return node_group_templates_policies sahara-12.0.0/sahara/common/policies/cluster_template.py0000664000175000017500000000417013656752032023352 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from oslo_policy import policy from sahara.common.policies import base cluster_templates_policies = [ policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_CLUSTER_TEMPLATE % 'create', check_str=base.UNPROTECTED, description='Create cluster template.', operations=[{'path': '/v2/cluster-templates', 'method': 'POST'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_CLUSTER_TEMPLATE % 'delete', check_str=base.UNPROTECTED, description='Delete a cluster template.', operations=[ {'path': '/v2/cluster-templates/{cluster_temp_id}', 'method': 'DELETE'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_CLUSTER_TEMPLATE % 'update', check_str=base.UNPROTECTED, description='Update cluster template.', operations=[ {'path': '/v2/cluster-templates/{cluster_temp_id}', 'method': 'PATCH'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_CLUSTER_TEMPLATE % 'get', check_str=base.UNPROTECTED, description='Show cluster template details.', operations=[ {'path': '/v2/cluster-templates/{cluster_temp_id}', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_CLUSTER_TEMPLATE % 'list', check_str=base.UNPROTECTED, description='List cluster templates.', operations=[{'path': '/v2/cluster-templates', 'method': 'GET'}]), ] def list_rules(): return cluster_templates_policies sahara-12.0.0/sahara/common/policies/image.py0000664000175000017500000000470613656752032021065 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from oslo_policy import policy from sahara.common.policies import base images_policies = [ policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_IMAGE % 'set-tags', check_str=base.UNPROTECTED, description='Add tags to image.', operations=[{'path': '/v2/images/{image_id}/tags', 'method': 'PUT'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_IMAGE % 'register', check_str=base.UNPROTECTED, description='Register image.', operations=[{'path': '/v2/images/{image_id}', 'method': 'POST'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_IMAGE % 'list', check_str=base.UNPROTECTED, description='List images.', operations=[{'path': '/v2/images', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_IMAGE % 'unregister', check_str=base.UNPROTECTED, description='Unregister image.', operations=[{'path': '/v2/images/{image_id}', 'method': 'DELETE'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_IMAGE % 'get', check_str=base.UNPROTECTED, description='Show image details.', operations=[{'path': '/v2/images/{image_id}', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_IMAGE % 'remove-tags', check_str=base.UNPROTECTED, description='Remove tags from image.', operations=[{'path': '/v2/images/{image_id}/tags', 'method': 'DELETE'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_IMAGE % 'get-tags', check_str=base.UNPROTECTED, description='List tags on an image.', operations=[{'path': '/v2/images/{image_id}/tags', 'method': 'GET'}]), ] def list_rules(): return images_policies sahara-12.0.0/sahara/common/policies/node_group_templates.py0000664000175000017500000000445213656752032024220 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from oslo_policy import policy from sahara.common.policies import base node_group_templates_policies = [ policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_NODE_GROUP_TEMPLATES % 'get_all', check_str=base.UNPROTECTED, description='List node group templates.', operations=[{'path': '/v1.1/{project_id}/node-group-templates', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_NODE_GROUP_TEMPLATES % 'create', check_str=base.UNPROTECTED, description='Create node group template.', operations=[{'path': '/v1.1/{project_id}/node-group-templates', 'method': 'POST'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_NODE_GROUP_TEMPLATES % 'get', check_str=base.UNPROTECTED, description='Show node group template details.', operations=[ {'path': '/v1.1/{project_id}/node-group-templates/{node_group_temp_id}', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_NODE_GROUP_TEMPLATES % 'modify', check_str=base.UNPROTECTED, description='Update node group template.', operations=[ {'path': '/v1.1/{project_id}/node-group-templates/{node_group_temp_id}', 'method': 'PUT'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_NODE_GROUP_TEMPLATES % 'delete', check_str=base.UNPROTECTED, description='Delete node group template.', operations=[ {'path': '/v1.1/{project_id}/node-group-templates/{node_group_temp_id}', 'method': 'DELETE'}]), ] def list_rules(): return node_group_templates_policies sahara-12.0.0/sahara/common/policies/clusters.py0000664000175000017500000000447013656752032021645 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from oslo_policy import policy from sahara.common.policies import base clusters_policies = [ policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_CLUSTERS % 'scale', check_str=base.UNPROTECTED, description='Scale cluster.', operations=[{'path': '/v1.1/{project_id}/clusters/{cluster_id}', 'method': 'PUT'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_CLUSTERS % 'get_all', check_str=base.UNPROTECTED, description='List available clusters', operations=[{'path': '/v1.1/{project_id}/clusters', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_CLUSTERS % 'create', check_str=base.UNPROTECTED, description='Create cluster.', operations=[{'path': '/v1.1/{project_id}/clusters', 'method': 'POST'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_CLUSTERS % 'get', check_str=base.UNPROTECTED, description='Show details of a cluster.', operations=[{'path': '/v1.1/{project_id}/clusters/{cluster_id}', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_CLUSTERS % 'modify', check_str=base.UNPROTECTED, description='Modify a cluster.', operations=[{'path': '/v1.1/{project_id}/clusters/{cluster_id}', 'method': 'PATCH'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_CLUSTERS % 'delete', check_str=base.UNPROTECTED, description='Delete a cluster.', operations=[{'path': '/v1.1/{project_id}/clusters/{cluster_id}', 'method': 'DELETE'}]), ] def list_rules(): return clusters_policies sahara-12.0.0/sahara/common/policies/cluster_templates.py0000664000175000017500000000431113656752032023532 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
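# --- Illustrative example (editor's addition, not part of the Sahara source tree) ---
# The DocumentedRuleDefault entries above only declare defaults; at request
# time they are checked through an oslo.policy Enforcer.  A minimal,
# self-contained sketch of such a check, which assumes the cluster rules
# expand to names like 'data-processing:clusters:get' and bypasses Sahara's
# own policy helper; `_can_show_cluster` is hypothetical.
def _can_show_cluster(user_credentials, target):
    from oslo_config import cfg
    from oslo_policy import policy as oslo_policy

    from sahara.common import policies

    enforcer = oslo_policy.Enforcer(cfg.CONF)
    enforcer.register_defaults(policies.list_rules())
    # `user_credentials` is the credentials dict built from the request
    # context (user_id, project_id, roles, ...); `target` describes the
    # object the request acts on.
    return enforcer.enforce('data-processing:clusters:get', target,
                            user_credentials, do_raise=False)
# --- End of illustrative example ---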
from oslo_policy import policy from sahara.common.policies import base cluster_templates_policies = [ policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_CLUSTER_TEMPLATES % 'create', check_str=base.UNPROTECTED, description='Create cluster template.', operations=[{'path': '/v1.1/{project_id}/cluster-templates', 'method': 'POST'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_CLUSTER_TEMPLATES % 'delete', check_str=base.UNPROTECTED, description='Delete a cluster template.', operations=[ {'path': '/v1.1/{project_id}/cluster-templates/{cluster_temp_id}', 'method': 'DELETE'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_CLUSTER_TEMPLATES % 'modify', check_str=base.UNPROTECTED, description='Update cluster template.', operations=[ {'path': '/v1.1/{project_id}/cluster-templates/{cluster_temp_id}', 'method': 'PUT'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_CLUSTER_TEMPLATES % 'get', check_str=base.UNPROTECTED, description='Show cluster template details.', operations=[ {'path': '/v1.1/{project_id}/cluster-templates/{cluster_temp_id}', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_CLUSTER_TEMPLATES % 'get_all', check_str=base.UNPROTECTED, description='List cluster templates.', operations=[{'path': '/v1.1/{project_id}/cluster-templates', 'method': 'GET'}]), ] def list_rules(): return cluster_templates_policies sahara-12.0.0/sahara/common/policies/plugin.py0000664000175000017500000000334113656752032021273 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_policy import policy from sahara.common.policies import base plugins_policies = [ policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_PLUGIN % 'list', check_str=base.UNPROTECTED, description='List plugins.', operations=[{'path': '/v2/plugins', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_PLUGIN % 'get-version', check_str=base.UNPROTECTED, description='Show plugins version details.', operations=[ {'path': '/v2/plugins/{plugin_name}/{version}', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_PLUGIN % 'get', check_str=base.UNPROTECTED, description='Show plugin details.', operations=[{'path': '/v2/plugins/{plugin_name}', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_PLUGIN % 'update', check_str=base.ROLE_ADMIN, description='Update plugin details.', operations=[{'path': '/v2/plugins/{plugin_name}', 'method': 'PATCH'}]), ] def list_rules(): return plugins_policies sahara-12.0.0/sahara/common/policies/plugins.py0000664000175000017500000000422513656752032021460 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_policy import policy from sahara.common.policies import base plugins_policies = [ policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_PLUGINS % 'get_all', check_str=base.UNPROTECTED, description='List plugins.', operations=[{'path': '/v1.1/{project_id}/plugins', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_PLUGINS % 'get_version', check_str=base.UNPROTECTED, description='Show plugins version details.', operations=[ {'path': '/v1.1/{project_id}/plugins/{plugin_name}/{version}', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_PLUGINS % 'get', check_str=base.UNPROTECTED, description='Show plugin details.', operations=[{'path': '/v1.1/{project_id}/plugins/{plugin_name}', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_PLUGINS % 'convert_config', check_str=base.UNPROTECTED, description='Convert plugins to cluster template', operations=[ {'path': ('/v1.1/{project_id}/plugins/{plugin_name}/' '{version}/convert-config/{name}'), 'method': 'POST'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_PLUGINS % 'patch', check_str=base.ROLE_ADMIN, description='Update plugin details.', operations=[{'path': '/v1.1/{project_id}/plugins/{plugin_name}', 'method': 'PATCH'}]), ] def list_rules(): return plugins_policies sahara-12.0.0/sahara/common/policies/job_binaries.py0000664000175000017500000000462213656752032022426 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from oslo_policy import policy from sahara.common.policies import base job_binaries_policies = [ policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB_BINARIES % 'get_all', check_str=base.UNPROTECTED, description='List job binaries.', operations=[{'path': '/v1.1/{project_id}/job-binaries', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB_BINARIES % 'create', check_str=base.UNPROTECTED, description='Create job binary.', operations=[{'path': '/v1.1/{project_id}/job-binaries', 'method': 'POST'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB_BINARIES % 'get_data', check_str=base.UNPROTECTED, description='Show job binary data.', operations=[ {'path': '/v1.1/{project_id}/job-binaries/{job-binary_id}/data', 'method': 'POST'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB_BINARIES % 'modify', check_str=base.UNPROTECTED, description='Update job binary.', operations=[ {'path': '/v1.1/{project_id}/job-binaries/{job-binary_id}', 'method': 'PUT'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB_BINARIES % 'get', check_str=base.UNPROTECTED, description='Show job binary details.', operations=[{'path': '/v1.1/{project_id}/job-binaries/{job_binary_id}', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB_BINARIES % 'delete', check_str=base.UNPROTECTED, description='Delete job binary.', operations=[{'path': '/v1.1/{project_id}/job-binaries/{job_binary_id}', 'method': 'DELETE'}]), ] def list_rules(): return job_binaries_policies sahara-12.0.0/sahara/common/policies/data_sources.py0000664000175000017500000000416113656752032022452 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from oslo_policy import policy from sahara.common.policies import base data_sources_policies = [ policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_DATA_SOURCES % 'get_all', check_str=base.UNPROTECTED, description='List data sources.', operations=[{'path': '/v1.1/{project_id}/data-sources', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_DATA_SOURCES % 'get', check_str=base.UNPROTECTED, description='Show data source details.', operations=[ {'path': '/v1.1/{project_id}/data-sources/{data_source_id}', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_DATA_SOURCES % 'register', check_str=base.UNPROTECTED, description='Create data source.', operations=[{'path': '/v1.1/{project_id}/data-sources', 'method': 'POST'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_DATA_SOURCES % 'modify', check_str=base.UNPROTECTED, description='Update data source.', operations=[ {'path': '/v1.1/{project_id}/data-sources/{data_source_id}', 'method': 'PUT'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_DATA_SOURCES % 'delete', check_str=base.UNPROTECTED, description='Delete data source.', operations=[ {'path': '/v1.1/{project_id}/data-sources/{data_source_id}', 'method': 'DELETE'}]), ] def list_rules(): return data_sources_policies sahara-12.0.0/sahara/common/policies/base.py0000664000175000017500000000417713656752032020717 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_policy import policy DATA_PROCESSING = 'data-processing:%s' DATA_PROCESSING_CLUSTERS = DATA_PROCESSING % 'clusters:%s' DATA_PROCESSING_CLUSTER_TEMPLATES = DATA_PROCESSING % 'cluster-templates:%s' DATA_PROCESSING_DATA_SOURCES = DATA_PROCESSING % 'data-sources:%s' DATA_PROCESSING_IMAGES = DATA_PROCESSING % 'images:%s' DATA_PROCESSING_JOB_BINARIES = DATA_PROCESSING % 'job-binaries:%s' DATA_PROCESSING_JOB_EXECUTIONS = DATA_PROCESSING % 'job-executions:%s' DATA_PROCESSING_JOB_TYPES = DATA_PROCESSING % 'job-types:%s' DATA_PROCESSING_JOBS = DATA_PROCESSING % 'jobs:%s' DATA_PROCESSING_PLUGINS = DATA_PROCESSING % 'plugins:%s' DATA_PROCESSING_NODE_GROUP_TEMPLATES = ( DATA_PROCESSING % 'node-group-templates:%s') DATA_PROCESSING_JOB_BINARY_INTERNALS = ( DATA_PROCESSING % 'job-binary-internals:%s') DATA_PROCESSING_CLUSTER = DATA_PROCESSING % 'cluster:%s' DATA_PROCESSING_CLUSTER_TEMPLATE = DATA_PROCESSING % 'cluster-template:%s' DATA_PROCESSING_DATA_SOURCE = DATA_PROCESSING % 'data-source:%s' DATA_PROCESSING_IMAGE = DATA_PROCESSING % 'image:%s' DATA_PROCESSING_JOB_BINARY = DATA_PROCESSING % 'job-binary:%s' DATA_PROCESSING_JOB_TEMPLATE = DATA_PROCESSING % 'job-template:%s' DATA_PROCESSING_JOB_TYPE = DATA_PROCESSING % 'job-type:%s' DATA_PROCESSING_JOB = DATA_PROCESSING % 'job:%s' DATA_PROCESSING_PLUGIN = DATA_PROCESSING % 'plugin:%s' DATA_PROCESSING_NODE_GROUP_TEMPLATE = ( DATA_PROCESSING % 'node-group-template:%s') UNPROTECTED = '' ROLE_ADMIN = 'role:admin' rules = [ policy.RuleDefault( name='context_is_admin', check_str=ROLE_ADMIN), ] def list_rules(): return rules sahara-12.0.0/sahara/common/policies/job_type.py0000664000175000017500000000166013656752032021612 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_policy import policy from sahara.common.policies import base job_types_policies = [ policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB_TYPE % 'list', check_str=base.UNPROTECTED, description='List job types.', operations=[{'path': '/v2/job-types', 'method': 'GET'}]), ] def list_rules(): return job_types_policies sahara-12.0.0/sahara/common/policies/job_template.py0000664000175000017500000000452113656752032022443 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from oslo_policy import policy from sahara.common.policies import base job_templates_policies = [ policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB_TEMPLATE % 'get', check_str=base.UNPROTECTED, description='Show job template details.', operations=[{'path': '/v2/job-templates/{job_temp_id}', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB_TEMPLATE % 'create', check_str=base.UNPROTECTED, description='Create job templates.', operations=[{'path': '/v2/job-templates', 'method': 'POST'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB_TEMPLATE % 'list', check_str=base.UNPROTECTED, description='List job templates.', operations=[{'path': '/v2/job-templates', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB_TEMPLATE % 'update', check_str=base.UNPROTECTED, description='Update job template.', operations=[{'path': '/v2/job-templates/{job_temp_id}', 'method': 'PATCH'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB_TEMPLATE % 'get-config-hints', check_str=base.UNPROTECTED, description='Get job template config hints.', operations=[ {'path': '/v2/job-templates/config-hints/{job_type}', 'method': 'GET'}]), policy.DocumentedRuleDefault( name=base.DATA_PROCESSING_JOB_TEMPLATE % 'delete', check_str=base.UNPROTECTED, description='Remove job template.', operations=[{'path': '/v2/job-templates/{job_temp_id}', 'method': 'DELETE'}]), ] def list_rules(): return job_templates_policies sahara-12.0.0/sahara/common/config.py0000664000175000017500000000300313656752032017426 0ustar zuulzuul00000000000000# Copyright (c) 2016 Hewlett Packard Enterprise Development Corporation, LP # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_middleware import cors def set_config_defaults(): """This method updates all configuration default values.""" set_cors_middleware_defaults() def set_cors_middleware_defaults(): """Update default configuration options for oslo.middleware.""" cors.set_defaults( allow_headers=['X-Auth-Token', 'X-Identity-Status', 'X-Roles', 'X-Service-Catalog', 'X-User-Id', 'X-Tenant-Id', 'X-OpenStack-Request-ID'], expose_headers=['X-Auth-Token', 'X-Subject-Token', 'X-Service-Token', 'X-OpenStack-Request-ID'], allow_methods=['GET', 'PUT', 'POST', 'DELETE', 'PATCH'] ) sahara-12.0.0/sahara/config.py0000664000175000017500000002162513656752032016150 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
import itertools # loading keystonemiddleware opts because sahara uses these options in code from keystonemiddleware import opts # noqa from oslo_config import cfg from oslo_log import log from sahara import exceptions as ex from sahara.i18n import _ from sahara.plugins import opts as plugins_base from sahara.service.castellan import config as castellan from sahara.service.edp.data_sources import opts as data_source from sahara.service.edp.job_binaries import opts as job_binary from sahara.topology import topology_helper from sahara.utils.notification import sender from sahara.utils.openstack import cinder from sahara.utils.openstack import keystone from sahara.utils import remote from sahara import version cli_opts = [ cfg.HostAddressOpt('host', default='0.0.0.0', help='Hostname or IP address that will be used ' 'to listen on.'), cfg.PortOpt('port', default=8386, help='Port that will be used to listen on.'), cfg.BoolOpt('log-exchange', default=False, help='Log request/response exchange details: environ, ' 'headers and bodies.') ] edp_opts = [ cfg.IntOpt('job_binary_max_KB', default=5120, help='Maximum length of job binary data in kilobytes that ' 'may be stored or retrieved in a single operation.'), cfg.IntOpt('job_canceling_timeout', default=300, help='Timeout for canceling job execution (in seconds). ' 'Sahara will try to cancel job execution during ' 'this time.'), cfg.BoolOpt('edp_internal_db_enabled', default=True, help='Use Sahara internal db to store job binaries.') ] db_opts = [ cfg.StrOpt('db_driver', default='sahara.db', help='Driver to use for database access.') ] networking_opts = [ cfg.BoolOpt('use_floating_ips', default=True, help='If set to True, Sahara will use floating IPs to ' 'communicate with instances. To make sure that all ' 'instances have floating IPs assigned, make sure ' 'that all Node Groups have "floating_ip_pool" ' 'parameter defined.'), cfg.StrOpt('node_domain', default='novalocal', help="The suffix of the node's FQDN."), cfg.BoolOpt('use_namespaces', default=False, help="Use network namespaces for communication."), cfg.BoolOpt('use_rootwrap', default=False, help="Use rootwrap facility to allow non-root users to run " "the sahara services and access private network IPs " "(only valid to use in conjunction with " "use_namespaces=True)"), cfg.StrOpt('rootwrap_command', default='sudo sahara-rootwrap /etc/sahara/rootwrap.conf', help="Rootwrap command to leverage. Use in conjunction with " "use_rootwrap=True") ] dns_opts = [ cfg.BoolOpt('use_designate', default=False, help='Use Designate for internal and external hostnames ' 'resolution'), cfg.ListOpt('nameservers', default=[], help="IP addresses of Designate nameservers. " "This is required if 'use_designate' is True") ] accessible_ip_opts = [ cfg.IPOpt('identity_ip_accessible', default=None, help='IP address of Keystone endpoint, accessible by tenant' ' machines. If not set, the results of the DNS lookup' ' performed where Sahara services are running will be' ' used.'), cfg.IPOpt('object_store_ip_accessible', default=None, help='IP address of Swift endpoint, accessible by tenant' ' machines. 
If not set, the results of the DNS lookup' ' performed where Sahara services are running will be' ' used.'), ] CONF = cfg.CONF CONF.register_cli_opts(cli_opts) CONF.register_opts(networking_opts) CONF.register_opts(edp_opts) CONF.register_opts(db_opts) CONF.register_opts(dns_opts) CONF.register_opts(accessible_ip_opts) log.register_options(CONF) sahara_default_log_levels = [ 'stevedore=INFO', 'eventlet.wsgi.server=WARN', 'paramiko=WARN', 'requests=WARN', 'neutronclient=INFO', ] log.set_defaults( default_log_levels=log.get_default_log_levels()+sahara_default_log_levels) def list_opts(): # NOTE (vgridnev): we make these import here to avoid problems # with importing unregistered options in sahara code. # As example, importing 'node_domain' in # sahara/conductor/objects.py from sahara.conductor import api from sahara import main as sahara_main from sahara.service import coordinator from sahara.service.edp import job_utils from sahara.service.heat import heat_engine from sahara.service.heat import templates from sahara.service import ntp_service from sahara.service import periodic from sahara.swift import swift_helper from sahara.utils import cluster_progress_ops as cpo from sahara.utils.openstack import base from sahara.utils.openstack import glance from sahara.utils.openstack import heat from sahara.utils.openstack import manila from sahara.utils.openstack import neutron from sahara.utils.openstack import nova from sahara.utils.openstack import swift from sahara.utils import poll_utils from sahara.utils import proxy from sahara.utils import ssh_remote return [ (None, itertools.chain(cli_opts, edp_opts, networking_opts, dns_opts, db_opts, accessible_ip_opts, plugins_base.opts, topology_helper.opts, keystone.opts, remote.ssh_opts, sahara_main.opts, job_utils.opts, periodic.periodic_opts, coordinator.coordinator_opts, ntp_service.ntp_opts, proxy.opts, cpo.event_log_opts, base.opts, heat_engine.heat_engine_opts, templates.heat_engine_opts, ssh_remote.ssh_config_options, castellan.opts, data_source.opts, job_binary.opts)), (poll_utils.timeouts.name, itertools.chain(poll_utils.timeouts_opts)), (api.conductor_group.name, itertools.chain(api.conductor_opts)), (cinder.cinder_group.name, itertools.chain(cinder.opts)), (glance.glance_group.name, itertools.chain(glance.opts)), (heat.heat_group.name, itertools.chain(heat.opts)), (manila.manila_group.name, itertools.chain(manila.opts)), (neutron.neutron_group.name, itertools.chain(neutron.opts)), (nova.nova_group.name, itertools.chain(nova.opts)), (swift.swift_group.name, itertools.chain(swift.opts)), (keystone.keystone_group.name, itertools.chain(keystone.ssl_opts)), (keystone.trustee_group.name, itertools.chain(keystone.trustee_opts)), (base.retries.name, itertools.chain(base.opts)), (swift_helper.public_endpoint_cert_group.name, itertools.chain(swift_helper.opts)), (castellan.castellan_group.name, itertools.chain(castellan.castellan_opts)), (sender.notifier_opts_group, sender.notifier_opts) ] def parse_configs(conf_files=None): try: version_string = version.version_info.version_string() CONF(project='sahara', version=version_string, default_config_files=conf_files) except cfg.RequiredOptError as roe: raise ex.ConfigurationError( _("Option '%(option)s' is required for config group '%(group)s'") % {'option': roe.opt_name, 'group': roe.group.name}) sahara-12.0.0/sahara/tests/0000775000175000017500000000000013656752227015473 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/__init__.py0000664000175000017500000000117413656752032017601 0ustar 
zuulzuul00000000000000# Copyright (c) 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from sahara.utils import patches patches.patch_all() sahara-12.0.0/sahara/tests/unit/0000775000175000017500000000000013656752227016452 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/__init__.py0000664000175000017500000000000013656752032020543 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/utils/0000775000175000017500000000000013656752227017612 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/utils/__init__.py0000664000175000017500000000000013656752032021703 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/utils/test_neutron.py0000664000175000017500000000616113656752032022713 0ustar zuulzuul00000000000000# Copyright (c) 2013 Hortonworks, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from unittest import mock from sahara.tests.unit import base from sahara.utils.openstack import neutron as neutron_client class NeutronClientTest(base.SaharaTestCase): @mock.patch("sahara.utils.openstack.keystone.token_auth") @mock.patch("neutronclient.neutron.client.Client") def test_get_router(self, patched, token_auth): patched.side_effect = _test_get_neutron_client neutron = neutron_client.NeutronClient( '33b47310-b7a8-4559-bf95-45ba669a448e', None, None) self.assertEqual('6c4d4e32-3667-4cd4-84ea-4cc1e98d18be', neutron.get_router()) def _test_get_neutron_client(api_version, *args, **kwargs): return FakeNeutronClient() class FakeNeutronClient(object): def list_routers(self): return {"routers": [{"status": "ACTIVE", "external_gateway_info": { "network_id": "61f95d3f-495e-4409-8c29-0b806283c81e"}, "name": "router1", "admin_state_up": True, "tenant_id": "903809ded3434f8d89948ee71ca9f5bb", "routes": [], "id": "6c4d4e32-3667-4cd4-84ea-4cc1e98d18be"}]} def list_ports(self, device_id=None): return {"ports": [ {"status": "ACTIVE", "name": "", "admin_state_up": True, "network_id": "33b47310-b7a8-4559-bf95-45ba669a448e", "tenant_id": "903809ded3434f8d89948ee71ca9f5bb", "binding:vif_type": "ovs", "device_owner": "compute:None", "binding:capabilities": {"port_filter": True}, "mac_address": "fa:16:3e:69:25:1c", "fixed_ips": [ {"subnet_id": "bfa9d0a1-9efb-4bff-bd2b-c103c053560f", "ip_address": "10.0.0.8"}], "id": "0f3df685-bc55-4314-9b76-835e1767b78f", "security_groups": ["f9fee2a2-bb0b-44e4-8092-93a43dc45cda"], "device_id": "c2129c18-6707-4f07-94cf-00b2fef8eea7"}, {"status": "ACTIVE", "name": "", "admin_state_up": True, "network_id": "33b47310-b7a8-4559-bf95-45ba669a448e", "tenant_id": "903809ded3434f8d89948ee71ca9f5bb", "binding:vif_type": "ovs", "device_owner": "network:router_interface", "binding:capabilities": {"port_filter": True}, "mac_address": "fa:16:3e:c5:b0:cb", "fixed_ips": [ {"subnet_id": "bfa9d0a1-9efb-4bff-bd2b-c103c053560f", "ip_address": "10.0.0.1"}], "id": "27193ae1-142a-436c-ab41-c77b1df032a1", "security_groups": [], "device_id": "6c4d4e32-3667-4cd4-84ea-4cc1e98d18be"}]} sahara-12.0.0/sahara/tests/unit/utils/test_hacking.py0000664000175000017500000000613013656752032022621 0ustar zuulzuul00000000000000# Copyright 2015 EasyStack Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import testtools from sahara.utils.hacking import checks class HackingTestCase(testtools.TestCase): def test_dict_constructor_with_list_copy(self): # Following checks for code-lines with pep8 error self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " dict([(i, connect_info[i])")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " attrs = dict([(k, _from_json(v))")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " type_names = dict((value, key) for key, value in")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " dict((value, key) for key, value in")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( "foo(param=dict((k, v) for k, v in bar.items()))")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " dict([[i,i] for i in range(3)])")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " dd = dict([i,i] for i in range(3))")))) # Following checks for ok code-lines self.assertEqual(0, len(list(checks.dict_constructor_with_list_copy( " dict()")))) self.assertEqual(0, len(list(checks.dict_constructor_with_list_copy( " create_kwargs = dict(snapshot=snapshot,")))) self.assertEqual(0, len(list(checks.dict_constructor_with_list_copy( " self._render_dict(xml, data_el, data.__dict__)")))) def test_use_jsonutils(self): self.assertEqual(0, len(list(checks.use_jsonutils( "import json # noqa", "path")))) self.assertEqual(0, len(list(checks.use_jsonutils( "from oslo_serialization import jsonutils as json", "path")))) self.assertEqual(0, len(list(checks.use_jsonutils( "import jsonschema", "path")))) self.assertEqual(1, len(list(checks.use_jsonutils( "import json", "path")))) self.assertEqual(1, len(list(checks.use_jsonutils( "import json as jsonutils", "path")))) def test_no_mutable_default_args(self): self.assertEqual(0, len(list(checks.no_mutable_default_args( "def foo (bar):")))) self.assertEqual(1, len(list(checks.no_mutable_default_args( "def foo (bar=[]):")))) self.assertEqual(1, len(list(checks.no_mutable_default_args( "def foo (bar={}):")))) sahara-12.0.0/sahara/tests/unit/utils/test_crypto.py0000664000175000017500000000247713656752032022547 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
import testtools from sahara.utils import crypto as c class CryptoTest(testtools.TestCase): def test_generate_key_pair(self): kp = c.generate_key_pair() self.assertIsInstance(kp, tuple) self.assertIsNotNone(kp[0]) self.assertIsNotNone(kp[1]) self.assertIn('-----BEGIN RSA PRIVATE KEY-----', kp[0]) self.assertIn('-----END RSA PRIVATE KEY-----', kp[0]) self.assertIn('ssh-rsa ', kp[1]) self.assertIn('Generated-by-Sahara', kp[1]) def test_to_paramiko_private_key(self): pk_str = c.generate_key_pair()[0] pk = c.to_paramiko_private_key(pk_str) self.assertIsNotNone(pk) self.assertEqual(2048, pk.size) self.assertEqual('ssh-rsa', pk.get_name()) sahara-12.0.0/sahara/tests/unit/utils/notification/0000775000175000017500000000000013656752227022300 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/utils/notification/__init__.py0000664000175000017500000000000013656752032024371 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/utils/notification/test_sender.py0000664000175000017500000000312713656752032025166 0ustar zuulzuul00000000000000# Copyright (c) 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from unittest import mock from sahara import context from sahara.tests.unit import base from sahara.utils.notification import sender class NotificationTest(base.SaharaTestCase): @mock.patch('sahara.utils.rpc.get_notifier') def test_update_cluster(self, mock_notify): class FakeNotifier(object): def info(self, *args): self.call = args notifier = FakeNotifier() mock_notify.return_value = notifier ctx = context.ctx() sender.status_notify('someId', 'someName', 'someStatus', "update") self.expected_args = (ctx, 'sahara.cluster.%s' % 'update', {'cluster_id': 'someId', 'cluster_name': 'someName', 'cluster_status': 'someStatus', 'project_id': ctx.tenant_id, 'user_id': ctx.user_id}) self.assertEqual(self.expected_args, notifier.call) sahara-12.0.0/sahara/tests/unit/utils/test_edp.py0000664000175000017500000000464113656752032021772 0ustar zuulzuul00000000000000# Copyright (c) 2014 Red Hat Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
import testtools from unittest import mock from sahara.utils import edp class EdpUtilTest(testtools.TestCase): def test_split_job_type(self): jtype, stype = edp.split_job_type(edp.JOB_TYPE_MAPREDUCE) self.assertEqual(edp.JOB_TYPE_MAPREDUCE, jtype) self.assertEqual(edp.JOB_SUBTYPE_NONE, stype) jtype, stype = edp.split_job_type(edp.JOB_TYPE_MAPREDUCE_STREAMING) self.assertEqual(edp.JOB_TYPE_MAPREDUCE, jtype) self.assertEqual(edp.JOB_SUBTYPE_STREAMING, stype) def test_compare_job_type(self): self.assertTrue(edp.compare_job_type( edp.JOB_TYPE_JAVA, edp.JOB_TYPE_JAVA, edp.JOB_TYPE_MAPREDUCE, strict=True)) self.assertFalse(edp.compare_job_type( edp.JOB_TYPE_MAPREDUCE_STREAMING, edp.JOB_TYPE_JAVA, edp.JOB_TYPE_MAPREDUCE, strict=True)) self.assertTrue(edp.compare_job_type( edp.JOB_TYPE_MAPREDUCE_STREAMING, edp.JOB_TYPE_JAVA, edp.JOB_TYPE_MAPREDUCE)) self.assertFalse(edp.compare_job_type( edp.JOB_TYPE_MAPREDUCE, edp.JOB_TYPE_JAVA, edp.JOB_TYPE_MAPREDUCE_STREAMING)) def test_get_builtin_binaries_java_available(self): job = mock.Mock(type=edp.JOB_TYPE_JAVA) configs = {edp.ADAPT_FOR_OOZIE: True} binaries = edp.get_builtin_binaries(job, configs) self.assertEqual(1, len(binaries)) binary = binaries[0] self.assertTrue(binary['name'].startswith('builtin-')) self.assertTrue(binary['name'].endswith('.jar')) self.assertIsNotNone(binary['raw']) def test_get_builtin_binaries_empty(self): for job_type in edp.JOB_TYPES_ALL: job = mock.Mock(type=job_type) self.assertEqual(0, len(edp.get_builtin_binaries(job, {}))) sahara-12.0.0/sahara/tests/unit/utils/test_general.py0000664000175000017500000000467713656752032022650 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from unittest import mock from sahara.tests.unit import base from sahara.utils import general class UtilsGeneralTest(base.SaharaWithDbTestCase): def setUp(self): super(UtilsGeneralTest, self).setUp() def test_find_dict(self): iterable = [ { "a": 1 }, { "a": 1, "b": 2, "c": 3 }, { "a": 2 }, { "c": 3 } ] self.assertEqual({"a": 1, "b": 2, "c": 3}, general.find_dict(iterable, a=1, b=2)) self.assertIsNone(general.find_dict(iterable, z=4)) def test_find(self): lst = [mock.Mock(a=5), mock.Mock(b=5), mock.Mock(a=7, b=7)] self.assertEqual(lst[0], general.find(lst, a=5)) self.assertEqual(lst[1], general.find(lst, b=5)) self.assertIsNone(general.find(lst, a=8)) self.assertEqual(lst[2], general.find(lst, a=7)) self.assertEqual(lst[2], general.find(lst, a=7, b=7)) def test_generate_instance_name(self): inst_name = "cluster-worker-001" self.assertEqual( inst_name, general.generate_instance_name("cluster", "worker", 1)) self.assertEqual( inst_name, general.generate_instance_name("CLUSTER", "WORKER", 1)) def test_get_by_id(self): lst = [mock.Mock(id=5), mock.Mock(id=7)] self.assertIsNone(general.get_by_id(lst, 9)) self.assertEqual(lst[0], general.get_by_id(lst, 5)) self.assertEqual(lst[1], general.get_by_id(lst, 7)) def test_natural_sort_key(self): str_test = "ABC123efg345DD" str_list = ['abc', 123, 'efg', 345, 'dd'] str_sort = general.natural_sort_key(str_test) self.assertEqual(len(str_list), len(str_sort)) self.assertEqual(str_list, str_sort) sahara-12.0.0/sahara/tests/unit/utils/test_ssh_remote.py0000664000175000017500000005041513656752032023372 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
import shlex from unittest import mock import testtools from sahara import exceptions as ex from sahara.tests.unit import base from sahara.utils import ssh_remote class TestEscapeQuotes(testtools.TestCase): def test_escape_quotes(self): s = ssh_remote._escape_quotes('echo "\\"Hello, world!\\""') self.assertEqual(r'echo \"\\\"Hello, world!\\\"\"', s) class TestGetOsDistrib(testtools.TestCase): @mock.patch('sahara.utils.ssh_remote._execute_command', return_value=[1, 'Ubuntu']) @mock.patch('sahara.utils.ssh_remote._get_python_to_execute', return_value='python3') def test_get_os_distrib(self, python, p_execute_command): d = ssh_remote._get_os_distrib() p_execute_command.assert_called_once_with( ('printf "import platform\nprint(platform.linux_distribution(' 'full_distribution_name=0)[0])" | python3'), run_as_root=False) self.assertEqual('ubuntu', d) class TestInstallPackages(testtools.TestCase): @mock.patch('sahara.utils.ssh_remote._get_os_version') @mock.patch('sahara.utils.ssh_remote._get_os_distrib') @mock.patch('sahara.utils.ssh_remote._execute_command') def test_install_packages(self, p_execute_command, p_get_os_distrib, p_get_os_version): packages = ('git', 'emacs', 'tree') # test ubuntu p_get_os_distrib.return_value = 'ubuntu' ssh_remote._install_packages(packages) p_execute_command.assert_called_with( 'RUNLEVEL=1 apt-get install -y git emacs tree', run_as_root=True) # test centos p_get_os_distrib.return_value = 'centos' ssh_remote._install_packages(packages) p_execute_command.assert_called_with( 'yum install -y git emacs tree', run_as_root=True) # test fedora < 22 p_get_os_distrib.return_value = 'fedora' p_get_os_version.return_value = 20 ssh_remote._install_packages(packages) p_execute_command.assert_called_with( 'yum install -y git emacs tree', run_as_root=True) # test fedora >=22 p_get_os_distrib.return_value = 'fedora' p_get_os_version.return_value = 23 ssh_remote._install_packages(packages) p_execute_command.assert_called_with( 'dnf install -y git emacs tree', run_as_root=True) # test redhat p_get_os_distrib.return_value = 'redhat' ssh_remote._install_packages(packages) p_execute_command.assert_called_with( 'yum install -y git emacs tree', run_as_root=True) @mock.patch('sahara.utils.ssh_remote._get_os_distrib', return_value='windows me') def test_install_packages_bad(self, p_get_os_distrib): with testtools.ExpectedException( ex.NotImplementedException, 'Package Installation is not implemented for OS windows me.*'): ssh_remote._install_packages(('git', 'emacs', 'tree')) class TestUpdateRepository(testtools.TestCase): @mock.patch('sahara.utils.ssh_remote._get_os_version') @mock.patch('sahara.utils.ssh_remote._get_os_distrib') @mock.patch('sahara.utils.ssh_remote._execute_command') def test_update_repository(self, p_execute_command, p_get_os_distrib, p_get_os_version): # test ubuntu p_get_os_distrib.return_value = 'ubuntu' ssh_remote._update_repository() p_execute_command.assert_called_with( 'apt-get update', run_as_root=True) # test centos p_get_os_distrib.return_value = 'centos' ssh_remote._update_repository() p_execute_command.assert_called_with( 'yum clean all', run_as_root=True) # test fedora < 22 p_get_os_distrib.return_value = 'fedora' p_get_os_version.return_value = 20 ssh_remote._update_repository() p_execute_command.assert_called_with( 'yum clean all', run_as_root=True) # test fedora >=22 p_get_os_distrib.return_value = 'fedora' p_get_os_version.return_value = 23 ssh_remote._update_repository() p_execute_command.assert_called_with( 'dnf clean all', run_as_root=True) # test 
redhat p_get_os_distrib.return_value = 'redhat' ssh_remote._update_repository() p_execute_command.assert_called_with( 'yum clean all', run_as_root=True) @mock.patch('sahara.utils.ssh_remote._get_os_distrib', return_value='windows me') def test_update_repository_bad(self, p_get_os_distrib): with testtools.ExpectedException( ex.NotImplementedException, 'Repository Update is not implemented for OS windows me.*'): ssh_remote._update_repository() class FakeCluster(object): def __init__(self, priv_key): self.management_private_key = priv_key self.neutron_management_network = 'network1' def has_proxy_gateway(self): return False def get_proxy_gateway_node(self): return None class FakeNodeGroup(object): def __init__(self, user, priv_key): self.image_username = user self.cluster = FakeCluster(priv_key) self.floating_ip_pool = 'public' class FakeInstance(object): def __init__(self, inst_name, inst_id, management_ip, internal_ip, user, priv_key): self.instance_name = inst_name self.instance_id = inst_id self.management_ip = management_ip self.internal_ip = internal_ip self.node_group = FakeNodeGroup(user, priv_key) @property def cluster(self): return self.node_group.cluster class TestInstanceInteropHelper(base.SaharaTestCase): def setUp(self): super(TestInstanceInteropHelper, self).setUp() p_sma = mock.patch('sahara.utils.ssh_remote._acquire_remote_semaphore') p_sma.start() p_smr = mock.patch('sahara.utils.ssh_remote._release_remote_semaphore') p_smr.start() p_neutron_router = mock.patch( 'sahara.utils.openstack.neutron.NeutronClient.get_router', return_value='fakerouter') p_neutron_router.start() # During tests subprocesses are not used (because _sahara-subprocess # is not installed in /bin and Mock objects cannot be pickled). p_start_subp = mock.patch('sahara.utils.procutils.start_subprocess', return_value=42) p_start_subp.start() p_run_subp = mock.patch('sahara.utils.procutils.run_in_subprocess') self.run_in_subprocess = p_run_subp.start() p_shut_subp = mock.patch('sahara.utils.procutils.shutdown_subprocess') p_shut_subp.start() self.patchers = [p_sma, p_smr, p_neutron_router, p_start_subp, p_run_subp, p_shut_subp] def tearDown(self): for patcher in self.patchers: patcher.stop() super(TestInstanceInteropHelper, self).tearDown() def setup_context(self, username="test_user", tenant_id="tenant_1", token="test_auth_token", tenant_name='test_tenant', **kwargs): service_catalog = '''[ { "type": "network", "endpoints": [ { "region": "RegionOne", "publicURL": "http://localhost/" } ] } ]''' super(TestInstanceInteropHelper, self).setup_context( username=username, tenant_id=tenant_id, token=token, tenant_name=tenant_name, service_catalog=service_catalog, **kwargs) # When use_floating_ips=True, no proxy should be used: _connect is called # with proxy=None and ProxiedHTTPAdapter is not used. @mock.patch('sahara.utils.ssh_remote.ProxiedHTTPAdapter') def test_use_floating_ips(self, p_adapter): self.override_config('use_floating_ips', True) instance = FakeInstance('inst1', '123', '10.0.0.1', '10.0.0.1', 'user1', 'key1') remote = ssh_remote.InstanceInteropHelper(instance) # Test SSH remote.execute_command('/bin/true') self.run_in_subprocess.assert_any_call( 42, ssh_remote._connect, ('10.0.0.1', 'user1', 'key1', None, None, None)) # Test HTTP remote.get_http_client(8080) self.assertFalse(p_adapter.called) # When use_floating_ips=False and use_namespaces=True, a netcat socket # created with 'ip netns exec qrouter-...' should be used to access # instances. 
@mock.patch("sahara.service.trusts.get_os_admin_auth_plugin") @mock.patch("sahara.utils.openstack.keystone.token_auth") @mock.patch('sahara.utils.ssh_remote._simple_exec_func') @mock.patch('sahara.utils.ssh_remote.ProxiedHTTPAdapter') def test_use_namespaces(self, p_adapter, p_simple_exec_func, token_auth, use_os_admin): self.override_config('use_floating_ips', False) self.override_config('use_namespaces', True) instance = FakeInstance('inst2', '123', '10.0.0.2', '10.0.0.2', 'user2', 'key2') remote = ssh_remote.InstanceInteropHelper(instance) # Test SSH remote.execute_command('/bin/true') self.run_in_subprocess.assert_any_call( 42, ssh_remote._connect, ('10.0.0.2', 'user2', 'key2', 'ip netns exec qrouter-fakerouter nc 10.0.0.2 22', None, None)) # Test HTTP remote.get_http_client(8080) p_adapter.assert_called_once_with( p_simple_exec_func(), '10.0.0.2', 8080) p_simple_exec_func.assert_any_call( shlex.split('ip netns exec qrouter-fakerouter nc 10.0.0.2 8080')) # When proxy_command is set, a user-defined netcat socket should be used to # access instances. @mock.patch('sahara.utils.ssh_remote._simple_exec_func') @mock.patch('sahara.utils.ssh_remote.ProxiedHTTPAdapter') def test_proxy_command(self, p_adapter, p_simple_exec_func): self.override_config('proxy_command', 'ssh fakerelay nc {host} {port}') instance = FakeInstance('inst3', '123', '10.0.0.3', '10.0.0.3', 'user3', 'key3') remote = ssh_remote.InstanceInteropHelper(instance) # Test SSH remote.execute_command('/bin/true') self.run_in_subprocess.assert_any_call( 42, ssh_remote._connect, ('10.0.0.3', 'user3', 'key3', 'ssh fakerelay nc 10.0.0.3 22', None, None)) # Test HTTP remote.get_http_client(8080) p_adapter.assert_called_once_with( p_simple_exec_func(), '10.0.0.3', 8080) p_simple_exec_func.assert_any_call( shlex.split('ssh fakerelay nc 10.0.0.3 8080')) @mock.patch('sahara.utils.ssh_remote._simple_exec_func') @mock.patch('sahara.utils.ssh_remote.ProxiedHTTPAdapter') def test_proxy_command_internal_ip(self, p_adapter, p_simple_exec_func): self.override_config('proxy_command', 'ssh fakerelay nc {host} {port}') self.override_config('proxy_command_use_internal_ip', True) instance = FakeInstance('inst3', '123', '10.0.0.3', '10.0.0.4', 'user3', 'key3') remote = ssh_remote.InstanceInteropHelper(instance) # Test SSH remote.execute_command('/bin/true') self.run_in_subprocess.assert_any_call( 42, ssh_remote._connect, ('10.0.0.4', 'user3', 'key3', 'ssh fakerelay nc 10.0.0.4 22', None, None)) # Test HTTP remote.get_http_client(8080) p_adapter.assert_called_once_with( p_simple_exec_func(), '10.0.0.4', 8080) p_simple_exec_func.assert_any_call( shlex.split('ssh fakerelay nc 10.0.0.4 8080')) def test_proxy_command_bad(self): self.override_config('proxy_command', '{bad_kw} nc {host} {port}') instance = FakeInstance('inst4', '123', '10.0.0.4', '10.0.0.4', 'user4', 'key4') remote = ssh_remote.InstanceInteropHelper(instance) # Test SSH self.assertRaises(ex.SystemError, remote.execute_command, '/bin/true') # Test HTTP self.assertRaises(ex.SystemError, remote.get_http_client, 8080) @mock.patch('sahara.utils.ssh_remote.InstanceInteropHelper._run_s') def test_get_os_distrib(self, p_run_s): instance = FakeInstance('inst4', '123', '10.0.0.4', '10.0.0.4', 'user4', 'key4') remote = ssh_remote.InstanceInteropHelper(instance) remote.get_os_distrib() p_run_s.assert_called_with(ssh_remote._get_os_distrib, None, "get_os_distrib") @mock.patch('sahara.utils.ssh_remote.InstanceInteropHelper._run_s') @mock.patch('sahara.utils.ssh_remote.InstanceInteropHelper._log_command') 
def test_install_packages(self, p_log_command, p_run_s): instance = FakeInstance('inst5', '123', '10.0.0.5', '10.0.0.5', 'user5', 'key5') remote = ssh_remote.InstanceInteropHelper(instance) packages = ['pkg1', 'pkg2'] remote.install_packages(packages) description = 'Installing packages "%s"' % list(packages) p_run_s.assert_called_once_with( ssh_remote._install_packages, None, description, packages) p_log_command.assert_called_with(description) @mock.patch('sahara.utils.ssh_remote.InstanceInteropHelper._run_s') @mock.patch('sahara.utils.ssh_remote.InstanceInteropHelper._log_command') def test_update_repository(self, p_log_command, p_run_s): instance = FakeInstance('inst6', '123', '10.0.0.6', '10.0.0.6', 'user6', 'key6') remote = ssh_remote.InstanceInteropHelper(instance) remote.update_repository() p_run_s.assert_called_once_with(ssh_remote._update_repository, None, 'Updating repository') p_log_command.assert_called_with('Updating repository') @mock.patch('sahara.utils.ssh_remote.InstanceInteropHelper._run_s') @mock.patch('sahara.utils.ssh_remote.InstanceInteropHelper._log_command') def test_write_file_to(self, p_log_command, p_run_s): instance = FakeInstance('inst7', '123', '10.0.0.7', '10.0.0.7', 'user7', 'key7') remote = ssh_remote.InstanceInteropHelper(instance) description = 'Writing file "file"' remote.write_file_to("file", "data") p_run_s.assert_called_once_with(ssh_remote._write_file_to, None, description, "file", "data", False) p_log_command.assert_called_with(description) @mock.patch('sahara.utils.ssh_remote.InstanceInteropHelper._run_s') @mock.patch('sahara.utils.ssh_remote.InstanceInteropHelper._log_command') def test_write_files_to(self, p_log_command, p_run_s): instance = FakeInstance('inst8', '123', '10.0.0.8', '10.0.0.8', 'user8', 'key8') remote = ssh_remote.InstanceInteropHelper(instance) description = 'Writing files "[\'file\']"' remote.write_files_to({"file": "data"}) p_run_s.assert_called_once_with(ssh_remote._write_files_to, None, description, {"file": "data"}, False) p_log_command.assert_called_with(description) @mock.patch('sahara.utils.ssh_remote.InstanceInteropHelper._run_s') @mock.patch('sahara.utils.ssh_remote.InstanceInteropHelper._log_command') def test_append_to_file(self, p_log_command, p_run_s): instance = FakeInstance('inst9', '123', '10.0.0.9', '10.0.0.9', 'user9', 'key9') remote = ssh_remote.InstanceInteropHelper(instance) description = 'Appending to file "file"' remote.append_to_file("file", "data") p_run_s.assert_called_once_with(ssh_remote._append_to_file, None, description, "file", "data", False) p_log_command.assert_called_with(description) @mock.patch('sahara.utils.ssh_remote.InstanceInteropHelper._run_s') @mock.patch('sahara.utils.ssh_remote.InstanceInteropHelper._log_command') def test_append_to_files(self, p_log_command, p_run_s): instance = FakeInstance('inst10', '123', '10.0.0.10', '10.0.0.10', 'user10', 'key10') remote = ssh_remote.InstanceInteropHelper(instance) description = 'Appending to files "[\'file\']"' remote.append_to_files({"file": "data"}) p_run_s.assert_called_once_with(ssh_remote._append_to_files, None, description, {"file": "data"}, False) p_log_command.assert_called_with(description) @mock.patch('sahara.utils.ssh_remote.InstanceInteropHelper._run_s') @mock.patch('sahara.utils.ssh_remote.InstanceInteropHelper._log_command') def test_read_file_from(self, p_log_command, p_run_s): instance = FakeInstance('inst11', '123', '10.0.0.11', '10.0.0.11', 'user11', 'key11') remote = ssh_remote.InstanceInteropHelper(instance) description = 
'Reading file "file"' remote.read_file_from("file") p_run_s.assert_called_once_with(ssh_remote._read_file_from, None, description, "file", False) p_log_command.assert_called_with(description) @mock.patch('sahara.utils.ssh_remote.InstanceInteropHelper._run_s') @mock.patch('sahara.utils.ssh_remote.InstanceInteropHelper._log_command') def test_replace_remote_string(self, p_log_command, p_run_s): instance = FakeInstance('inst12', '123', '10.0.0.12', '10.0.0.12', 'user12', 'key12') remote = ssh_remote.InstanceInteropHelper(instance) description = 'In file "file" replacing string "str1" with "str2"' remote.replace_remote_string("file", "str1", "str2") p_run_s.assert_called_once_with(ssh_remote._replace_remote_string, None, description, "file", "str1", "str2") p_log_command.assert_called_with(description) @mock.patch('sahara.utils.ssh_remote.InstanceInteropHelper._run_s') @mock.patch('sahara.utils.ssh_remote.InstanceInteropHelper._log_command') def test_replace_remote_line(self, p_log_command, p_run_s): instance = FakeInstance('inst13', '123', '10.0.0.13', '10.0.0.13', 'user13', 'key13') remote = ssh_remote.InstanceInteropHelper(instance) description = ('In file "file" replacing line begining with string ' '"str" with "newline"') remote.replace_remote_line("file", "str", "newline") p_run_s.assert_called_once_with(ssh_remote._replace_remote_line, None, description, "file", "str", "newline") p_log_command.assert_called_with(description) @mock.patch('sahara.utils.ssh_remote.InstanceInteropHelper._run_s') @mock.patch('sahara.utils.ssh_remote.InstanceInteropHelper._log_command') def test_execute_on_vm_interactive(self, p_log_command, p_run_s): instance = FakeInstance('inst14', '123', '10.0.0.14', '10.0.0.14', 'user14', 'key14') remote = ssh_remote.InstanceInteropHelper(instance) description = 'Executing interactively "factor 42"' remote.execute_on_vm_interactive("factor 42", None) p_run_s.assert_called_once_with(ssh_remote._execute_on_vm_interactive, None, description, "factor 42", None) p_log_command(description) sahara-12.0.0/sahara/tests/unit/utils/test_patches.py0000664000175000017500000000432713656752032022652 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
import xml.dom.minidom as xml

import six
import testtools


class MinidomPatchesTest(testtools.TestCase):
    def setUp(self):
        super(MinidomPatchesTest, self).setUp()

    def _generate_n_prettify_xml(self):
        doc = xml.Document()
        pi = doc.createProcessingInstruction('xml-smth',
                                             'type="text/smth" '
                                             'href="test.smth"')
        doc.insertBefore(pi, doc.firstChild)
        configuration = doc.createElement("root")
        doc.appendChild(configuration)

        for idx in six.moves.xrange(0, 5):
            elem = doc.createElement("element")
            configuration.appendChild(elem)
            name = doc.createElement("name")
            elem.appendChild(name)
            name_text = doc.createTextNode("key-%s" % idx)
            name.appendChild(name_text)
            value = doc.createElement("value")
            elem.appendChild(value)
            value_text = doc.createTextNode("value-%s" % idx)
            value.appendChild(value_text)

        return doc.toprettyxml(indent="  ")

    def test_minidom_toprettyxml(self):
        self.assertEqual("""<?xml version="1.0" ?>
<?xml-smth type="text/smth" href="test.smth"?>
<root>
  <element>
    <name>key-0</name>
    <value>value-0</value>
  </element>
  <element>
    <name>key-1</name>
    <value>value-1</value>
  </element>
  <element>
    <name>key-2</name>
    <value>value-2</value>
  </element>
  <element>
    <name>key-3</name>
    <value>value-3</value>
  </element>
  <element>
    <name>key-4</name>
    <value>value-4</value>
  </element>
</root>
""", self._generate_n_prettify_xml())
sahara-12.0.0/sahara/tests/unit/utils/test_cluster.py0000664000175000017500000001656713656752032022705 0ustar zuulzuul00000000000000# Copyright (c) 2015 Intel Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from unittest import mock

from sahara import conductor
from sahara import context
from sahara.tests.unit import base
from sahara.tests.unit.conductor import test_api
from sahara.utils import cluster as cluster_utils


class UtilsClusterTest(base.SaharaWithDbTestCase):
    def setUp(self):
        super(UtilsClusterTest, self).setUp()
        self.api = conductor.API

    def _make_sample(self):
        ctx = context.ctx()
        cluster = self.api.cluster_create(ctx, test_api.SAMPLE_CLUSTER)
        return cluster

    def test_change_cluster_status(self):
        cluster = self._make_sample()
        cluster = cluster_utils.change_cluster_status(
            cluster, cluster_utils.CLUSTER_STATUS_DELETING, "desc")
        self.assertEqual(cluster_utils.CLUSTER_STATUS_DELETING,
                         cluster.status)
        self.assertEqual("desc", cluster.status_description)
        cluster_utils.change_cluster_status(
            cluster, cluster_utils.CLUSTER_STATUS_SPAWNING)
        self.assertEqual(cluster_utils.CLUSTER_STATUS_DELETING,
                         cluster.status)

    def test_change_status_description(self):
        ctx = context.ctx()
        cluster = self._make_sample()
        cluster_id = cluster.id
        cluster = cluster_utils.change_cluster_status_description(
            cluster, "desc")
        self.assertEqual('desc', cluster.status_description)
        self.api.cluster_destroy(ctx, cluster)
        cluster = cluster_utils.change_cluster_status_description(
            cluster_id, "desc")
        self.assertIsNone(cluster)

    def test_count_instances(self):
        cluster = self._make_sample()
        self.assertEqual(4, cluster_utils.count_instances(cluster))

    def test_check_cluster_exists(self):
        ctx = context.ctx()
        cluster = self._make_sample()
        self.assertTrue(cluster_utils.check_cluster_exists(cluster))
        self.api.cluster_destroy(ctx, cluster)
        self.assertFalse(cluster_utils.check_cluster_exists(cluster))

    def test_get_instances(self):
        cluster = self._make_sample()
        ctx = context.ctx()
        idx = 0
        ids = []
        for ng in cluster.node_groups:
            for i in range(ng.count):
                idx += 1
ids.append(self.api.instance_add(ctx, ng, { 'instance_id': str(idx), 'instance_name': str(idx), })) cluster = self.api.cluster_get(ctx, cluster) instances = cluster_utils.get_instances(cluster, ids) ids = set() for inst in instances: ids.add(inst.instance_id) self.assertEqual(idx, len(ids)) for i in range(1, idx): self.assertIn(str(i), ids) instances = cluster_utils.get_instances(cluster) ids = set() for inst in instances: ids.add(inst.instance_id) self.assertEqual(idx, len(ids)) for i in range(1, idx): self.assertIn(str(i), ids) def test_clean_cluster_from_empty_ng(self): ctx = context.ctx() cluster = self._make_sample() ng = cluster.node_groups[0] ng_len = len(cluster.node_groups) self.api.node_group_update(ctx, ng, {'count': 0}) cluster = self.api.cluster_get(ctx, cluster.id) cluster_utils.clean_cluster_from_empty_ng(cluster) cluster = self.api.cluster_get(ctx, cluster.id) self.assertEqual(ng_len - 1, len(cluster.node_groups)) @mock.patch("sahara.conductor.objects.Cluster.use_designate_feature") @mock.patch("socket.gethostbyname") @mock.patch("sahara.utils.openstack.base.url_for") def test_generate_etc_hosts(self, mock_url, mock_get_host, mock_use_designate): cluster = self._make_sample() mock_use_designate.return_value = False ctx = context.ctx() idx = 0 for ng in cluster.node_groups: for i in range(ng.count): idx += 1 self.api.instance_add(ctx, ng, { 'instance_id': str(idx), 'instance_name': str(idx), 'internal_ip': str(idx), }) cluster = self.api.cluster_get(ctx, cluster) mock_url.side_effect = ["http://keystone.local:1234/v13", "http://swift.local:5678/v42"] mock_get_host.side_effect = ["1.2.3.4", "5.6.7.8"] value = cluster_utils.generate_etc_hosts(cluster) expected = ("127.0.0.1 localhost\n" "1 1.novalocal 1\n" "2 2.novalocal 2\n" "3 3.novalocal 3\n" "4 4.novalocal 4\n" "1.2.3.4 keystone.local\n" "5.6.7.8 swift.local\n") self.assertEqual(expected, value) @mock.patch("sahara.conductor.objects.Cluster.use_designate_feature") @mock.patch("socket.gethostbyname") @mock.patch("sahara.utils.openstack.base.url_for") def test_generate_etc_hosts_with_designate(self, mock_url, mock_get_host, mock_use_designate): cluster = self._make_sample() mock_use_designate.return_value = True mock_url.side_effect = ["http://keystone.local:1234/v13", "http://swift.local:5678/v42"] mock_get_host.side_effect = ["1.2.3.4", "5.6.7.8"] value = cluster_utils.generate_etc_hosts(cluster) expected = ("127.0.0.1 localhost\n" "1.2.3.4 keystone.local\n" "5.6.7.8 swift.local\n") self.assertEqual(expected, value) def test_generate_resolv_conf_diff(self): curr_resolv_conf = "search openstacklocal\nnameserver 8.8.8.8\n" self.override_config("nameservers", ['1.1.1.1']) value = cluster_utils.generate_resolv_conf_diff(curr_resolv_conf) expected = "nameserver 1.1.1.1\n" self.assertEqual(expected, value) self.override_config("nameservers", ['1.1.1.1', '8.8.8.8', '2.2.2.2']) value = cluster_utils.generate_resolv_conf_diff(curr_resolv_conf) expected = ("nameserver 1.1.1.1\n" "nameserver 2.2.2.2\n") self.assertEqual(expected, value) @mock.patch("socket.gethostbyname") @mock.patch("sahara.utils.openstack.base.url_for") def test_etc_hosts_entry_for_service_overrides(self, mock_url, mock_get_host): self.override_config("object_store_ip_accessible", None) mock_url.return_value = "http://swift.org" mock_get_host.return_value = '1.1.1.1' res = cluster_utils.etc_hosts_entry_for_service('object-store') self.assertEqual('1.1.1.1 swift.org\n', res) self.override_config("object_store_ip_accessible", '2.2.2.2') res = 
cluster_utils.etc_hosts_entry_for_service('object-store') self.assertEqual('2.2.2.2 swift.org\n', res) sahara-12.0.0/sahara/tests/unit/utils/test_cinder.py0000664000175000017500000000713013656752032022462 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2014 Adrien Vergé # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from unittest import mock from keystoneauth1 import exceptions as keystone_exceptions from oslo_config import cfg from sahara import main from sahara.tests.unit import base as test_base from sahara.utils.openstack import cinder CONF = cfg.CONF class TestCinder(test_base.SaharaTestCase): def setup_context(self, username="test_user", tenant_id="tenant_1", token="test_auth_token", tenant_name='test_tenant', **kwargs): self.override_config('os_region_name', 'RegionOne') # Fake service_catalog with both volumev2 # and volumev3 services available service_catalog = '''[ { "type": "volumev2", "endpoints": [ { "region": "RegionOne", "internalURL": "http://localhost/" } ] }, { "type": "volumev3", "endpoints": [ { "region": "RegionOne", "internalURL": "http://localhost/" } ] }]''' super(TestCinder, self).setup_context( username=username, tenant_id=tenant_id, token=token, tenant_name=tenant_name, service_catalog=service_catalog, **kwargs) @mock.patch('sahara.utils.openstack.keystone.auth') @mock.patch('cinderclient.v3.client.Client') @mock.patch('cinderclient.v2.client.Client') def test_get_cinder_client_api_v2(self, patched2, patched3, auth): self.override_config('api_version', 2, group='cinder') patched2.return_value = FakeCinderClient(2) patched3.return_value = FakeCinderClient(3) client = cinder.client() self.assertEqual(2, client.client.api_version) @mock.patch('sahara.utils.openstack.keystone.auth') @mock.patch('cinderclient.v3.client.Client') @mock.patch('cinderclient.v2.client.Client') def test_get_cinder_client_api_v3(self, patched2, patched3, auth): self.override_config('api_version', 3, group='cinder') patched2.return_value = FakeCinderClient(2) patched3.return_value = FakeCinderClient(3) client = cinder.client() self.assertEqual(3, client.client.api_version) def test_cinder_bad_api_version(self): self.override_config('api_version', 1, group='cinder') cinder.validate_config() # Check bad version falls back to latest supported version self.assertEqual(3, main.CONF.cinder.api_version) @mock.patch('sahara.utils.openstack.base.url_for') def test_check_cinder_exists(self, mock_url_for): mock_url_for.return_value = None self.assertTrue(cinder.check_cinder_exists()) mock_url_for.reset_mock() mock_url_for.side_effect = keystone_exceptions.EndpointNotFound() self.assertFalse(cinder.check_cinder_exists()) class FakeCinderClient(object): def __init__(self, api_version): class FakeCinderHTTPClient(object): def __init__(self, api_version): self.api_version = api_version self.client = FakeCinderHTTPClient(api_version) sahara-12.0.0/sahara/tests/unit/utils/test_api_validator.py0000664000175000017500000002472213656752032024042 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis 
Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import jsonschema from oslo_utils import uuidutils import testtools from sahara.utils import api_validator def _validate(schema, data): validator = api_validator.ApiValidator(schema) validator.validate(data) class ApiValidatorTest(testtools.TestCase): def _validate_success(self, schema, data): return _validate(schema, data) def _validate_failure(self, schema, data): self.assertRaises(jsonschema.ValidationError, _validate, schema, data) def test_validate_required(self): schema = { "type": "object", "properties": { "prop-1": { "type": "string", }, }, } self._validate_success(schema, { "prop-1": "asd", }) self._validate_success(schema, { "prop-2": "asd", }) schema["required"] = ["prop-1"] self._validate_success(schema, { "prop-1": "asd", }) self._validate_failure(schema, { "prop-2": "asd", }) def test_validate_additionalProperties(self): schema = { "type": "object", "properties": { "prop-1": { "type": "string", }, }, "required": ["prop-1"] } self._validate_success(schema, { "prop-1": "asd", }) self._validate_success(schema, { "prop-1": "asd", "prop-2": "asd", }) schema["additionalProperties"] = True self._validate_success(schema, { "prop-1": "asd", }) self._validate_success(schema, { "prop-1": "asd", "prop-2": "asd", }) schema["additionalProperties"] = False self._validate_success(schema, { "prop-1": "asd", }) self._validate_failure(schema, { "prop-1": "asd", "prop-2": "asd", }) def test_validate_string(self): schema = { "type": "string", } self._validate_success(schema, "asd") self._validate_success(schema, "") self._validate_failure(schema, 1) self._validate_failure(schema, 1.5) self._validate_failure(schema, True) def test_validate_string_with_length(self): schema = { "type": "string", "minLength": 1, "maxLength": 10, } self._validate_success(schema, "a") self._validate_success(schema, "a" * 10) self._validate_failure(schema, "") self._validate_failure(schema, "a" * 11) def test_validate_integer(self): schema = { 'type': 'integer', } self._validate_success(schema, 0) self._validate_success(schema, 1) self._validate_failure(schema, "1") self._validate_failure(schema, "a") self._validate_failure(schema, True) def test_validate_integer_w_range(self): schema = { 'type': 'integer', 'minimum': 1, 'maximum': 10, } self._validate_success(schema, 1) self._validate_success(schema, 10) self._validate_failure(schema, 0) self._validate_failure(schema, 11) def test_validate_uuid(self): schema = { "type": "string", "format": "uuid", } id = uuidutils.generate_uuid() self._validate_success(schema, id) self._validate_success(schema, id.replace("-", "")) def test_validate_valid_name(self): schema = { "type": "string", "format": "valid_name", } self._validate_success(schema, "abcd") self._validate_success(schema, "abcd123") self._validate_success(schema, "abcd-123") self._validate_success(schema, "abcd_123") self._validate_failure(schema, "_123") self._validate_success(schema, "a" * 64) self._validate_failure(schema, "") self._validate_success(schema, 
"hadoop-examples-2.6.0.jar") self._validate_success(schema, "hadoop-examples-2.6.0") self._validate_success(schema, "hadoop-examples-2.6.0.") self._validate_success(schema, "1") self._validate_success(schema, "1a") self._validate_success(schema, "a1") self._validate_success(schema, "A1") self._validate_success(schema, "A1B") self._validate_success(schema, "a.b") self._validate_success(schema, "a..b") self._validate_success(schema, "a._.b") self._validate_success(schema, "a_") self._validate_success(schema, "a-b-001") self._validate_failure(schema, "-aaaa-bbbb") self._validate_failure(schema, ".aaaa-bbbb") self._validate_failure(schema, None) self._validate_failure(schema, 1) self._validate_failure(schema, ["1"]) def test_validate_valid_keypair_name(self): schema = { "type": "string", "format": "valid_keypair_name", } self._validate_success(schema, "abcd") self._validate_success(schema, "abcd123") self._validate_success(schema, "abcd-123") self._validate_success(schema, "abcd_123") self._validate_success(schema, "_123") self._validate_success(schema, "a" * 64) self._validate_failure(schema, "") self._validate_failure(schema, "hadoop-examples-2.6.0.jar") self._validate_failure(schema, "hadoop-examples-2.6.0") self._validate_failure(schema, "hadoop-examples-2.6.0.") self._validate_success(schema, "1") self._validate_success(schema, "1a") self._validate_success(schema, "a1") self._validate_success(schema, "A1") self._validate_success(schema, "A1B") self._validate_failure(schema, "a.b") self._validate_failure(schema, "a..b") self._validate_failure(schema, "a._.b") self._validate_success(schema, "a_") self._validate_success(schema, "a-b-001") self._validate_success(schema, "-aaaa-bbbb") self._validate_success(schema, "-aaaa bbbb") self._validate_success(schema, " -aaaa bbbb") self._validate_failure(schema, ".aaaa-bbbb") self._validate_failure(schema, None) self._validate_failure(schema, 1) self._validate_failure(schema, ["1"]) def test_validate_valid_name_hostname(self): schema = { "type": "string", "format": "valid_name_hostname", "minLength": 1, } self._validate_success(schema, "abcd") self._validate_success(schema, "abcd123") self._validate_success(schema, "abcd-123") self._validate_failure(schema, "abcd_123") self._validate_failure(schema, "_123") self._validate_success(schema, "a" * 64) self._validate_failure(schema, "") self._validate_failure(schema, "hadoop-examples-2.6.0.jar") self._validate_failure(schema, "hadoop-examples-2.6.0") self._validate_failure(schema, "hadoop-examples-2.6.0.") self._validate_failure(schema, "1") self._validate_failure(schema, "1a") self._validate_success(schema, "a1") self._validate_success(schema, "A1") self._validate_success(schema, "A1B") self._validate_success(schema, "aB") self._validate_success(schema, "a.b") self._validate_failure(schema, "a..b") self._validate_failure(schema, "a._.b") self._validate_failure(schema, "a_") self._validate_success(schema, "a-b-001") self._validate_failure(schema, None) self._validate_failure(schema, 1) self._validate_failure(schema, ["1"]) def test_validate_hostname(self): schema = { "type": "string", "format": "hostname", } self._validate_success(schema, "abcd") self._validate_success(schema, "abcd123") self._validate_success(schema, "abcd-123") self._validate_failure(schema, "abcd_123") self._validate_failure(schema, "_123") self._validate_failure(schema, "a" * 64) self._validate_failure(schema, "") def test_validate_configs(self): schema = { "type": "object", "properties": { "configs": { "type": "configs", } }, 
"additionalProperties": False } self._validate_success(schema, { "configs": { "at-1": { "c-1": "c", "c-2": 1, "c-3": True, }, "at-2": { "c-4": "c", "c-5": 1, "c-6": True, }, }, }) self._validate_failure(schema, { "configs": { "at-1": { "c-1": 1.5 }, } }) self._validate_failure(schema, { "configs": { 1: { "c-1": "c" }, } }) self._validate_failure(schema, { "configs": { "at-1": { 1: "asd", }, } }) self._validate_failure(schema, { "configs": { "at-1": [ "a", "b", "c", ], } }) def test_validate_flavor(self): schema = { 'type': "flavor", } self._validate_success(schema, 0) self._validate_success(schema, 1) self._validate_success(schema, "0") self._validate_success(schema, "1") self._validate_success(schema, uuidutils.generate_uuid()) self._validate_failure(schema, True) self._validate_failure(schema, 0.1) self._validate_failure(schema, "0.1") self._validate_failure(schema, "asd") sahara-12.0.0/sahara/tests/unit/utils/test_proxy.py0000664000175000017500000001326313656752032022403 0ustar zuulzuul00000000000000# Copyright (c) 2014 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from unittest import mock from oslo_utils import uuidutils from sahara.service.edp import job_utils from sahara.tests.unit import base from sahara.utils import proxy as p class TestProxyUtils(base.SaharaWithDbTestCase): def setUp(self): super(TestProxyUtils, self).setUp() @mock.patch('sahara.service.castellan.utils.store_secret') @mock.patch('sahara.context.ctx') @mock.patch('sahara.conductor.API.job_execution_update') @mock.patch('sahara.service.trusts.create_trust') @mock.patch('sahara.utils.openstack.keystone.auth_for_proxy') @mock.patch('sahara.utils.openstack.keystone.auth') @mock.patch('sahara.utils.proxy.proxy_user_create') def test_create_proxy_user_for_job_execution(self, proxy_user, trustor, trustee, trust, job_execution_update, context_current, passwd): job_execution = mock.Mock(id=1, output_id=2, job_id=3, job_configs=None) job_execution.job_configs = mock.Mock(to_dict=mock.Mock( return_value={} )) proxy_user.return_value = "proxy_user" passwd.return_value = "test_password" trustor.return_value = "test_trustor" trustee.return_value = "test_trustee" trust.return_value = "123456" ctx = mock.Mock() context_current.return_value = ctx p.create_proxy_user_for_job_execution(job_execution) update = {'job_configs': {'proxy_configs': None}} update['job_configs']['proxy_configs'] = { 'proxy_username': 'job_1', 'proxy_password': 'test_password', 'proxy_trust_id': '123456' } job_execution_update.assert_called_with(ctx, job_execution, update) @mock.patch('sahara.conductor.API.job_get') @mock.patch('sahara.conductor.API.data_source_get') @mock.patch('sahara.conductor.API.data_source_count') @mock.patch('sahara.context.ctx') def test_job_execution_requires_proxy_user(self, ctx, data_source_count, data_source, job): self.override_config('use_domain_for_proxy_users', True) job_execution = mock.Mock(input_id=1, output_id=2, job_id=3, job_configs={}) data_source.return_value = mock.Mock(url='swift://container/object') 
self.assertTrue(p.job_execution_requires_proxy_user(job_execution)) data_source.return_value = mock.Mock(url='') job.return_value = mock.Mock( mains=[mock.Mock(url='swift://container/object')]) self.assertTrue(p.job_execution_requires_proxy_user(job_execution)) job.return_value = mock.Mock( mains=[], libs=[mock.Mock(url='swift://container/object')]) self.assertTrue(p.job_execution_requires_proxy_user(job_execution)) job_execution.job_configs = {'args': ['swift://container/object']} job.return_value = mock.Mock( mains=[], libs=[]) self.assertTrue(p.job_execution_requires_proxy_user(job_execution)) job_execution.job_configs = { 'configs': {'key': 'swift://container/object'}} self.assertTrue(p.job_execution_requires_proxy_user(job_execution)) job_execution.job_configs = { 'params': {'key': 'swift://container/object'}} self.assertTrue(p.job_execution_requires_proxy_user(job_execution)) data_source_count.return_value = 0 job_execution.job_configs = { 'configs': {job_utils.DATA_SOURCE_SUBST_NAME: True}} job.return_value = mock.Mock( mains=[], libs=[]) self.assertFalse(p.job_execution_requires_proxy_user(job_execution)) ctx.return_value = 'dummy' data_source_count.return_value = 1 job_execution.job_configs = { 'configs': {job_utils.DATA_SOURCE_SUBST_NAME: True}, 'args': [job_utils.DATA_SOURCE_PREFIX+'somevalue']} self.assertTrue(p.job_execution_requires_proxy_user(job_execution)) data_source_count.assert_called_with('dummy', name=('somevalue',), url='swift://%') data_source_count.reset_mock() data_source_count.return_value = 1 myid = uuidutils.generate_uuid() job_execution.job_configs = { 'configs': {job_utils.DATA_SOURCE_SUBST_UUID: True}, 'args': [myid]} job.return_value = mock.Mock( mains=[], libs=[]) self.assertTrue(p.job_execution_requires_proxy_user(job_execution)) data_source_count.assert_called_with('dummy', id=(myid,), url='swift://%') sahara-12.0.0/sahara/tests/unit/utils/test_rpc.py0000664000175000017500000000700413656752032022002 0ustar zuulzuul00000000000000# Copyright (c) 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from unittest import mock from sahara import main from sahara.tests.unit import base from sahara.utils import rpc as messaging class TestMessagingSetup(base.SaharaTestCase): def setUp(self): super(TestMessagingSetup, self).setUp() self.patchers = [] notifier_init_patch = mock.patch('oslo_messaging.Notifier') self.notifier_init = notifier_init_patch.start() self.patchers.append(notifier_init_patch) get_notif_transp_patch = mock.patch( 'oslo_messaging.get_notification_transport') self.get_notify_transport = get_notif_transp_patch.start() self.patchers.append(get_notif_transp_patch) get_transport_patch = mock.patch('oslo_messaging.get_rpc_transport') self.get_transport = get_transport_patch.start() self.patchers.append(get_transport_patch) set_def_patch = mock.patch('oslo_messaging.set_transport_defaults') self.set_transport_def = set_def_patch.start() self.patchers.append(set_def_patch) def tearDown(self): messaging.NOTIFICATION_TRANSPORT = None messaging.MESSAGING_TRANSPORT = None messaging.NOTIFIER = None for patch in reversed(self.patchers): patch.stop() super(TestMessagingSetup, self).tearDown() def test_set_defaults(self): messaging.setup('distributed') self.assertIsNotNone(messaging.MESSAGING_TRANSPORT) self.assertIsNotNone(messaging.NOTIFICATION_TRANSPORT) self.assertIsNotNone(messaging.NOTIFIER) expected = [ mock.call('sahara') ] self.assertEqual(expected, self.set_transport_def.call_args_list) self.assertEqual( [mock.call(main.CONF)], self.get_transport.call_args_list) self.assertEqual( [mock.call(main.CONF)], self.get_notify_transport.call_args_list) self.assertEqual(1, self.notifier_init.call_count) def test_fallback(self): self.get_notify_transport.side_effect = ValueError() messaging.setup('distributed') self.assertIsNotNone(messaging.MESSAGING_TRANSPORT) self.assertIsNotNone(messaging.NOTIFICATION_TRANSPORT) self.assertEqual( messaging.MESSAGING_TRANSPORT, messaging.NOTIFICATION_TRANSPORT) self.assertIsNotNone(messaging.NOTIFIER) expected = [ mock.call('sahara') ] self.assertEqual(expected, self.set_transport_def.call_args_list) self.assertEqual( [mock.call(main.CONF)], self.get_transport.call_args_list) self.assertEqual( [mock.call(main.CONF)], self.get_notify_transport.call_args_list) self.assertEqual(1, self.notifier_init.call_count) def test_only_notifications(self): messaging.setup('all-in-one') self.assertEqual(0, self.get_transport.call_count) self.assertEqual(1, self.get_notify_transport.call_count) sahara-12.0.0/sahara/tests/unit/utils/test_xml_utils.py0000664000175000017500000001563613656752032023250 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
import xml.dom.minidom as xml import pkg_resources as pkg import testtools from sahara.utils import xmlutils as x from sahara import version class XMLUtilsTestCase(testtools.TestCase): def setUp(self): super(XMLUtilsTestCase, self).setUp() def test_load_xml_defaults(self): self.assertEqual( [{'name': u'name1', 'value': u'value1', 'description': 'descr1'}, {'name': u'name2', 'value': u'value2', 'description': 'descr2'}, {'name': u'name3', 'value': '', 'description': 'descr3'}, {'name': u'name4', 'value': '', 'description': 'descr4'}, {'name': u'name5', 'value': u'value5', 'description': ''}], x.load_hadoop_xml_defaults( 'tests/unit/resources/test-default.xml')) def test_parse_xml_with_name_and_value(self): file_name = 'tests/unit/resources/test-default.xml' fname = pkg.resource_filename( version.version_info.package, file_name) with open(fname, "r") as f: doc = "".join(line.strip() for line in f) self.assertEqual( [{'name': u'name1', 'value': u'value1'}, {'name': u'name2', 'value': u'value2'}, {'name': u'name3', 'value': ''}, {'name': u'name4', 'value': ''}, {'name': u'name5', 'value': u'value5'}], x.parse_hadoop_xml_with_name_and_value(doc) ) def test_adjust_description(self): self.assertEqual("", x._adjust_field("\n")) self.assertEqual("", x._adjust_field("\n ")) self.assertEqual("abcdef", x._adjust_field(u"abc\n def\n ")) self.assertEqual("abc de f", x._adjust_field("abc d\n e f\n")) self.assertEqual("abc", x._adjust_field("a\tb\t\nc")) def test_create_hadoop_xml(self): conf = x.load_hadoop_xml_defaults( 'tests/unit/resources/test-default.xml') self.assertEqual(""" name1 some_val1 name2 2 """, x.create_hadoop_xml({'name1': 'some_val1', 'name2': 2}, conf),) def test_add_property_to_configuration(self): doc = self.create_default_doc() x.add_properties_to_configuration(doc, 'test', {'': 'empty1', None: 'empty2'}) self.assertEqual(""" """, doc.toprettyxml(indent=" ")) test_conf = {'name1': 'value1', 'name2': 'value2'} x.add_properties_to_configuration(doc, 'test', test_conf) self.assertEqual(""" name1 value1 name2 value2 """, doc.toprettyxml(indent=" ")) x.add_property_to_configuration(doc, 'name3', 'value3') self.assertEqual(""" name1 value1 name2 value2 name3 value3 """, doc.toprettyxml(indent=" ")) def test_get_if_not_exist_and_add_text_element(self): doc = self.create_default_doc() x.get_and_create_if_not_exist(doc, 'test', 'tag_to_add') self.assertEqual(""" """, doc.toprettyxml(indent=" ")) x.add_text_element_to_tag(doc, 'tag_to_add', 'p', 'v') self.assertEqual("""
<?xml version="1.0" ?>
<test>
  <tag_to_add>
    <p>v</p>
  </tag_to_add>
</test>
""", doc.toprettyxml(indent=" ")) def test_get_if_not_exist_and_add_to_element(self): doc = self.create_default_doc() elem = x.get_and_create_if_not_exist(doc, 'test', 'tag_to_add') x.add_text_element_to_element(doc, elem, 'p', 'v') self.assertEqual("""
<?xml version="1.0" ?>
<test>
  <tag_to_add>
    <p>v</p>
  </tag_to_add>
</test>
""", doc.toprettyxml(indent=" ")) def test_add_tagged_list(self): doc = self.create_default_doc() x.add_tagged_list(doc, 'test', 'list_item', ['a', 'b']) self.assertEqual(""" a b """, doc.toprettyxml(indent=" ")) def test_add_equal_separated_dict(self): doc = self.create_default_doc() x.add_equal_separated_dict(doc, 'test', 'dict_item', {'': 'empty1', None: 'empty2'}) self.assertEqual(""" """, doc.toprettyxml(indent=" ")) x.add_equal_separated_dict(doc, 'test', 'dict_item', {'a': 'b', 'c': 'd'}) self.assertEqual(""" a=b c=d """, doc.toprettyxml(indent=" ")) def create_default_doc(self): doc = xml.Document() test = doc.createElement('test') doc.appendChild(test) return doc def _get_xml_text(self, strip): doc = x.load_xml_document("service/edp/resources/workflow.xml", strip) x.add_child(doc, 'action', 'java') x.add_text_element_to_tag(doc, 'java', 'sometag', 'somevalue') return doc.toprettyxml(indent=" ").split("\n") def test_load_xml_document_strip(self): # Get the lines from the xml docs stripped = set(self._get_xml_text(True)) unstripped = set(self._get_xml_text(False)) # Prove they're different diff = stripped.symmetric_difference(unstripped) self.assertGreater(len(diff), 0) # Prove the differences are only blank lines non_blank_diffs = [l for l in diff if not l.isspace()] self.assertEqual(0, len(non_blank_diffs)) sahara-12.0.0/sahara/tests/unit/utils/test_cluster_progress_ops.py0000664000175000017500000002103513656752032025504 0ustar zuulzuul00000000000000# Copyright (c) 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from unittest import mock from oslo_utils import uuidutils from sahara import conductor from sahara import context from sahara.tests.unit import base from sahara.tests.unit.conductor import test_api from sahara.utils import cluster_progress_ops as cpo class FakeInstance(object): def __init__(self): self.id = uuidutils.generate_uuid() self.name = uuidutils.generate_uuid() self.cluster_id = uuidutils.generate_uuid() self.node_group_id = uuidutils.generate_uuid() self.instance_id = uuidutils.generate_uuid() self.instance_name = uuidutils.generate_uuid() class ClusterProgressOpsTest(base.SaharaWithDbTestCase): def setUp(self): super(ClusterProgressOpsTest, self).setUp() self.api = conductor.API def _make_sample(self): ctx = context.ctx() cluster = self.api.cluster_create(ctx, test_api.SAMPLE_CLUSTER) return ctx, cluster def test_update_provisioning_steps(self): ctx, cluster = self._make_sample() step_id1 = self.api.cluster_provision_step_add(ctx, cluster.id, { "step_name": "some_name1", "total": 2, }) self.api.cluster_event_add(ctx, step_id1, { "event_info": "INFO", "successful": True }) self.api.cluster_provision_progress_update(ctx, cluster.id) # check that we have correct provision step result_cluster = self.api.cluster_get(ctx, cluster.id) result_step = result_cluster.provision_progress[0] self.assertIsNone(result_step.successful) # check updating in case of successful provision step self.api.cluster_event_add(ctx, step_id1, { "event_info": "INFO", "successful": True }) self.api.cluster_provision_progress_update(ctx, cluster.id) result_cluster = self.api.cluster_get(ctx, cluster.id) result_step = result_cluster.provision_progress[0] self.assertTrue(result_step.successful) # check updating in case of failed provision step step_id2 = self.api.cluster_provision_step_add(ctx, cluster.id, { "step_name": "some_name1", "total": 2, }) self.api.cluster_event_add(ctx, step_id2, { "event_info": "INFO", "successful": False, }) self.api.cluster_provision_progress_update(ctx, cluster.id) result_cluster = self.api.cluster_get(ctx, cluster.id) for step in result_cluster.provision_progress: if step.id == step_id2: self.assertFalse(step.successful) # check that it's possible to add provision step after failed step step_id3 = cpo.add_provisioning_step(cluster.id, "some_name", 2) self.assertEqual( step_id3, cpo.get_current_provisioning_step(cluster.id)) def test_get_cluster_events(self): ctx, cluster = self._make_sample() step_id1 = self.api.cluster_provision_step_add(ctx, cluster.id, { 'step_name': "some_name1", 'total': 3, }) step_id2 = self.api.cluster_provision_step_add(ctx, cluster.id, { 'step_name': "some_name", 'total': 2, }) self.api.cluster_event_add(ctx, step_id1, { "event_info": "INFO", 'successful': True, }) self.api.cluster_event_add(ctx, step_id2, { "event_info": "INFO", 'successful': True, }) cluster = self.api.cluster_get(context.ctx(), cluster.id, True) for step in cluster.provision_progress: self.assertEqual(1, len(step.events)) def _make_checks(self, instance_info, sleep=True): ctx = context.ctx() if sleep: context.sleep(2) current_instance_info = ctx.current_instance_info self.assertEqual(instance_info, current_instance_info) def test_instance_context_manager(self): fake_instances = [FakeInstance() for _ in range(50)] # check that InstanceContextManager works fine sequentially for instance in fake_instances: info = context.InstanceInfo( None, instance.id, instance.name, None) with context.InstanceInfoManager(info): self._make_checks(info, sleep=False) # check that InstanceContextManager 
works fine in parallel with context.ThreadGroup() as tg: for instance in fake_instances: info = context.InstanceInfo( None, instance.id, instance.name, None) with context.InstanceInfoManager(info): tg.spawn("make_checks", self._make_checks, info) @cpo.event_wrapper(True) def _do_nothing(self): pass @mock.patch('sahara.utils.cluster_progress_ops._find_in_args') @mock.patch('sahara.utils.cluster.check_cluster_exists') def test_event_wrapper(self, p_check_cluster_exists, p_find): self.override_config("disable_event_log", True) self._do_nothing() self.assertEqual(0, p_find.call_count) self.override_config("disable_event_log", False) p_find.return_value = FakeInstance() p_check_cluster_exists.return_value = False self._do_nothing() self.assertEqual(1, p_find.call_count) self.assertEqual(1, p_check_cluster_exists.call_count) def test_cluster_get_with_events(self): ctx, cluster = self._make_sample() step_id = cpo.add_provisioning_step(cluster.id, "Some name", 3) self.api.cluster_event_add(ctx, step_id, { 'event_info': "INFO", 'successful': True}) cluster = self.api.cluster_get(ctx, cluster.id, True) steps = cluster.provision_progress step = steps[0] self.assertEqual("Some name", step.step_name) self.assertEqual(3, step.total) self.assertEqual("INFO", step.events[0].event_info) @mock.patch('sahara.context.ctx') @mock.patch( 'sahara.utils.cluster_progress_ops.get_current_provisioning_step', return_value='step_id') @mock.patch('sahara.utils.cluster_progress_ops.conductor') def test_add_successful_event(self, conductor, get_step, ctx): instance = FakeInstance() self.override_config("disable_event_log", True) cpo.add_successful_event(instance) self.assertEqual(0, conductor.cluster_event_add.call_count) self.override_config("disable_event_log", False) cpo.add_successful_event(instance) self.assertEqual(1, conductor.cluster_event_add.call_count) args, kwargs = conductor.cluster_event_add.call_args self.assertEqual('step_id', args[1]) req_dict = { 'successful': True, 'node_group_id': instance.node_group_id, 'instance_id': instance.instance_id, 'instance_name': instance.instance_name, 'event_info': None, } self.assertEqual(req_dict, args[2]) @mock.patch('sahara.context.ctx') @mock.patch( 'sahara.utils.cluster_progress_ops.get_current_provisioning_step', return_value='step_id') @mock.patch('sahara.utils.cluster_progress_ops.conductor') def test_add_fail_event(self, conductor, get_step, ctx): instance = FakeInstance() self.override_config("disable_event_log", True) cpo.add_fail_event(instance, Exception('oops')) self.assertEqual(0, conductor.cluster_event_add.call_count) self.override_config("disable_event_log", False) cpo.add_fail_event(instance, Exception('oops')) self.assertEqual(1, conductor.cluster_event_add.call_count) args, kwargs = conductor.cluster_event_add.call_args self.assertEqual('step_id', args[1]) req_dict = { 'successful': False, 'node_group_id': instance.node_group_id, 'instance_id': instance.instance_id, 'instance_name': instance.instance_name, 'event_info': 'oops', } self.assertEqual(req_dict, args[2]) sahara-12.0.0/sahara/tests/unit/utils/test_poll_utils.py0000664000175000017500000001241113656752032023402 0ustar zuulzuul00000000000000# Copyright (c) 2015 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from unittest import mock import six import testtools from sahara import context from sahara.tests.unit import base from sahara.utils import poll_utils class FakeCluster(object): def __init__(self, cluster_configs): self.cluster_configs = cluster_configs class FakeOption(object): def __init__(self, default_value, section, name): self.default_value = default_value self.name = name self.applicable_target = section class TestPollUtils(base.SaharaTestCase): def setUp(self): super(TestPollUtils, self).setUp() context.sleep = mock.Mock() @mock.patch('sahara.utils.poll_utils.LOG.debug') def test_poll_success(self, logger): poll_utils.poll(**{'get_status': lambda: True, 'kwargs': {}, 'timeout': 5, 'sleep': 3}) expected_call = mock.call( 'Operation was executed successfully in timeout 5') self.assertEqual(1, logger.call_count) self.assertEqual([expected_call], logger.call_args_list) @mock.patch('sahara.utils.poll_utils._get_consumed') def test_poll_failed_first_scenario(self, get_consumed): get_consumed.return_value = 0 message = "" try: poll_utils.poll( **{'get_status': lambda: False, 'kwargs': {}, 'timeout': 0, 'sleep': 3}) except Exception as e: message = six.text_type(e) if message.find('Error ID') != -1: message = message.split("\n")[0] expected_message = "'Operation' timed out after 0 second(s)" self.assertEqual(expected_message, message) @mock.patch('sahara.utils.poll_utils._get_consumed') def test_poll_failed_second_scenario(self, get_consumed): get_consumed.return_value = 0 message = "" try: poll_utils.poll( **{'get_status': lambda: False, 'kwargs': {}, 'timeout': 0, 'sleep': 3, 'timeout_name': "some timeout"}) except Exception as e: message = six.text_type(e) if message.find('Error ID') != -1: message = message.split("\n")[0] expected_message = ("'Operation' timed out after 0 second(s) and " "following timeout was violated: some timeout") self.assertEqual(expected_message, message) @mock.patch('sahara.utils.poll_utils.LOG.debug') @mock.patch('sahara.utils.cluster.check_cluster_exists') def test_plugin_poll_first_scenario(self, cluster_exists, logger): cluster_exists.return_value = True fake_get_status = mock.Mock() fake_get_status.side_effect = [False, False, True] fake_kwargs = {'kwargs': {'cat': 'tom', 'bond': 'james bond'}} poll_utils.plugin_option_poll( FakeCluster({}), fake_get_status, FakeOption(5, 'target', 'name'), 'fake_operation', 5, **fake_kwargs) expected_call = mock.call('Operation with name fake_operation was ' 'executed successfully in timeout 5') self.assertEqual([expected_call], logger.call_args_list) @mock.patch('sahara.utils.poll_utils.LOG.debug') @mock.patch('sahara.utils.cluster.check_cluster_exists') def test_plugin_poll_second_scenario(self, cluster_exists, logger): cluster_exists.return_value = False fake_get_status = mock.Mock() fake_get_status.side_effect = [False, False, True] fake_kwargs = {'kwargs': {'cat': 'tom', 'bond': 'james bond'}} poll_utils.plugin_option_poll( FakeCluster({'target': {'name': 7}}), fake_get_status, FakeOption(5, 'target', 'name'), 'fake_operation', 5, **fake_kwargs) expected_call = mock.call('Operation with name fake_operation was ' 
'executed successfully in timeout 7') self.assertEqual([expected_call], logger.call_args_list) def test_poll_exception_strategy_first_scenario(self): fake_get_status = mock.Mock() fake_get_status.side_effect = [False, ValueError()] with testtools.ExpectedException(ValueError): poll_utils.poll(fake_get_status) def test_poll_exception_strategy_second_scenario(self): fake_get_status = mock.Mock() fake_get_status.side_effect = [False, ValueError()] poll_utils.poll(fake_get_status, exception_strategy='mark_as_true') self.assertEqual(2, fake_get_status.call_count) def test_poll_exception_strategy_third_scenario(self): fake_get_status = mock.Mock() fake_get_status.side_effect = [False, ValueError(), True] poll_utils.poll(fake_get_status, exception_strategy='mark_as_false') self.assertEqual(3, fake_get_status.call_count) sahara-12.0.0/sahara/tests/unit/utils/test_api.py0000664000175000017500000000636113656752032021774 0ustar zuulzuul00000000000000# Copyright (c) 2016 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import copy from unittest import mock import testtools from sahara.utils import api from sahara.utils import types from sahara.utils import wsgi class APIUtilsTest(testtools.TestCase): class FakeCluster(object): def to_dict(self): return {"id": 42, "name": "myFirstCluster"} page = types.Page([FakeCluster()]) response = {"clusters": [ { "id": 42, "name": "myFirstCluster" } ] } @mock.patch('flask.request') @mock.patch('flask.Response') def test_render_pagination(self, flask, request): serializer = wsgi.JSONDictSerializer() request.status_code = 200 api.render(self.page, 'application/json', 200, name='clusters') body = serializer.serialize(self.response) flask.assert_called_with( response=body, status=200, mimetype='application/json') self.page.prev, self.page.next = 35, 49 api.render(self.page, 'application/json', 200, name='clusters') paginate_response = copy.copy(self.response) paginate_response["markers"] = \ {"prev": 35, "next": 49} body = serializer.serialize(paginate_response) flask.assert_called_with( response=body, status=200, mimetype='application/json') self.page.prev, self.page.next = 7, None api.render(self.page, 'application/json', 200, name='clusters') paginate_response = copy.copy(self.response) paginate_response["markers"] = {"prev": 7, "next": None} body = serializer.serialize(paginate_response) flask.assert_called_with( response=body, status=200, mimetype='application/json') self.page.prev, self.page.next = None, 14 api.render(self.page, 'application/json', 200, name='clusters') paginate_response = copy.copy(self.response) paginate_response["markers"] = {"prev": None, "next": 14} body = serializer.serialize(paginate_response) flask.assert_called_with( response=body, status=200, mimetype='application/json') self.page.prev, self.page.next = None, 11 api.render(self.page, 'application/json', 200, name='clusters') paginate_response = copy.copy(self.response) paginate_response["markers"] = \ {"prev": None, "next": 11} body = serializer.serialize(paginate_response) 
flask.assert_called_with( response=body, status=200, mimetype='application/json') self.page.prev, self.page.next = None, 11 api.render(self.page, 'application/json', 200, name='clusters') sahara-12.0.0/sahara/tests/unit/utils/test_configs.py0000664000175000017500000000253713656752032022654 0ustar zuulzuul00000000000000# Copyright (c) 2015 Intel Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import testtools from sahara.utils import configs class ConfigsTestCase(testtools.TestCase): def test_merge_configs(self): a = { 'HDFS': { 'param1': 'value1', 'param2': 'value2' } } b = { 'HDFS': { 'param1': 'value3', 'param3': 'value4' }, 'YARN': { 'param5': 'value5' } } res = configs.merge_configs(a, b) expected = { 'HDFS': { 'param1': 'value3', 'param2': 'value2', 'param3': 'value4' }, 'YARN': { 'param5': 'value5' } } self.assertEqual(expected, res) sahara-12.0.0/sahara/tests/unit/utils/test_types.py0000664000175000017500000000170713656752032022366 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import testtools from sahara.utils import types class TypesTestCase(testtools.TestCase): def test_is_int(self): self.assertTrue(types.is_int('1')) self.assertTrue(types.is_int('0')) self.assertTrue(types.is_int('-1')) self.assertFalse(types.is_int('1.1')) self.assertFalse(types.is_int('ab')) self.assertFalse(types.is_int('')) sahara-12.0.0/sahara/tests/unit/utils/test_heat.py0000664000175000017500000000500413656752032022135 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
import testtools from unittest import mock from sahara import exceptions as ex from sahara.utils.openstack import heat as h def stack(status, upd_time=None): status_reason = status status = status[status.index('_') + 1:] return mock.Mock(status=status, updated_time=upd_time, stack_status_reason=status_reason) class TestClusterStack(testtools.TestCase): @mock.patch('sahara.utils.openstack.heat.get_stack') @mock.patch("sahara.context.sleep", return_value=None) def test_wait_completion(self, sleep, client): cl = mock.Mock(stack_name='cluster') client.side_effect = [stack( 'CREATE_IN_PROGRESS'), stack('CREATE_COMPLETE')] h.wait_stack_completion(cl) self.assertEqual(2, client.call_count) client.side_effect = [ stack('UPDATE_IN_PROGRESS'), stack('UPDATE_COMPLETE')] h.wait_stack_completion(cl) self.assertEqual(4, client.call_count) client.side_effect = [ stack('DELETE_IN_PROGRESS'), stack('DELETE_COMPLETE')] h.wait_stack_completion(cl) self.assertEqual(6, client.call_count) client.side_effect = [ stack('CREATE_COMPLETE'), stack('CREATE_COMPLETE'), stack('UPDATE_IN_PROGRESS'), stack('UPDATE_COMPLETE', 1)] h.wait_stack_completion(cl, is_update=True) self.assertEqual(10, client.call_count) client.side_effect = [stack('UPDATE_COMPLETE'), stack( 'UPDATE_IN_PROGRESS'), stack('UPDATE_COMPLETE', 1)] h.wait_stack_completion(cl, is_update=True) self.assertEqual(13, client.call_count) client.side_effect = [ stack('CREATE_IN_PROGRESS'), stack('CREATE_FAILED')] with testtools.ExpectedException( ex.HeatStackException, value_re=("Heat stack failed with status " "CREATE_FAILED\nError ID: .*")): h.wait_stack_completion(cl) sahara-12.0.0/sahara/tests/unit/utils/openstack/0000775000175000017500000000000013656752227021601 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/utils/openstack/__init__.py0000664000175000017500000000000013656752032023672 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/utils/openstack/test_images.py0000664000175000017500000000675513656752032024466 0ustar zuulzuul00000000000000# Copyright (c) 2015 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from unittest import mock from sahara.tests.unit import base from sahara.utils.openstack import images as sahara_images class FakeImage(object): def __init__(self, name, tags, username): self.name = name self.tags = tags self.username = username class TestImages(base.SaharaTestCase): def setUp(self): super(TestImages, self).setUp() self.override_config('auth_url', 'https://127.0.0.1:8080/v3/', 'trustee') @mock.patch('sahara.utils.openstack.base.url_for', return_value='') def test_list_registered_images(self, url_for_mock): some_images = [ FakeImage('foo', ['bar', 'baz'], 'test'), FakeImage('baz', [], 'test'), FakeImage('spam', [], "")] with mock.patch( 'sahara.utils.openstack.images.SaharaImageManager.list', return_value=some_images): manager = sahara_images.image_manager() images = manager.list_registered() self.assertEqual(2, len(images)) images = manager.list_registered(name='foo') self.assertEqual(1, len(images)) self.assertEqual('foo', images[0].name) self.assertEqual('test', images[0].username) images = manager.list_registered(name='eggs') self.assertEqual(0, len(images)) images = manager.list_registered(tags=['bar']) self.assertEqual(1, len(images)) self.assertEqual('foo', images[0].name) images = manager.list_registered(tags=['bar', 'eggs']) self.assertEqual(0, len(images)) @mock.patch('sahara.utils.openstack.images.SaharaImageManager.set_meta') def test_set_image_info(self, set_meta): with mock.patch('sahara.utils.openstack.base.url_for'): manager = sahara_images.image_manager() manager.set_image_info('id', 'ubuntu') self.assertEqual( ('id', {'_sahara_username': 'ubuntu'}), set_meta.call_args[0]) manager.set_image_info('id', 'ubuntu', 'descr') self.assertEqual( ('id', {'_sahara_description': 'descr', '_sahara_username': 'ubuntu'}), set_meta.call_args[0]) @mock.patch('sahara.utils.openstack.images.SaharaImageManager.get') @mock.patch('sahara.utils.openstack.images.SaharaImageManager.delete_meta') def test_unset_image_info(self, delete_meta, get_image): manager = sahara_images.image_manager() image = mock.MagicMock() image.tags = ['fake', 'fake_2.0'] image.username = 'ubuntu' image.description = 'some description' get_image.return_value = image manager.unset_image_info('id') self.assertEqual( ('id', ['_sahara_tag_fake', '_sahara_tag_fake_2.0', '_sahara_description', '_sahara_username']), delete_meta.call_args[0]) sahara-12.0.0/sahara/tests/unit/utils/openstack/test_swift.py0000664000175000017500000000323413656752032024342 0ustar zuulzuul00000000000000# Copyright (c) 2015 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from unittest import mock from sahara.tests.unit import base as testbase from sahara.utils.openstack import swift class SwiftClientTest(testbase.SaharaTestCase): @mock.patch('sahara.swift.swift_helper.retrieve_tenant') @mock.patch('sahara.swift.utils.retrieve_auth_url') @mock.patch('swiftclient.Connection') def test_client(self, swift_connection, retrieve_auth_url, retrieve_tenant): swift.client('testuser', '12345') self.assertEqual(1, swift_connection.call_count) @mock.patch('sahara.utils.openstack.base.url_for') @mock.patch('swiftclient.Connection') @mock.patch('sahara.utils.openstack.keystone.token_from_auth') @mock.patch('sahara.utils.openstack.keystone.auth_for_proxy') def test_client_with_trust(self, auth_for_proxy, token_from_auth, swift_connection, url_for): swift.client('testuser', '12345', 'test_trust') self.assertEqual(1, auth_for_proxy.call_count) self.assertEqual(1, token_from_auth.call_count) self.assertEqual(1, swift_connection.call_count) sahara-12.0.0/sahara/tests/unit/utils/openstack/test_base.py0000664000175000017500000002513113656752032024120 0ustar zuulzuul00000000000000# Copyright (c) 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from unittest import mock from cinderclient import exceptions as cinder_exc from heatclient import exc as heat_exc from keystoneauth1 import exceptions as keystone_exc from neutronclient.common import exceptions as neutron_exc from novaclient import exceptions as nova_exc from sahara import exceptions as sahara_exc from sahara.tests.unit import base as testbase from sahara.utils.openstack import base class TestBase(testbase.SaharaTestCase): def test_url_for_regions(self): service_catalog = ( '[{"endpoints": ' ' [{"adminURL": "http://192.168.0.5:8774/v2", ' ' "region": "RegionOne", ' ' "id": "83d12c9ad2d647ecab7cbe91adb8666b", ' ' "internalURL": "http://192.168.0.5:8774/v2", ' ' "publicURL": "http://172.18.184.5:8774/v2"}, ' ' {"adminURL": "http://192.168.0.6:8774/v2", ' ' "region": "RegionTwo", ' ' "id": "07c5a555176246c783d8f0497c98537b", ' ' "internalURL": "http://192.168.0.6:8774/v2", ' ' "publicURL": "http://172.18.184.6:8774/v2"}], ' ' "endpoints_links": [], ' ' "type": "compute", ' ' "name": "nova"}]') self.override_config("os_region_name", "RegionOne") self.assertEqual("http://192.168.0.5:8774/v2", base.url_for(service_catalog, "compute")) self.override_config("os_region_name", "RegionTwo") self.assertEqual("http://192.168.0.6:8774/v2", base.url_for(service_catalog, "compute")) class AuthUrlTest(testbase.SaharaTestCase): def test_retrieve_auth_url_api_v3(self): self.override_config('use_identity_api_v3', True) correct = "https://127.0.0.1:8080/v3" def _assert(uri): self.override_config('auth_url', uri, 'trustee') self.assertEqual(correct, base.retrieve_auth_url()) _assert("%s/" % correct) _assert("https://127.0.0.1:8080") _assert("https://127.0.0.1:8080/") _assert("https://127.0.0.1:8080/v2.0") _assert("https://127.0.0.1:8080/v2.0/") _assert("https://127.0.0.1:8080/v3") _assert("https://127.0.0.1:8080/v3/") 
@mock.patch("sahara.utils.openstack.base.url_for") def test_retrieve_auth_url_api_v3_without_port(self, mock_url_for): self.override_config('use_identity_api_v3', True) self.setup_context(service_catalog=True) correct = "https://127.0.0.1/v3" def _assert(uri): mock_url_for.return_value = uri self.assertEqual(correct, base.retrieve_auth_url()) _assert("%s/" % correct) _assert("https://127.0.0.1") _assert("https://127.0.0.1/") _assert("https://127.0.0.1/v2.0") _assert("https://127.0.0.1/v2.0/") _assert("https://127.0.0.1/v3") _assert("https://127.0.0.1/v3/") @mock.patch("sahara.utils.openstack.base.url_for") def test_retrieve_auth_url_api_v3_path_present(self, mock_url_for): self.override_config('use_identity_api_v3', True) self.setup_context(service_catalog=True) correct = "https://127.0.0.1/identity/v3" def _assert(uri): mock_url_for.return_value = uri self.assertEqual(correct, base.retrieve_auth_url()) _assert("%s" % correct) _assert("%s/" % correct) _assert("https://127.0.0.1/identity") _assert("https://127.0.0.1/identity/") def test_retrieve_auth_url_api_v20(self): self.override_config('use_identity_api_v3', False) correct = "https://127.0.0.1:8080/v2.0" def _assert(uri): self.override_config('auth_url', uri, 'trustee') self.assertEqual(correct, base.retrieve_auth_url()) _assert("%s/" % correct) _assert("https://127.0.0.1:8080") _assert("https://127.0.0.1:8080/") _assert("https://127.0.0.1:8080/v2.0") _assert("https://127.0.0.1:8080/v2.0/") _assert("https://127.0.0.1:8080/v3") _assert("https://127.0.0.1:8080/v3/") @mock.patch("sahara.utils.openstack.base.url_for") def test_retrieve_auth_url_api_v20_without_port(self, mock_url_for): self.override_config('use_identity_api_v3', False) self.setup_context(service_catalog=True) correct = "https://127.0.0.1/v2.0" def _assert(uri): mock_url_for.return_value = uri self.assertEqual(correct, base.retrieve_auth_url()) _assert("%s/" % correct) _assert("https://127.0.0.1") _assert("https://127.0.0.1/") _assert("https://127.0.0.1/v2.0") _assert("https://127.0.0.1/v2.0/") _assert("https://127.0.0.1/v3") _assert("https://127.0.0.1/v3/") class ExecuteWithRetryTest(testbase.SaharaTestCase): def setUp(self): super(ExecuteWithRetryTest, self).setUp() self.fake_client_call = mock.MagicMock() self.fake_client_call.__name__ = 'fake_client_call' self.override_config('retries_number', 2, 'retries') @mock.patch('sahara.context.sleep') def _check_error_without_retry(self, error, code, m_sleep): self.fake_client_call.side_effect = error(code) self.assertRaises(error, base.execute_with_retries, self.fake_client_call) self.assertEqual(1, self.fake_client_call.call_count) self.fake_client_call.reset_mock() @mock.patch('sahara.context.sleep') def _check_error_with_retry(self, error, code, m_sleep): self.fake_client_call.side_effect = error(code) self.assertRaises(sahara_exc.MaxRetriesExceeded, base.execute_with_retries, self.fake_client_call) self.assertEqual(3, self.fake_client_call.call_count) self.fake_client_call.reset_mock() def test_novaclient_calls_without_retry(self): # check that following errors will not be retried self._check_error_without_retry(nova_exc.BadRequest, 400) self._check_error_without_retry(nova_exc.Unauthorized, 401) self._check_error_without_retry(nova_exc.Forbidden, 403) self._check_error_without_retry(nova_exc.NotFound, 404) self._check_error_without_retry(nova_exc.MethodNotAllowed, 405) self._check_error_without_retry(nova_exc.Conflict, 409) self._check_error_without_retry(nova_exc.HTTPNotImplemented, 501) def 
test_novaclient_calls_with_retry(self): # check that following errors will be retried self._check_error_with_retry(nova_exc.OverLimit, 413) self._check_error_with_retry(nova_exc.RateLimit, 429) def test_cinderclient_calls_without_retry(self): # check that following errors will not be retried self._check_error_without_retry(cinder_exc.BadRequest, 400) self._check_error_without_retry(cinder_exc.Unauthorized, 401) self._check_error_without_retry(cinder_exc.Forbidden, 403) self._check_error_without_retry(cinder_exc.NotFound, 404) self._check_error_without_retry(nova_exc.HTTPNotImplemented, 501) def test_cinderclient_calls_with_retry(self): # check that following error will be retried self._check_error_with_retry(cinder_exc.OverLimit, 413) def test_neutronclient_calls_without_retry(self): # check that following errors will not be retried # neutron exception expects string in constructor self._check_error_without_retry(neutron_exc.BadRequest, "400") self._check_error_without_retry(neutron_exc.Forbidden, "403") self._check_error_without_retry(neutron_exc.NotFound, "404") self._check_error_without_retry(neutron_exc.Conflict, "409") def test_neutronclient_calls_with_retry(self): # check that following errors will be retried # neutron exception expects string in constructor self._check_error_with_retry(neutron_exc.InternalServerError, "500") self._check_error_with_retry(neutron_exc.ServiceUnavailable, "503") def test_heatclient_calls_without_retry(self): # check that following errors will not be retried self._check_error_without_retry(heat_exc.HTTPBadRequest, 400) self._check_error_without_retry(heat_exc.HTTPUnauthorized, 401) self._check_error_without_retry(heat_exc.HTTPForbidden, 403) self._check_error_without_retry(heat_exc.HTTPNotFound, 404) self._check_error_without_retry(heat_exc.HTTPMethodNotAllowed, 405) self._check_error_without_retry(heat_exc.HTTPConflict, 409) self._check_error_without_retry(heat_exc.HTTPUnsupported, 415) self._check_error_without_retry(heat_exc.HTTPNotImplemented, 501) def test_heatclient_calls_with_retry(self): # check that following errors will be retried self._check_error_with_retry(heat_exc.HTTPInternalServerError, 500) self._check_error_with_retry(heat_exc.HTTPBadGateway, 502) self._check_error_with_retry(heat_exc.HTTPServiceUnavailable, 503) def test_keystoneclient_calls_without_retry(self): # check that following errors will not be retried self._check_error_without_retry(keystone_exc.BadRequest, 400) self._check_error_without_retry(keystone_exc.Unauthorized, 401) self._check_error_without_retry(keystone_exc.Forbidden, 403) self._check_error_without_retry(keystone_exc.NotFound, 404) self._check_error_without_retry(keystone_exc.MethodNotAllowed, 405) self._check_error_without_retry(keystone_exc.Conflict, 409) self._check_error_without_retry(keystone_exc.UnsupportedMediaType, 415) self._check_error_without_retry(keystone_exc.HttpNotImplemented, 501) def test_keystoneclient_calls_with_retry(self): # check that following errors will be retried self._check_error_with_retry(keystone_exc.RequestTimeout, 408) self._check_error_with_retry(keystone_exc.InternalServerError, 500) self._check_error_with_retry(keystone_exc.BadGateway, 502) self._check_error_with_retry(keystone_exc.ServiceUnavailable, 503) self._check_error_with_retry(keystone_exc.GatewayTimeout, 504) sahara-12.0.0/sahara/tests/unit/utils/openstack/test_heat.py0000664000175000017500000000307513656752032024132 0ustar zuulzuul00000000000000# Copyright (c) 2017 Massachusetts Open Cloud # # Licensed under the Apache 
License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import testtools from unittest import mock from sahara.utils.openstack import heat as heat_u class HeatClientTest(testtools.TestCase): @mock.patch('sahara.utils.openstack.heat.get_stack') @mock.patch('heatclient.client.Client') @mock.patch('sahara.utils.openstack.base.url_for') @mock.patch('sahara.service.sessions.cache') @mock.patch('sahara.context.ctx') def test_deleting(self, ctx, cache, url_for, heat, get_stack): class _FakeHeatStacks(object): def delete(self, stack): call_list.append("delete") call_list = None get_stack.return_value = None get_stack.side_effect = lambda *args, **kwargs: call_list.append("get") heat.return_value.stacks = _FakeHeatStacks() call_list = [] heat_u.lazy_delete_stack(mock.Mock()) self.assertEqual(call_list, ["delete"]) call_list = [] heat_u.delete_stack(mock.Mock()) self.assertEqual(call_list, ["delete", "get"]) sahara-12.0.0/sahara/tests/unit/utils/test_resources.py0000664000175000017500000000552213656752032023233 0ustar zuulzuul00000000000000# Copyright (c) 2015 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from sahara.tests.unit import base from sahara.utils import resources class SimpleResourceTestCase(base.SaharaTestCase): def setUp(self): super(SimpleResourceTestCase, self).setUp() self.test_name = "test_res" self.test_info_0 = {"a": "a"} self.test_info_1 = {"b": "b"} def test_resource_init_attrs(self): r = resources.Resource(_name=self.test_name, _info=self.test_info_0) r.b = "b" self.assertEqual("a", r.a) self.assertEqual("b", r.__getattr__("b")) self.assertIn("b", r.__dict__) self.assertEqual(self.test_info_0, r._info) self.assertEqual(self.test_name, r._name) self.assertEqual(self.test_name, r.__resource_name__) def test_resource_to_dict(self): r = resources.Resource(_name=self.test_name, _info=self.test_info_0) self.assertEqual(self.test_info_0, r.to_dict()) self.assertEqual({self.test_name: self.test_info_0}, r.wrapped_dict) def test_resource_eq(self): r0 = resources.Resource(_name=self.test_name, _info=self.test_info_0) r1 = resources.Resource(_name=self.test_name, _info=self.test_info_1) self.assertNotEqual(r0, r1) def test_as_resource(self): r = resources.Resource(_name=self.test_name, _info=self.test_info_0) self.assertEqual(r, r.as_resource()) def test_repr(self): r = resources.Resource(_name=self.test_name, _info=self.test_info_0) dict_repr = self.test_info_0.__repr__() self.assertEqual("" % dict_repr, r.__repr__()) class InheritedBaseResourceTestCase(base.SaharaTestCase): def test_to_dict_no_filters(self): class A(resources.BaseResource): __filter_cols__ = [] test_a = A() test_a.some_attr = "some_value" a_dict = test_a.to_dict() self.assertEqual({"some_attr": "some_value"}, a_dict) def test_to_dict_with_filters_and_sa(self): class A(resources.BaseResource): __filter_cols__ = ["filtered"] test_a = A() test_a.some_attr = "some_value" test_a.filtered = "something_hidden" test_a._sa_instance_state = "some_sqlalchemy_magic" a_dict = test_a.to_dict() self.assertEqual({"some_attr": "some_value"}, a_dict) sahara-12.0.0/sahara/tests/unit/cli/0000775000175000017500000000000013656752227017221 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/cli/__init__.py0000664000175000017500000000000013656752032021312 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/cli/test_sahara_status.py0000664000175000017500000000202113656752032023461 0ustar zuulzuul00000000000000# Copyright (c) 2018 NEC, Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_upgradecheck.upgradecheck import Code from sahara.cli import sahara_status from sahara.tests.unit import base class TestUpgradeChecks(base.SaharaTestCase): def setUp(self): super(TestUpgradeChecks, self).setUp() self.cmd = sahara_status.Checks() def test__sample_check(self): check_result = self.cmd._sample_check() self.assertEqual( Code.SUCCESS, check_result.code) sahara-12.0.0/sahara/tests/unit/cli/image_pack/0000775000175000017500000000000013656752227021301 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/cli/image_pack/__init__.py0000664000175000017500000000000013656752032023372 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/cli/image_pack/test_image_pack_api.py0000664000175000017500000000551013656752032025616 0ustar zuulzuul00000000000000# Copyright (c) 2016 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from unittest import mock import sys guestfs = mock.Mock() sys.modules['guestfs'] = guestfs from sahara.cli.image_pack import api from sahara.tests.unit import base class TestSaharaImagePackAPI(base.SaharaTestCase): def setUp(self): super(TestSaharaImagePackAPI, self).setUp() def tearDown(self): super(TestSaharaImagePackAPI, self).tearDown() @mock.patch('sahara.cli.image_pack.api.guestfs') @mock.patch('sahara.cli.image_pack.api.plugins_base') @mock.patch('sahara.cli.image_pack.api.LOG') def test_pack_image_call(self, mock_log, mock_plugins_base, mock_guestfs): guest = mock.Mock() mock_guestfs.GuestFS = mock.Mock(return_value=guest) guest.inspect_os = mock.Mock(return_value=['/dev/something1']) plugin = mock.Mock() mock_plugins_base.PLUGINS = mock.Mock( get_plugin=mock.Mock(return_value=plugin)) api.pack_image( "image_path", "plugin_name", "plugin_version", {"anarg": "avalue"}, root_drive=None, test_only=False) guest.add_drive_opts.assert_called_with("image_path", format="qcow2") guest.set_network.assert_called_with(True) guest.launch.assert_called_once_with() guest.mount.assert_called_with('/dev/something1', '/') guest.sh.assert_called_with("echo Testing sudo without tty...") guest.sync.assert_called_once_with() guest.umount_all.assert_called_once_with() guest.close.assert_called_once_with() @mock.patch('sahara.cli.image_pack.api.plugins_base') def test_get_plugin_arguments(self, mock_plugins_base): api.setup_plugins() mock_plugins_base.setup_plugins.assert_called_once_with() mock_PLUGINS = mock.Mock() mock_plugins_base.PLUGINS = mock_PLUGINS mock_plugin = mock.Mock() mock_plugin.get_versions = mock.Mock(return_value=['1']) mock_plugin.get_image_arguments = mock.Mock( return_value=["Argument!"]) mock_PLUGINS.get_plugin = mock.Mock(return_value=mock_plugin) result = api.get_plugin_arguments('Plugin!') mock_plugin.get_versions.assert_called_once_with() mock_plugin.get_image_arguments.assert_called_once_with('1') self.assertEqual(result, {'1': ['Argument!']}) sahara-12.0.0/sahara/tests/unit/cli/test_sahara_cli.py0000664000175000017500000000537513656752032022724 0ustar zuulzuul00000000000000# Copyright (c) 2015 Mirantis Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from unittest import mock from sahara.cli import sahara_all from sahara.cli import sahara_api from sahara.cli import sahara_engine from sahara.tests.unit import base class TestSaharaCLI(base.SaharaTestCase): def setUp(self): super(TestSaharaCLI, self).setUp() modules = [ 'sahara.main.setup_common', 'oslo_service.wsgi.Server.__init__', 'oslo_service.wsgi.Loader' ] self.patchers = [] for module in modules: patch = mock.patch(module) patch.start() self.patchers.append(patch) mock_get_pl_patch = mock.patch('sahara.main.get_process_launcher') self.patchers.append(mock_get_pl_patch) self.mock_get_pl = mock_get_pl_patch.start() mock_start_server_patch = mock.patch( 'sahara.main.SaharaWSGIService.start') self.patchers.append(mock_start_server_patch) self.mock_start_server = mock_start_server_patch.start() def tearDown(self): super(TestSaharaCLI, self).tearDown() for patcher in reversed(self.patchers): patcher.stop() @mock.patch('sahara.main.setup_sahara_api') def test_main_start_api(self, mock_setup_sahara_api): sahara_api.main() self.mock_start_server.assert_called_once_with() self.mock_get_pl.return_value.wait.assert_called_once_with() @mock.patch('sahara.utils.rpc.RPCServer.get_service') @mock.patch('oslo_service.service.ProcessLauncher') @mock.patch('sahara.main._get_ops_driver') @mock.patch('sahara.service.ops.OpsServer') def test_main_start_engine(self, mock_ops_server, mock_get_ops_driver, mock_pl, mock_get_service): self.mock_get_pl.return_value = mock_pl mock_ops_server.return_value.get_service.return_value = ( mock_get_service) sahara_engine.main() mock_pl.launch_service.assert_called_once_with(mock_get_service) mock_pl.wait.assert_called_once_with() def test_main_start_all(self): sahara_all.main() self.mock_start_server.assert_called_once_with() self.mock_get_pl.return_value.wait.assert_called_once_with() sahara-12.0.0/sahara/tests/unit/resources/0000775000175000017500000000000013656752227020464 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/resources/dfs_admin_3_nodes.txt0000664000175000017500000000101513656752032024552 0ustar zuulzuul00000000000000Configured Capacity: 31706750976 (29.53 GB) Present Capacity: 29622116382 (27.59 GB) DFS Remaining: 29622018048 (27.59 GB) DFS Used: 98334 (96.03 KB) DFS Used%: 0% Under replicated blocks: 0 Blocks with corrupt replicas: 0 Missing blocks: 0 ------------------------------------------------- Datanodes available: 3 (3 total, 0 dead) Name: 10.155.0.94:50010 Decommission Status : Normal Name: 10.155.0.90:50010 Last contact: Tue Jul 16 12:00:07 UTC 2013 Configured Capacity: 10568916992 (9.84 GB) DFS Remaining%: 93.42% sahara-12.0.0/sahara/tests/unit/resources/dfs_admin_1_nodes.txt0000664000175000017500000000060413656752032024553 0ustar zuulzuul00000000000000Configured Capacity: 31706750976 (29.53 GB) Present Capacity: 29622116382 (27.59 GB) DFS Remaining: 29622018048 (27.59 GB) DFS Used: 98334 (96.03 KB) DFS Used%: 0% Under replicated blocks: 0 Blocks with corrupt replicas: 0 Missing blocks: 0 
------------------------------------------------- Datanodes available: 3 (3 total, 0 dead) Name: 10.155.0.94:50010 Decommission Status : Normal sahara-12.0.0/sahara/tests/unit/resources/dfs_admin_0_nodes.txt0000664000175000017500000000043513656752032024554 0ustar zuulzuul00000000000000Configured Capacity: 0 (0 KB) Present Capacity: 0 (0 KB) DFS Remaining: 0 (0 KB) DFS Used: 0 (0 KB) DFS Used%: �% Under replicated blocks: 0 Blocks with corrupt replicas: 0 Missing blocks: 0 ------------------------------------------------- Datanodes available: 0 (0 total, 0 dead) sahara-12.0.0/sahara/tests/unit/resources/test-default.xml0000664000175000017500000000135113656752032023601 0ustar zuulzuul00000000000000 name1 value1 descr1 name2 value2 descr2 name3 descr3 name4 descr4 name5 value5 sahara-12.0.0/sahara/tests/unit/testutils.py0000664000175000017500000000410613656752032021057 0ustar zuulzuul00000000000000# Copyright (c) 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_utils import uuidutils from sahara.conductor import resource as r def create_cluster(name, tenant, plugin, version, node_groups, **kwargs): dct = {'id': uuidutils.generate_uuid(), 'name': name, 'tenant_id': tenant, 'plugin_name': plugin, 'hadoop_version': version, 'node_groups': node_groups, 'cluster_configs': {}, "sahara_info": {}, 'user_keypair_id': None, 'default_image_id': None, 'is_protected': False} dct.update(kwargs) return r.ClusterResource(dct) def make_ng_dict(name, flavor, processes, count, instances=None, volumes_size=None, node_configs=None, resource=False, **kwargs): node_configs = node_configs or {} instances = instances or [] dct = {'id': uuidutils.generate_uuid(), 'name': name, 'volumes_size': volumes_size, 'flavor_id': flavor, 'node_processes': processes, 'count': count, 'instances': instances, 'node_configs': node_configs, 'security_groups': None, 'auto_security_group': False, 'availability_zone': None, 'volumes_availability_zone': None, 'open_ports': [], 'is_proxy_gateway': False, 'volume_local_to_instance': False} dct.update(kwargs) if resource: return r.NodeGroupTemplateResource(dct) return dct def make_inst_dict(inst_id, inst_name, management_ip='1.2.3.4'): return {'instance_id': inst_id, 'instance_name': inst_name, 'management_ip': management_ip} sahara-12.0.0/sahara/tests/unit/test_context.py0000664000175000017500000001057713656752032021553 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
import random from unittest import mock import fixtures import six import testtools from sahara import context from sahara import exceptions as ex rnd = random.Random() class ContextTest(testtools.TestCase): def setUp(self): super(ContextTest, self).setUp() self.useFixture(fixtures.FakeLogger('sahara')) ctx = context.Context('test_user', 'tenant_1', 'test_auth_token', {}, remote_semaphore='123') context.set_ctx(ctx) def _add_element(self, lst, i): context.sleep(rnd.uniform(0, 0.1)) lst.append(i) def _raise_test_exc(self, exc_msg): raise TestException(exc_msg) def test_thread_group_waits_threads(self): # That can fail with some probability, so making 5 attempts # Actually it takes around 1 second, so maybe we should # just remove it for _ in six.moves.range(5): lst = [] with context.ThreadGroup() as tg: for i in six.moves.range(400): tg.spawn('add %i' % i, self._add_element, lst, i) self.assertEqual(400, len(lst)) def test_thread_group_waits_threads_if_spawning_exception(self): lst = [] with testtools.ExpectedException(RuntimeError): with context.ThreadGroup() as tg: for i in six.moves.range(400): tg.spawn('add %i' % i, self._add_element, lst, i) raise RuntimeError() self.assertEqual(400, len(lst)) def test_thread_group_waits_threads_if_child_exception(self): lst = [] with testtools.ExpectedException(ex.ThreadException): with context.ThreadGroup() as tg: tg.spawn('raiser', self._raise_test_exc, 'exc') for i in six.moves.range(400): tg.spawn('add %i' % i, self._add_element, lst, i) self.assertEqual(400, len(lst)) def test_thread_group_handles_spawning_exception(self): with testtools.ExpectedException(TestException): with context.ThreadGroup(): raise TestException() def test_thread_group_handles_child_exception(self): try: with context.ThreadGroup() as tg: tg.spawn('raiser1', self._raise_test_exc, 'exc1') except ex.ThreadException as te: self.assertIn('exc1', six.text_type(te)) self.assertIn('raiser1', six.text_type(te)) def test_thread_group_prefers_spawning_exception(self): with testtools.ExpectedException(RuntimeError): with context.ThreadGroup() as tg: tg.spawn('raiser1', self._raise_test_exc, 'exc1') raise RuntimeError() def test_wrapper_does_not_set_exception(self): func = mock.MagicMock() tg = mock.MagicMock(exc=None, failed_thread=None) context._wrapper(None, 'test thread', tg, func) self.assertIsNone(tg.exc) self.assertIsNone(tg.failed_thread) def test_wrapper_catches_base_exception(self): func = mock.MagicMock() func.side_effect = BaseException() tg = mock.MagicMock(exc=None, failed_thread=None) context._wrapper(None, 'test thread', tg, func) self.assertIsNotNone(tg.exc) self.assertEqual('test thread', tg.failed_thread) def test_is_auth_capable_for_admin_ctx(self): ctx = context.ctx() self.assertFalse(ctx.is_auth_capable()) def test_is_auth_capable_for_user_ctx(self): existing_ctx = context.ctx() try: ctx = context.Context('test_user', 'tenant_1', 'test_auth_token', {"network": "aURL"}, remote_semaphore='123') self.assertTrue(ctx.is_auth_capable()) finally: context.set_ctx(existing_ctx) class TestException(Exception): pass sahara-12.0.0/sahara/tests/unit/plugins/0000775000175000017500000000000013656752227020133 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/plugins/__init__.py0000664000175000017500000000000013656752032022224 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/plugins/test_images.py0000664000175000017500000005014613656752032023011 0ustar zuulzuul00000000000000# Copyright (c) 2016 Red Hat, Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from unittest import mock from oslo_utils import uuidutils import yaml from sahara import exceptions as ex from sahara.plugins import exceptions as p_ex from sahara.plugins import images from sahara.tests.unit import base as b class TestImages(b.SaharaTestCase): def test_package_spec(self): cls = images.SaharaPackageValidator validator = cls.from_spec("java", {}, []) self.assertIsInstance(validator, cls) self.assertEqual(str(validator.packages[0]), "java") validator = cls.from_spec({"java": {"version": "8"}}, {}, []) self.assertIsInstance(validator, cls) self.assertEqual(str(validator.packages[0]), "java-8") validator = cls.from_spec( [{"java": {"version": "8"}}, "hadoop"], {}, []) self.assertIsInstance(validator, cls) self.assertEqual(str(validator.packages[0]), "java-8") self.assertEqual(str(validator.packages[1]), "hadoop") def test_script_spec(self): cls = images.SaharaScriptValidator resource_roots = ['tests/unit/plugins'] validator = cls.from_spec('test_images.py', {}, resource_roots) self.assertIsInstance(validator, cls) self.assertEqual(validator.env_vars, ['test_only', 'distro']) validator = cls.from_spec( {'test_images.py': {'env_vars': ['extra-file', 'user']}}, {}, resource_roots) self.assertIsInstance(validator, cls) self.assertEqual(validator.env_vars, ['test_only', 'distro', 'extra-file', 'user']) def test_all_spec(self): cls = images.SaharaAllValidator validator_map = images.SaharaImageValidatorBase.get_validator_map() validator = cls.from_spec( [{'package': {'java': {'version': '8'}}}, {'package': 'hadoop'}], validator_map, []) self.assertIsInstance(validator, cls) self.assertEqual(len(validator.validators), 2) self.assertEqual(validator.validators[0].packages[0].name, 'java') self.assertEqual(validator.validators[1].packages[0].name, 'hadoop') def test_any_spec(self): cls = images.SaharaAnyValidator validator_map = images.SaharaImageValidatorBase.get_validator_map() validator = cls.from_spec( [{'package': {'java': {'version': '8'}}}, {'package': 'hadoop'}], validator_map, []) self.assertIsInstance(validator, cls) self.assertEqual(len(validator.validators), 2) self.assertEqual(validator.validators[0].packages[0].name, 'java') self.assertEqual(validator.validators[1].packages[0].name, 'hadoop') def test_os_case_spec(self): cls = images.SaharaOSCaseValidator validator_map = images.SaharaImageValidatorBase.get_validator_map() spec = [ {'redhat': [{'package': 'nfs-utils'}]}, {'debian': [{'package': 'nfs-common'}]} ] validator = cls.from_spec(spec, validator_map, []) self.assertIsInstance(validator, cls) self.assertEqual(len(validator.distros), 2) self.assertEqual(validator.distros[0].distro, 'redhat') self.assertEqual(validator.distros[1].distro, 'debian') redhat, debian = ( validator.distros[os].validator.validators[0].packages[0].name for os in range(2)) self.assertEqual(redhat, 'nfs-utils') self.assertEqual(debian, 'nfs-common') def test_sahara_image_validator_spec(self): cls = images.SaharaImageValidator validator_map = 
images.SaharaImageValidatorBase.get_validator_map() resource_roots = ['tests/unit/plugins'] spec = """ arguments: java-version: description: The version of java. default: openjdk required: false choices: - openjdk - oracle-java validators: - os_case: - redhat: - package: nfs-utils - debian: - package: nfs-common - any: - all: - package: java-1.8.0-openjdk-devel - argument_set: argument_name: java-version value: 1.8.0 - all: - package: java-1.7.0-openjdk-devel - argument_set: argument_name: java-version value: 1.7.0 - script: test_images.py - package: - hadoop - hadoop-libhdfs - hadoop-native - hadoop-pipes - hadoop-sbin - hadoop-lzo - lzo - lzo-devel - hadoop-lzo-native - argument_case: argument_name: JAVA_VERSION cases: 1.7.0: - script: test_images.py 1.8.0: - script: test_images.py """ spec = yaml.safe_load(spec) validator = cls.from_spec(spec, validator_map, resource_roots) validators = validator.validators self.assertIsInstance(validator, cls) self.assertEqual(len(validators), 5) self.assertIsInstance(validators[0], images.SaharaOSCaseValidator) self.assertIsInstance(validators[1], images.SaharaAnyValidator) self.assertIsInstance(validators[2], images.SaharaScriptValidator) self.assertIsInstance(validators[3], images.SaharaPackageValidator) self.assertIsInstance( validators[4], images.SaharaArgumentCaseValidator) self.assertEqual(1, len(validator.arguments)) self.assertEqual(validator.arguments['java-version'].required, False) self.assertEqual(validator.arguments['java-version'].default, 'openjdk') self.assertEqual(validator.arguments['java-version'].description, 'The version of java.') self.assertEqual(validator.arguments['java-version'].choices, ['openjdk', 'oracle-java']) def test_package_validator_redhat(self): cls = images.SaharaPackageValidator image_arguments = {"distro": 'centos'} packages = [cls.Package("java", "8")] validator = images.SaharaPackageValidator(packages) remote = mock.Mock() validator.validate(remote, test_only=True, image_arguments=image_arguments) remote.execute_command.assert_called_with( "rpm -q java-8", run_as_root=True) image_arguments = {"distro": 'fedora'} packages = [cls.Package("java", "8"), cls.Package("hadoop")] validator = images.SaharaPackageValidator(packages) remote = mock.Mock() remote.execute_command.side_effect = ( ex.RemoteCommandException("So bad!")) try: validator.validate(remote, test_only=True, image_arguments=image_arguments) except p_ex.ImageValidationError as e: self.assertIn("So bad!", e.message) remote.execute_command.assert_called_with( "rpm -q java-8 hadoop", run_as_root=True) self.assertEqual(remote.execute_command.call_count, 1) image_arguments = {"distro": 'redhat'} packages = [cls.Package("java", "8"), cls.Package("hadoop")] validator = images.SaharaPackageValidator(packages) remote = mock.Mock() def side_effect(call, run_as_root=False): if "rpm" in call: raise ex.RemoteCommandException("So bad!") remote.execute_command.side_effect = side_effect try: validator.validate(remote, test_only=False, image_arguments=image_arguments) except p_ex.ImageValidationError as e: self.assertIn("So bad!", e.message) self.assertEqual(remote.execute_command.call_count, 3) calls = [mock.call("rpm -q java-8 hadoop", run_as_root=True), mock.call("yum install -y java-8 hadoop", run_as_root=True), mock.call("rpm -q java-8 hadoop", run_as_root=True)] remote.execute_command.assert_has_calls(calls) def test_package_validator_debian(self): cls = images.SaharaPackageValidator image_arguments = {"distro": 'ubuntu'} packages = [cls.Package("java", "8")] 
validator = images.SaharaPackageValidator(packages) remote = mock.Mock() validator.validate(remote, test_only=True, image_arguments=image_arguments) remote.execute_command.assert_called_with( "dpkg -s java-8", run_as_root=True) image_arguments = {"distro": 'ubuntu'} packages = [cls.Package("java", "8"), cls.Package("hadoop")] validator = images.SaharaPackageValidator(packages) remote = mock.Mock() remote.execute_command.side_effect = ( ex.RemoteCommandException("So bad!")) try: validator.validate(remote, test_only=True, image_arguments=image_arguments) except p_ex.ImageValidationError as e: self.assertIn("So bad!", e.message) remote.execute_command.assert_called_with( "dpkg -s java-8 hadoop", run_as_root=True) self.assertEqual(remote.execute_command.call_count, 1) image_arguments = {"distro": 'ubuntu'} packages = [cls.Package("java", "8"), cls.Package("hadoop")] validator = images.SaharaPackageValidator(packages) remote = mock.Mock() remote.execute_command.side_effect = ( ex.RemoteCommandException("So bad!")) try: validator.validate(remote, test_only=False, image_arguments=image_arguments) except p_ex.ImageValidationError as e: self.assertIn("So bad!", e.message) self.assertEqual(remote.execute_command.call_count, 2) calls = [mock.call("dpkg -s java-8 hadoop", run_as_root=True), mock.call("DEBIAN_FRONTEND=noninteractive " + "apt-get -y install java-8 hadoop", run_as_root=True)] remote.execute_command.assert_has_calls(calls) @mock.patch('oslo_utils.uuidutils.generate_uuid') def test_script_validator(self, uuid): hash_value = '00000000-0000-0000-0000-000000000000' uuidutils.generate_uuid.return_value = hash_value cls = images.SaharaScriptValidator image_arguments = {"distro": 'centos'} cmd = b"It's dangerous to go alone. Run this." validator = cls(cmd, env_vars=image_arguments.keys(), output_var="distro") remote = mock.Mock( execute_command=mock.Mock( return_value=(0, 'fedora'))) validator.validate(remote, test_only=False, image_arguments=image_arguments) call = [mock.call('chmod +x /tmp/%(hash_value)s.sh' % {'hash_value': hash_value}, run_as_root=True), mock.call('/tmp/%(hash_value)s.sh' % {'hash_value': hash_value}, run_as_root=True)] remote.execute_command.assert_has_calls(call) self.assertEqual(image_arguments['distro'], 'fedora') def test_any_validator(self): cls = images.SaharaAnyValidator class FakeValidator(images.SaharaImageValidatorBase): def __init__(self, mock_validate): self.mock_validate = mock_validate def validate(self, remote, test_only=False, **kwargs): self.mock_validate(remote, test_only=test_only, **kwargs) # One success short circuits validation always_tells_the_truth = FakeValidator(mock.Mock()) validator = cls([always_tells_the_truth, always_tells_the_truth]) validator.validate(None, test_only=False) self.assertEqual(always_tells_the_truth.mock_validate.call_count, 1) # All failures fails, and calls with test_only=True on all first always_lies = FakeValidator( mock.Mock(side_effect=p_ex.ImageValidationError("Oh no!"))) validator = cls([always_lies, always_lies]) try: validator.validate(None, test_only=False) except p_ex.ImageValidationError: pass self.assertEqual(always_lies.mock_validate.call_count, 4) # But it fails after a first pass if test_only=True. always_lies = FakeValidator( mock.Mock(side_effect=p_ex.ImageValidationError("Oh no!"))) validator = cls([always_lies, always_lies]) try: validator.validate(None, test_only=True) except p_ex.ImageValidationError: pass self.assertEqual(always_lies.mock_validate.call_count, 2) # One failure doesn't end iteration. 
always_tells_the_truth = FakeValidator(mock.Mock()) always_lies = FakeValidator( mock.Mock(side_effect=p_ex.ImageValidationError("Oh no!"))) validator = cls([always_lies, always_tells_the_truth]) validator.validate(None, test_only=False) self.assertEqual(always_lies.mock_validate.call_count, 1) self.assertEqual(always_tells_the_truth.mock_validate.call_count, 1) def test_all_validator(self): cls = images.SaharaAllValidator # All pass always_tells_the_truth = mock.Mock() validator = cls([always_tells_the_truth, always_tells_the_truth]) validator.validate(None, test_only=False) self.assertEqual(always_tells_the_truth.validate.call_count, 2) always_tells_the_truth.validate.assert_called_with( None, test_only=False, image_arguments=None) # Second fails always_tells_the_truth = mock.Mock() always_lies = mock.Mock(validate=mock.Mock( side_effect=p_ex.ImageValidationError("Boom!"))) validator = cls([always_tells_the_truth, always_lies]) try: validator.validate(None, test_only=True) except p_ex.ImageValidationError: pass self.assertEqual(always_tells_the_truth.validate.call_count, 1) self.assertEqual(always_lies.validate.call_count, 1) always_tells_the_truth.validate.assert_called_with( None, test_only=True, image_arguments=None) always_lies.validate.assert_called_with( None, test_only=True, image_arguments=None) # First fails always_tells_the_truth = mock.Mock() always_lies = mock.Mock(validate=mock.Mock( side_effect=p_ex.ImageValidationError("Boom!"))) validator = cls([always_lies, always_tells_the_truth]) try: validator.validate(None, test_only=True, image_arguments={}) except p_ex.ImageValidationError: pass self.assertEqual(always_lies.validate.call_count, 1) always_lies.validate.assert_called_with( None, test_only=True, image_arguments={}) self.assertEqual(always_tells_the_truth.validate.call_count, 0) def test_os_case_validator(self): cls = images.SaharaOSCaseValidator Distro = images.SaharaOSCaseValidator._distro_tuple # First match wins and short circuits iteration centos = Distro("centos", mock.Mock()) redhat = Distro("redhat", mock.Mock()) distros = [centos, redhat] image_arguments = {images.SaharaImageValidator.DISTRO_KEY: "centos"} validator = cls(distros) validator.validate(None, test_only=False, image_arguments=image_arguments) self.assertEqual(centos.validator.validate.call_count, 1) self.assertEqual(redhat.validator.validate.call_count, 0) centos.validator.validate.assert_called_with( None, test_only=False, image_arguments=image_arguments) # Families match centos = Distro("centos", mock.Mock()) redhat = Distro("redhat", mock.Mock()) distros = [centos, redhat] image_arguments = {images.SaharaImageValidator.DISTRO_KEY: "fedora"} validator = cls(distros) validator.validate(None, test_only=False, image_arguments=image_arguments) self.assertEqual(centos.validator.validate.call_count, 0) self.assertEqual(redhat.validator.validate.call_count, 1) redhat.validator.validate.assert_called_with( None, test_only=False, image_arguments=image_arguments) # Non-matches do nothing centos = Distro("centos", mock.Mock()) redhat = Distro("redhat", mock.Mock()) distros = [centos, redhat] image_arguments = {images.SaharaImageValidator.DISTRO_KEY: "ubuntu"} validator = cls(distros) validator.validate(None, test_only=False, image_arguments=image_arguments) self.assertEqual(centos.validator.validate.call_count, 0) self.assertEqual(redhat.validator.validate.call_count, 0) def test_sahara_argument_case_validator(self): cls = images.SaharaArgumentCaseValidator # Match gets called image_arguments = {"argument": 
"value"} match = mock.Mock() nomatch = mock.Mock() cases = {"value": match, "another_value": nomatch} validator = cls("argument", cases) validator.validate(None, test_only=False, image_arguments=image_arguments) self.assertEqual(match.validate.call_count, 1) self.assertEqual(nomatch.validate.call_count, 0) match.validate.assert_called_with( None, test_only=False, image_arguments=image_arguments) # Non-matches do nothing image_arguments = {"argument": "value"} nomatch = mock.Mock() cases = {"some_value": nomatch, "another_value": nomatch} validator = cls("argument", cases) validator.validate(None, test_only=False, image_arguments=image_arguments) self.assertEqual(nomatch.validate.call_count, 0) def test_sahara_argument_set_validator(self): cls = images.SaharaArgumentSetterValidator # Old variable is overwritten image_arguments = {"argument": "value"} validator = cls("argument", "new_value") validator.validate(None, test_only=False, image_arguments=image_arguments) self.assertEqual(image_arguments["argument"], "new_value") # New variable is set image_arguments = {"argument": "value"} validator = cls("another_argument", "value") validator.validate(None, test_only=False, image_arguments=image_arguments) self.assertEqual(image_arguments, {"argument": "value", "another_argument": "value"}) def test_sahara_image_validator(self): cls = images.SaharaImageValidator sub_validator = mock.Mock(validate=mock.Mock()) remote = mock.Mock(get_os_distrib=mock.Mock( return_value="centos")) validator = cls(sub_validator, {}) validator.validate(remote, test_only=False, image_arguments={}) expected_map = {images.SaharaImageValidatorBase.DISTRO_KEY: "centos"} sub_validator.validate.assert_called_with( remote, test_only=False, image_arguments=expected_map) expected_map = {images.SaharaImageValidatorBase.DISTRO_KEY: "centos"} validator.validate(remote, test_only=True, image_arguments={}) sub_validator.validate.assert_called_with( remote, test_only=True, image_arguments=expected_map) sahara-12.0.0/sahara/tests/unit/plugins/test_provide_recommendations.py0000664000175000017500000002401613656752032026460 0ustar zuulzuul00000000000000# Copyright (c) 2015 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from unittest import mock import six from sahara import conductor as cond from sahara import context from sahara.plugins import recommendations_utils as ru from sahara.tests.unit import base as b conductor = cond.API class Configs(object): def __init__(self, configs): self.configs = configs def to_dict(self): return self.configs class FakeObject(object): def __init__(self, **kwargs): for attr in six.iterkeys(kwargs): setattr(self, attr, kwargs.get(attr)) class TestProvidingRecommendations(b.SaharaWithDbTestCase): @mock.patch('sahara.utils.openstack.nova.get_flavor') def test_get_recommended_node_configs_medium_flavor( self, fake_flavor): ng = FakeObject(flavor_id="fake_flavor", node_configs=Configs({})) cl = FakeObject(cluster_configs=Configs({})) fake_flavor.return_value = FakeObject(ram=4096, vcpus=2) observed = ru.HadoopAutoConfigsProvider( {}, [], cl, False)._get_recommended_node_configs(ng) self.assertEqual({ 'mapreduce.reduce.memory.mb': 768, 'mapreduce.map.java.opts': '-Xmx307m', 'mapreduce.map.memory.mb': 384, 'mapreduce.reduce.java.opts': '-Xmx614m', 'yarn.app.mapreduce.am.resource.mb': 384, 'yarn.app.mapreduce.am.command-opts': '-Xmx307m', 'mapreduce.task.io.sort.mb': 153, 'yarn.nodemanager.resource.memory-mb': 3072, 'yarn.scheduler.minimum-allocation-mb': 384, 'yarn.scheduler.maximum-allocation-mb': 3072, 'yarn.nodemanager.vmem-check-enabled': 'false' }, observed) @mock.patch('sahara.utils.openstack.nova.get_flavor') def test_get_recommended_node_configs_small_flavor( self, fake_flavor): ng = FakeObject(flavor_id="fake_flavor", node_configs=Configs({})) cl = FakeObject(cluster_configs=Configs({})) fake_flavor.return_value = FakeObject(ram=2048, vcpus=1) observed = ru.HadoopAutoConfigsProvider( {'node_configs': {}, 'cluster_configs': {}}, [], cl, False, )._get_recommended_node_configs(ng) self.assertEqual({ 'mapreduce.reduce.java.opts': '-Xmx409m', 'yarn.app.mapreduce.am.resource.mb': 256, 'mapreduce.reduce.memory.mb': 512, 'mapreduce.map.java.opts': '-Xmx204m', 'yarn.app.mapreduce.am.command-opts': '-Xmx204m', 'mapreduce.task.io.sort.mb': 102, 'mapreduce.map.memory.mb': 256, 'yarn.nodemanager.resource.memory-mb': 2048, 'yarn.scheduler.minimum-allocation-mb': 256, 'yarn.nodemanager.vmem-check-enabled': 'false', 'yarn.scheduler.maximum-allocation-mb': 2048, }, observed) def test_merge_configs(self): provider = ru.HadoopAutoConfigsProvider({}, None, None, False) initial_configs = { 'cat': { 'talk': 'meow', }, 'bond': { 'name': 'james' } } extra_configs = { 'dog': { 'talk': 'woof' }, 'bond': { 'extra_name': 'james bond' } } expected = { 'cat': { 'talk': 'meow', }, 'dog': { 'talk': 'woof' }, 'bond': { 'name': 'james', 'extra_name': 'james bond' } } self.assertEqual( expected, provider._merge_configs(initial_configs, extra_configs)) @mock.patch('sahara.utils.openstack.nova.get_flavor') @mock.patch('sahara.plugins.recommendations_utils.conductor.' 'node_group_update') @mock.patch('sahara.plugins.recommendations_utils.conductor.' 
'cluster_update') def test_apply_recommended_configs(self, cond_cluster, cond_node_group, fake_flavor): class TestProvider(ru.HadoopAutoConfigsProvider): def get_datanode_name(self): return "dog_datanode" fake_flavor.return_value = FakeObject(ram=2048, vcpus=1) to_tune = { 'cluster_configs': { 'dfs.replication': ('dfs', 'replica') }, 'node_configs': { 'mapreduce.task.io.sort.mb': ('bond', 'extra_name') } } fake_plugin_configs = [ FakeObject(applicable_target='dfs', name='replica', default_value=3)] fake_ng = FakeObject( use_autoconfig=True, count=2, node_processes=['dog_datanode'], flavor_id='fake_id', node_configs=Configs({ 'bond': { 'name': 'james' } }) ) fake_cluster = FakeObject( cluster_configs=Configs({ 'cat': { 'talk': 'meow', } }), node_groups=[fake_ng], use_autoconfig=True, extra=Configs({}) ) v = TestProvider( to_tune, fake_plugin_configs, fake_cluster, False) v.apply_recommended_configs() self.assertEqual([mock.call(context.ctx(), fake_cluster, { 'cluster_configs': { 'cat': { 'talk': 'meow' }, 'dfs': { 'replica': 2 } } }), mock.call( context.ctx(), fake_cluster, {'extra': {'auto-configured': True}})], cond_cluster.call_args_list) self.assertEqual([mock.call(context.ctx(), fake_ng, { 'node_configs': { 'bond': { 'name': 'james', 'extra_name': 102 } } })], cond_node_group.call_args_list) @mock.patch('sahara.utils.openstack.nova.get_flavor') @mock.patch('sahara.plugins.recommendations_utils.conductor.' 'node_group_update') @mock.patch('sahara.plugins.recommendations_utils.conductor.' 'cluster_update') def test_apply_recommended_configs_no_updates( self, cond_cluster, cond_node_group, fake_flavor): fake_flavor.return_value = FakeObject(ram=2048, vcpus=1) to_tune = { 'cluster_configs': { 'dfs.replication': ('dfs', 'replica') }, 'node_configs': { 'mapreduce.task.io.sort.mb': ('bond', 'extra_name') } } fake_plugin_configs = [ FakeObject(applicable_target='dfs', name='replica', default_value=3)] fake_ng = FakeObject( use_autoconfig=True, count=2, node_processes=['dog_datanode'], flavor_id='fake_id', node_configs=Configs({ 'bond': { 'extra_name': 'james bond' } }) ) fake_cluster = FakeObject( cluster_configs=Configs({ 'dfs': { 'replica': 1 } }), node_groups=[fake_ng], use_autoconfig=True, extra=Configs({}) ) v = ru.HadoopAutoConfigsProvider( to_tune, fake_plugin_configs, fake_cluster, False) v.apply_recommended_configs() self.assertEqual(0, cond_node_group.call_count) self.assertEqual( [mock.call(context.ctx(), fake_cluster, {'extra': {'auto-configured': True}})], cond_cluster.call_args_list) def test_correct_use_autoconfig_value(self): ctx = context.ctx() ngt1 = conductor.node_group_template_create(ctx, { 'name': 'ngt1', 'flavor_id': '1', 'plugin_name': 'vanilla', 'hadoop_version': '1' }) ngt2 = conductor.node_group_template_create(ctx, { 'name': 'ngt2', 'flavor_id': '2', 'plugin_name': 'vanilla', 'hadoop_version': '1', 'use_autoconfig': False }) self.assertTrue(ngt1.use_autoconfig) self.assertFalse(ngt2.use_autoconfig) clt = conductor.cluster_template_create(ctx, { 'name': "clt1", 'plugin_name': 'vanilla', 'hadoop_version': '1', 'node_groups': [ { 'count': 3, "node_group_template_id": ngt1.id }, { 'count': 1, 'node_group_template_id': ngt2.id } ], 'use_autoconfig': False }) cluster = conductor.cluster_create(ctx, { 'name': 'stupid', 'cluster_template_id': clt.id }) self.assertFalse(cluster.use_autoconfig) for ng in cluster.node_groups: if ng.name == 'ngt1': self.assertTrue(ng.use_autoconfig) else: self.assertFalse(ng.use_autoconfig) 
@mock.patch('sahara.plugins.recommendations_utils.conductor.' 'cluster_update') def test_not_autonconfigured(self, cluster_update): fake_cluster = FakeObject(extra=Configs({})) v = ru.HadoopAutoConfigsProvider({}, [], fake_cluster, True) v.apply_recommended_configs() self.assertEqual(0, cluster_update.call_count) sahara-12.0.0/sahara/tests/unit/plugins/test_utils.py0000664000175000017500000001651413656752032022705 0ustar zuulzuul00000000000000# Copyright (c) 2015 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from unittest import mock from sahara.plugins import exceptions as ex from sahara.plugins import utils as pu from sahara.tests.unit import base as b class FakeInstance(object): def __init__(self, _id, node_processes=None): self.id = _id self.node_processes = node_processes or [] @property def node_group(self): return self def __eq__(self, other): return self.id == other.id class FakeNodeGroup(object): def __init__(self, node_processes, instances=None): self.node_processes = node_processes self.instances = instances or [] self.count = len(self.instances) def __eq__(self, other): return self.node_processes == other.node_processes class TestPluginUtils(b.SaharaTestCase): def setUp(self): super(TestPluginUtils, self).setUp() self.cluster = mock.Mock() self.cluster.node_groups = [ FakeNodeGroup(["node_process1"], [FakeInstance("1")]), FakeNodeGroup(["node_process2"], [FakeInstance("2")]), FakeNodeGroup(["node_process3"], [FakeInstance("3")]), ] def test_get_node_groups(self): res = pu.get_node_groups(self.cluster) self.assertEqual([ FakeNodeGroup(["node_process1"]), FakeNodeGroup(["node_process2"]), FakeNodeGroup(["node_process3"]), ], res) res = pu.get_node_groups(self.cluster, "node_process1") self.assertEqual([ FakeNodeGroup(["node_process1"]) ], res) res = pu.get_node_groups(self.cluster, "node_process") self.assertEqual([], res) def test_get_instances_count(self): res = pu.get_instances_count(self.cluster) self.assertEqual(3, res) res = pu.get_instances_count(self.cluster, "node_process1") self.assertEqual(1, res) def test_get_instances(self): res = pu.get_instances(self.cluster) self.assertEqual([ FakeInstance("1"), FakeInstance("2"), FakeInstance("3")], res) res = pu.get_instances(self.cluster, "node_process1") self.assertEqual([FakeInstance("1")], res) def test_get_instance(self): self.assertRaises(ex.InvalidComponentCountException, pu.get_instance, self.cluster, None) res = pu.get_instance(self.cluster, "node_process") self.assertIsNone(res) res = pu.get_instance(self.cluster, "node_process1") self.assertEqual(FakeInstance("1"), res) def test_generate_host_names(self): node = mock.Mock() node.hostname = mock.Mock(return_value="host_name") res = pu.generate_host_names([node, node]) self.assertEqual("host_name\nhost_name", res) def test_generate_fqdn_host_names(self): node = mock.Mock() node.fqdn = mock.Mock(return_value="fqdn") res = pu.generate_fqdn_host_names([node, node]) self.assertEqual("fqdn\nfqdn", res) def test_get_port_from_address(self): res = 
pu.get_port_from_address("0.0.0.0:8000") self.assertEqual(8000, res) res = pu.get_port_from_address("http://localhost:8000/resource") self.assertEqual(8000, res) res = pu.get_port_from_address("http://192.168.1.101:10000") self.assertEqual(10000, res) res = pu.get_port_from_address("mydomain") self.assertIsNone(res) def test_instances_with_services(self): inst = [FakeInstance("1", ["nodeprocess1"]), FakeInstance("2", ["nodeprocess2"])] node_processes = ["nodeprocess"] res = pu.instances_with_services(inst, node_processes) self.assertEqual([], res) node_processes = ["nodeprocess1"] res = pu.instances_with_services(inst, node_processes) self.assertEqual([FakeInstance("1", ["nodeprocess1"])], res) @mock.patch("sahara.plugins.utils.plugins_base") def test_get_config_value_or_default(self, mock_plugins_base): # no config self.assertRaises(RuntimeError, pu.get_config_value_or_default) config = mock.Mock() config.applicable_target = "service" config.name = "name" config.default_value = "default_value" # cluster has the config cluster = mock.Mock() cluster.cluster_configs = {"service": {"name": "name"}} cluster.plugin_name = "plugin_name" cluster.hadoop_version = "hadoop_version" res = pu.get_config_value_or_default(cluster=cluster, config=config) self.assertEqual("name", res) # node group has the config cluster.cluster_configs = {} node_group1 = mock.Mock() node_group2 = mock.Mock() node_group1.configuration = mock.Mock(return_value={"service": {}}) node_group2.configuration = mock.Mock( return_value={"service": {"name": "name"}}) cluster.node_groups = [node_group1, node_group2] res = pu.get_config_value_or_default(cluster=cluster, config=config) self.assertEqual("name", res) # cluster doesn't have the config, neither the node groups # so it returns the default value cluster.node_groups = [] res = pu.get_config_value_or_default(cluster=cluster, config=config) self.assertEqual("default_value", res) # no config specified, but there's a config for the plugin # with this service and name mock_get_all_configs = mock.Mock(return_value=[config]) mock_plugin = mock.Mock() mock_plugin.get_all_configs = mock_get_all_configs mock_get_plugin = mock.Mock(return_value=mock_plugin) mock_PLUGINS = mock.Mock() mock_PLUGINS.get_plugin = mock_get_plugin mock_plugins_base.PLUGINS = mock_PLUGINS res = pu.get_config_value_or_default(cluster=cluster, service="service", name="name") self.assertEqual("default_value", res) mock_get_plugin.assert_called_once_with("plugin_name") mock_get_all_configs.assert_called_once_with("hadoop_version") # no config especified and no existing config for this plugin # with this service or name cluster.plugin_name = "plugin_name2" cluster.hadoop_version = "hadoop_version2" self.assertRaises(RuntimeError, pu.get_config_value_or_default, cluster=cluster, service="newService", name="name") mock_get_plugin.assert_called_with("plugin_name2") self.assertEqual(2, mock_get_plugin.call_count) mock_get_all_configs.assert_called_with("hadoop_version2") self.assertEqual(2, mock_get_all_configs.call_count) sahara-12.0.0/sahara/tests/unit/plugins/test_provisioning.py0000664000175000017500000001004413656752032024263 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import testtools from sahara import conductor as cond from sahara import context from sahara import exceptions as ex from sahara.plugins import provisioning as p from sahara.tests.unit import base conductor = cond.API class ProvisioningPluginBaseTest(testtools.TestCase): def test__map_to_user_inputs_success(self): c1, c2, c3, plugin = _build_configs_and_plugin() user_inputs = plugin._map_to_user_inputs(None, { 'at-1': { 'n-1': 'v-1', 'n-3': 'v-3', }, 'at-2': { 'n-2': 'v-2', }, }) self.assertEqual([ p.UserInput(c1, 'v-1'), p.UserInput(c2, 'v-2'), p.UserInput(c3, 'v-3'), ], user_inputs) def test__map_to_user_inputs_failure(self): c1, c2, c3, plugin = _build_configs_and_plugin() with testtools.ExpectedException(ex.ConfigurationError): plugin._map_to_user_inputs(None, { 'at-X': { 'n-1': 'v-1', }, }) with testtools.ExpectedException(ex.ConfigurationError): plugin._map_to_user_inputs(None, { 'at-1': { 'n-X': 'v-1', }, }) def _build_configs_and_plugin(): c1 = p.Config('n-1', 'at-1', 'cluster') c2 = p.Config('n-2', 'at-2', 'cluster') c3 = p.Config('n-3', 'at-1', 'node') class TestPlugin(TestEmptyPlugin): def get_configs(self, hadoop_version): return [c1, c2, c3] return c1, c2, c3, TestPlugin() class TestEmptyPlugin(p.ProvisioningPluginBase): def get_title(self): pass def get_versions(self): pass def get_configs(self, hadoop_version): pass def get_node_processes(self, hadoop_version): pass def configure_cluster(self, cluster): pass def start_cluster(self, cluster): pass class TestPluginDataCRUD(base.SaharaWithDbTestCase): def test_crud(self): ctx = context.ctx() data = conductor.plugin_create( ctx, {'name': 'fake', 'plugin_labels': {'enabled': True}}) self.assertIsNotNone(data) raised = None try: # duplicate entry, shouldn't work conductor.plugin_create(ctx, {'name': 'fake'}) except Exception as e: raised = e self.assertIsNotNone(raised) # not duplicated entry, other tenant ctx.tenant = "tenant_2" res = conductor.plugin_create(ctx, {'name': 'fake'}) conductor.plugin_create(ctx, {'name': 'guy'}) self.assertIsNotNone(res) self.assertEqual(2, len(conductor.plugin_get_all(ctx))) ctx.tenant = "tenant_1" data = conductor.plugin_get(ctx, 'fake') self.assertEqual('fake', data['name']) data = conductor.plugin_update( ctx, 'fake', {'version_labels': {'0.1': {'enabled': False}}}) data = conductor.plugin_get(ctx, 'fake') self.assertEqual( {'0.1': {'enabled': False}}, data.get('version_labels')) with testtools.ExpectedException(ex.NotFoundException): conductor.plugin_update(ctx, 'fake_not_found', {}) data = conductor.plugin_remove(ctx, 'fake') self.assertIsNone(data) data = conductor.plugin_get(ctx, 'fake') self.assertIsNone(data) with testtools.ExpectedException(ex.NotFoundException): conductor.plugin_remove(ctx, 'fake') sahara-12.0.0/sahara/tests/unit/plugins/test_labels.py0000664000175000017500000002254313656752032023006 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import jsonschema.exceptions as json_exc import testtools from unittest import mock from sahara import conductor as cond from sahara import context from sahara import exceptions as ex from sahara.plugins import base from sahara.tests.unit import base as unit_base from sahara.utils import api_validator conductor = cond.API EXPECTED_SCHEMA = { "type": "object", "additionalProperties": False, "properties": { "plugin_labels": { "type": "object", "additionalProperties": False, "properties": { "hidden": { "type": "object", "additionalProperties": False, "properties": { "status": { "type": "boolean" } } }, "stable": { "type": "object", "additionalProperties": False, "properties": { "status": { "type": "boolean" } } }, "enabled": { "type": "object", "additionalProperties": False, "properties": { "status": { "type": "boolean" } } }, "deprecated": { "type": "object", "additionalProperties": False, "properties": { "status": { "type": "boolean" } } } } }, "version_labels": { "type": "object", "additionalProperties": False, "properties": { "0.1": { "type": "object", "additionalProperties": False, "properties": { "hidden": { "type": "object", "additionalProperties": False, "properties": { "status": { "type": "boolean" } } }, "stable": { "type": "object", "additionalProperties": False, "properties": { "status": { "type": "boolean" } } }, "enabled": { "type": "object", "additionalProperties": False, "properties": { "status": { "type": "boolean" } } }, "deprecated": { "type": "object", "additionalProperties": False, "properties": { "status": { "type": "boolean" } } } } } } }, } } class TestPluginLabels(unit_base.SaharaWithDbTestCase): def test_validate_default_labels_load(self): self.override_config('plugins', 'fake') manager = base.PluginManager() for plugin in ['fake']: data = manager.label_handler.get_label_details(plugin) self.assertIsNotNone(data) # order doesn't play a role self.assertIsNotNone(data['plugin_labels']) self.assertEqual( sorted(list(manager.get_plugin(plugin).get_versions())), sorted(list(data.get('version_labels').keys()))) def test_get_label_full_details(self): self.override_config('plugins', ['fake']) lh = base.PluginManager().label_handler result = lh.get_label_full_details('fake') self.assertIsNotNone(result.get('plugin_labels')) self.assertIsNotNone(result.get('version_labels')) pl = result.get('plugin_labels') self.assertEqual( ['enabled', 'hidden'], sorted(list(pl.keys())) ) for lb in ['hidden', 'enabled']: self.assertEqual( ['description', 'mutable', 'status'], sorted(list(pl[lb])) ) vl = result.get('version_labels') self.assertEqual(['0.1'], list(vl.keys())) vl = vl.get('0.1') self.assertEqual( ['enabled'], list(vl.keys())) self.assertEqual( ['description', 'mutable', 'status'], sorted(list(vl['enabled'])) ) def test_validate_plugin_update(self): def validate(plugin_name, values, validator, lh): validator.validate(values) lh.validate_plugin_update(plugin_name, values) values = {'plugin_labels': {'enabled': {'status': False}}} self.override_config('plugins', ['fake']) lh = base.PluginManager() validator = api_validator.ApiValidator( 
lh.get_plugin_update_validation_jsonschema()) validate('fake', values, validator, lh) values = {'plugin_labels': {'not_exists': {'status': False}}} with testtools.ExpectedException(json_exc.ValidationError): validate('fake', values, validator, lh) values = {'plugin_labels': {'enabled': {'status': 'False'}}} with testtools.ExpectedException(json_exc.ValidationError): validate('fake', values, validator, lh) values = {'field': {'blala': 'blalalalalala'}} with testtools.ExpectedException(json_exc.ValidationError): validate('fake', values, validator, lh) values = {'version_labels': {'0.1': {'enabled': {'status': False}}}} validate('fake', values, validator, lh) values = {'version_labels': {'0.1': {'hidden': {'status': True}}}} with testtools.ExpectedException(ex.InvalidDataException): validate('fake', values, validator, lh) def test_jsonschema(self): self.override_config('plugins', ['fake']) lh = base.PluginManager() schema = lh.get_plugin_update_validation_jsonschema() self.assertEqual(EXPECTED_SCHEMA, schema) def test_update(self): self.override_config('plugins', ['fake']) lh = base.PluginManager() data = lh.update_plugin('fake', values={ 'plugin_labels': {'enabled': {'status': False}}}).dict # enabled is updated, but hidden still same self.assertFalse(data['plugin_labels']['enabled']['status']) self.assertTrue(data['plugin_labels']['hidden']['status']) data = lh.update_plugin('fake', values={ 'version_labels': {'0.1': {'enabled': {'status': False}}}}).dict self.assertFalse(data['plugin_labels']['enabled']['status']) self.assertTrue(data['plugin_labels']['hidden']['status']) self.assertFalse(data['version_labels']['0.1']['enabled']['status']) @mock.patch('sahara.plugins.labels.LOG.warning') def test_validate_plugin_labels(self, logger): self.override_config('plugins', ['fake']) lh = base.PluginManager() lh.validate_plugin_labels('fake', '0.1') self.assertEqual(0, logger.call_count) dct = { 'name': 'fake', 'version_labels': { '0.1': { 'deprecated': {'status': True}, 'enabled': {'status': True} } }, 'plugin_labels': { 'deprecated': {'status': True}, 'enabled': {'status': True} } } conductor.plugin_create(context.ctx(), dct) lh.validate_plugin_labels('fake', '0.1') self.assertEqual(2, logger.call_count) conductor.plugin_remove(context.ctx(), 'fake') dct['plugin_labels']['enabled']['status'] = False conductor.plugin_create(context.ctx(), dct) with testtools.ExpectedException(ex.InvalidReferenceException): lh.validate_plugin_labels('fake', '0.1') conductor.plugin_remove(context.ctx(), 'fake') dct['plugin_labels']['enabled']['status'] = True dct['version_labels']['0.1']['enabled']['status'] = False conductor.plugin_create(context.ctx(), dct) with testtools.ExpectedException(ex.InvalidReferenceException): lh.validate_plugin_labels('fake', '0.1') sahara-12.0.0/sahara/tests/unit/plugins/test_kerberos.py0000664000175000017500000001151713656752032023357 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from unittest import mock from sahara import context from sahara.plugins import kerberos as krb from sahara.tests.unit import base ADD_PRINCIPAL_SCRIPT = """#!/bin/bash mkdir -p /tmp/sahara-kerberos/ kadmin -p sahara/admin <= 1 wrap_it(data={"cluster_id": uuidutils.generate_uuid(), "job_configs": { "configs": { "edp.java.main_class": "org.me.class"}}}) @mock.patch('sahara.conductor.api.LocalApi.cluster_get') @mock.patch('sahara.conductor.api.LocalApi.job_get') def test_edp_main_class_java(self, job_get, cluster_get): job_get.return_value = mock.Mock(type=edp.JOB_TYPE_JAVA, interface=[]) ng = tu.make_ng_dict('master', 42, ['namenode', 'oozie'], 1, instances=[tu.make_inst_dict('id', 'name')]) cluster_get.return_value = tu.create_cluster("cluster", "tenant1", "fake", "0.1", [ng]) self._assert_create_object_validation( data={ "cluster_id": uuidutils.generate_uuid(), "job_configs": {"configs": {}, "params": {}, "args": [], "job_execution_info": {}} }, bad_req_i=(1, "INVALID_DATA", "%s job must " "specify edp.java.main_class" % edp.JOB_TYPE_JAVA)) self._assert_create_object_validation( data={ "cluster_id": uuidutils.generate_uuid(), "job_configs": { "configs": { "edp.java.main_class": ""}, "params": {}, "args": [], "job_execution_info": {}} }, bad_req_i=(1, "INVALID_DATA", "%s job must " "specify edp.java.main_class" % edp.JOB_TYPE_JAVA)) self._assert_create_object_validation( data={ "cluster_id": uuidutils.generate_uuid(), "job_configs": { "configs": { "edp.java.main_class": "org.me.myclass"}, "params": {}, "job_execution_info": {}, "args": []} }) @mock.patch('sahara.conductor.api.LocalApi.cluster_get') @mock.patch('sahara.conductor.api.LocalApi.job_get') def test_edp_main_class_spark(self, job_get, cluster_get): job_get.return_value = mock.Mock(type=edp.JOB_TYPE_SPARK, interface=[]) ng = tu.make_ng_dict('master', 42, ['namenode'], 1, instances=[tu.make_inst_dict('id', 'name')]) cluster_get.return_value = tu.create_cluster("cluster", "tenant1", "fake", "0.1", [ng]) self._assert_create_object_validation( data={ "cluster_id": uuidutils.generate_uuid(), "job_configs": {"configs": {}, "params": {}, "args": [], "job_execution_info": {}} }, bad_req_i=(1, "INVALID_DATA", "%s job must " "specify edp.java.main_class" % edp.JOB_TYPE_SPARK)) self._assert_create_object_validation( data={ "cluster_id": uuidutils.generate_uuid(), "job_configs": { "configs": { "edp.java.main_class": ""}, "params": {}, "args": [], "job_execution_info": {}} }, bad_req_i=(1, "INVALID_DATA", "%s job must " "specify edp.java.main_class" % edp.JOB_TYPE_SPARK)) self._assert_create_object_validation( data={ "cluster_id": uuidutils.generate_uuid(), "job_configs": { "configs": { "edp.java.main_class": "org.me.myclass"}, "params": {}, "job_execution_info": {}, "args": []} }) @mock.patch('oslo_utils.timeutils.utcnow') def test_invalid_start_time_in_job_execution_info(self, now_get): configs = {"start": "2015-07-21 14:32:52"} now = time.strptime("2015-07-22 14:39:14", "%Y-%m-%d %H:%M:%S") now = timeutils.datetime.datetime.fromtimestamp(time.mktime(now)) now_get.return_value = now with testtools.ExpectedException(ex.InvalidJobExecutionInfoException): je.check_scheduled_job_execution_info(configs) class TestJobExecUpdateValidation(u.ValidationTestCase): def setUp(self): super(TestJobExecUpdateValidation, self).setUp() self._create_object_fun = mock.Mock() self.scheme = je_schema.JOB_EXEC_UPDATE_SCHEMA def test_job_execution_update_types(self): data = { 'is_public': False, 'is_protected': False, 'info': { 'status': 'suspend' } } 
self._assert_types(data) def test_job_execution_update_nothing_required(self): self._assert_create_object_validation( data={ 'is_public': False, 'is_protected': False, 'info': { 'status': 'suspend' } } ) @mock.patch('sahara.conductor.api.LocalApi.job_execution_get') def test_je_update_when_protected(self, get_je_p): job_exec = mock.Mock(id='123', tenant_id='tenant_1', is_protected=True) get_je_p.return_value = job_exec # job execution can't be updated if it's marked as protected with testtools.ExpectedException(ex.UpdateFailedException): try: je.check_job_execution_update(job_exec, {'job_configs': {}}) except ex.UpdateFailedException as e: self.assert_protected_resource_exception(e) raise e # job execution can be updated because is_protected flag was # set to False je.check_job_execution_update( job_exec, {'is_protected': False, 'job_configs': {}}) @mock.patch('sahara.conductor.api.LocalApi.job_execution_get') def test_public_je_cancel_delete_from_another_tenant(self, get_je_p): job_exec = mock.Mock(id='123', tenant_id='tenant2', is_protected=False, is_public=True) get_je_p.return_value = job_exec with testtools.ExpectedException(ex.UpdateFailedException): try: je.check_job_execution_update( job_exec, data={'is_public': False}) except ex.UpdateFailedException as e: self.assert_created_in_another_tenant_exception(e) raise e class TestJobExecutionCancelDeleteValidation(u.ValidationTestCase): def setUp(self): super(TestJobExecutionCancelDeleteValidation, self).setUp() self.setup_context(tenant_id='tenant1') @mock.patch('sahara.conductor.api.LocalApi.job_execution_get') def test_je_cancel_delete_when_protected(self, get_je_p): job_exec = mock.Mock(id='123', tenant_id='tenant1', is_protected=True) get_je_p.return_value = job_exec with testtools.ExpectedException(ex.CancelingFailed): try: je.check_job_execution_cancel(job_exec) except ex.CancelingFailed as e: self.assert_protected_resource_exception(e) raise e with testtools.ExpectedException(ex.DeletionFailed): try: je.check_job_execution_delete(job_exec) except ex.DeletionFailed as e: self.assert_protected_resource_exception(e) raise e @mock.patch('sahara.conductor.api.LocalApi.job_execution_get') def test_public_je_cancel_delete_from_another_tenant(self, get_je_p): job_exec = mock.Mock(id='123', tenant_id='tenant2', is_protected=False, is_public=True) get_je_p.return_value = job_exec with testtools.ExpectedException(ex.CancelingFailed): try: je.check_job_execution_cancel(job_exec) except ex.CancelingFailed as e: self.assert_created_in_another_tenant_exception(e) raise e with testtools.ExpectedException(ex.DeletionFailed): try: je.check_job_execution_delete(job_exec) except ex.DeletionFailed as e: self.assert_created_in_another_tenant_exception(e) raise e sahara-12.0.0/sahara/tests/unit/service/validation/edp/test_job_binary.py0000664000175000017500000001040713656752032026537 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
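# Unit tests for EDP job binary validation: URL format checks and the
# credential requirements for internal-swift, internal-db and manila URLs.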
from unittest import mock from sahara.service.api import v10 as api from sahara.service.validations.edp import job_binary as b from sahara.service.validations.edp.job_binary import jb_manager from sahara.service.validations.edp import job_binary_schema as b_s from sahara.swift import utils as su from sahara.tests.unit.service.validation import utils as u class TestJobBinaryValidation(u.ValidationTestCase): def setUp(self): super(TestJobBinaryValidation, self).setUp() self._create_object_fun = b.check_job_binary self.scheme = b_s.JOB_BINARY_SCHEMA api.plugin_base.setup_plugins() jb_manager.setup_job_binaries() @mock.patch('sahara.utils.api_validator.jb_manager') def test_creation(self, mock_jb_manager): JOB_BINARIES = mock.Mock() mock_jb = mock.Mock() mock_jb_manager.JOB_BINARIES = JOB_BINARIES JOB_BINARIES.get_job_binary_by_url = mock.Mock(return_value=mock_jb) mock_jb.validate_job_location_format = mock.Mock(return_value=True) data = { "name": "main.jar", "url": "internal-db://3e4651a5-1f08-4880-94c4-596372b37c64", "extra": { "user": "user", "password": "password" }, "description": "long description" } self._assert_types(data) @mock.patch('sahara.utils.api_validator.jb_manager') def test_job_binary_create_swift(self, mock_jb_manager): JOB_BINARIES = mock.Mock() mock_jb = mock.Mock() mock_jb_manager.JOB_BINARIES = JOB_BINARIES JOB_BINARIES.get_job_binary_by_url = mock.Mock(return_value=mock_jb) mock_jb.validate_job_location_format = mock.Mock(return_value=True) self._assert_create_object_validation( data={ "name": "j_o_w", "url": su.SWIFT_INTERNAL_PREFIX + "o.sahara/k" }, bad_req_i=(1, "BAD_JOB_BINARY", "To work with JobBinary located in internal " "swift add 'user' and 'password' to extra")) self.override_config('use_domain_for_proxy_users', True) self._assert_create_object_validation( data={ "name": "j_o_w", "url": su.SWIFT_INTERNAL_PREFIX + "o.sahara/k" }) @mock.patch('sahara.utils.api_validator.jb_manager') def test_job_binary_create_internal(self, mock_jb_manager): JOB_BINARIES = mock.Mock() mock_jb = mock.Mock() mock_jb_manager.JOB_BINARIES = JOB_BINARIES JOB_BINARIES.get_job_binary_by_url = mock.Mock(return_value=mock_jb) mock_jb.validate_job_location_format = mock.Mock(return_value=False) self._assert_create_object_validation( data={ "name": "main.jar", "url": "internal-db://abacaba", }, bad_req_i=(1, "VALIDATION_ERROR", "url: 'internal-db://abacaba' is not a " "'valid_job_location'")) @mock.patch('sahara.utils.api_validator.jb_manager') def test_job_binary_create_manila(self, mock_jb_manager): JOB_BINARIES = mock.Mock() mock_jb = mock.Mock() mock_jb_manager.JOB_BINARIES = JOB_BINARIES JOB_BINARIES.get_job_binary_by_url = mock.Mock(return_value=mock_jb) mock_jb.validate_job_location_format = mock.Mock(return_value=False) self._assert_create_object_validation( data={ "name": "main.jar", "url": "manila://abacaba", }, bad_req_i=(1, "VALIDATION_ERROR", "url: 'manila://abacaba' is not a " "'valid_job_location'")) sahara-12.0.0/sahara/tests/unit/service/validation/test_cluster_delete_validation.py0000664000175000017500000000627613656752032031077 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import testtools from unittest import mock from sahara import exceptions as ex from sahara.service import validation as v from sahara.service.validations import clusters as c_val from sahara.service.validations import clusters_schema as c_schema from sahara.tests.unit.service.validation import utils as u from sahara.tests.unit import testutils as tu class TestClusterDeleteValidation(u.ValidationTestCase): def setUp(self): super(TestClusterDeleteValidation, self).setUp() self.setup_context(tenant_id='tenant1') @mock.patch('sahara.service.api.v10.get_cluster') def test_cluster_delete_when_protected(self, get_cluster_p): cluster = tu.create_cluster("cluster1", "tenant1", "fake", "0.1", ['ng1'], is_protected=True) get_cluster_p.return_value = cluster with testtools.ExpectedException(ex.DeletionFailed): try: c_val.check_cluster_delete(cluster.id) except ex.DeletionFailed as e: self.assert_protected_resource_exception(e) raise e @mock.patch('sahara.service.api.v10.get_cluster') def test_public_cluster_delete_from_another_tenant(self, get_cluster_p): cluster = tu.create_cluster("cluster1", "tenant2", "fake", "0.1", ['ng1'], is_public=True) get_cluster_p.return_value = cluster with testtools.ExpectedException(ex.DeletionFailed): try: c_val.check_cluster_delete(cluster.id) except ex.DeletionFailed as e: self.assert_created_in_another_tenant_exception(e) raise e class TestClusterDeleteValidationV2(testtools.TestCase): @mock.patch("sahara.utils.api.request_data") @mock.patch("sahara.utils.api.bad_request") def _validate_body(self, request, br, rd): m_func = mock.Mock() m_func.__name__ = "m_func" rd.return_value = request validator = v.validate(c_schema.CLUSTER_DELETE_SCHEMA_V2, m_func) validator(m_func)(data=request) return not br.call_count def test_delete_schema_empty_body(self): request = {} self.assertTrue(self._validate_body(request)) def test_delete_schema_wrong_type(self): request = {"force": "True"} self.assertFalse(self._validate_body(request)) def test_delete_schema_extra_fields(self): request = {"force": True, "just_kidding": False} self.assertFalse(self._validate_body(request)) def test_delete_schema_good(self): request = {"force": True} self.assertTrue(self._validate_body(request)) sahara-12.0.0/sahara/tests/unit/service/validation/test_cluster_update_validation.py0000664000175000017500000001324313656752032031107 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
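# Unit tests for cluster update validation: update schema checks,
# protected/public resource rules and verification status transitions.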
import testtools from unittest import mock from sahara import exceptions as ex from sahara.service.api import v10 as api from sahara.service.health import verification_base from sahara.service.validations import clusters as c_val from sahara.service.validations import clusters_schema as c_schema from sahara.tests.unit.service.validation import utils as u from sahara.tests.unit import testutils as tu class TestClusterUpdateValidation(u.ValidationTestCase): def setUp(self): super(TestClusterUpdateValidation, self).setUp() self._create_object_fun = mock.Mock() self.scheme = c_schema.CLUSTER_UPDATE_SCHEMA api.plugin_base.setup_plugins() def test_cluster_update_types(self): self._assert_types({ 'name': 'cluster', 'description': 'very big cluster', 'is_public': False, 'is_protected': False, 'shares': [] }) def test_cluster_update_nothing_required(self): self._assert_create_object_validation( data={} ) def test_cluster_update(self): self._assert_create_object_validation( data={ 'name': 'cluster', 'description': 'very big cluster', 'is_public': False, 'is_protected': False, 'shares': [] } ) self._assert_create_object_validation( data={ 'name': 'cluster', 'id': '1' }, bad_req_i=(1, "VALIDATION_ERROR", "Additional properties are not allowed " "('id' was unexpected)") ) @mock.patch('sahara.service.api.v10.get_cluster') def test_cluster_update_when_protected(self, get_cluster_p): cluster = tu.create_cluster("cluster1", "tenant_1", "fake", "0.1", ['ng1'], is_protected=True) get_cluster_p.return_value = cluster # cluster can't be updated if it's marked as protected with testtools.ExpectedException(ex.UpdateFailedException): try: c_val.check_cluster_update(cluster.id, {'name': 'new'}) except ex.UpdateFailedException as e: self.assert_protected_resource_exception(e) raise e # cluster can be updated because is_protected flag was set to False c_val.check_cluster_update( cluster.id, {'is_protected': False, 'name': 'new'}) @mock.patch('sahara.service.api.v10.get_cluster') def test_public_cluster_update_from_another_tenant(self, get_cluster_p): cluster = tu.create_cluster("cluster1", "tenant_2", "fake", "0.1", ['ng1'], is_public=True) get_cluster_p.return_value = cluster # cluster can't be updated from another tenant with testtools.ExpectedException(ex.UpdateFailedException): try: c_val.check_cluster_update(cluster.id, {'name': 'new'}) except ex.UpdateFailedException as e: self.assert_created_in_another_tenant_exception(e) raise e @mock.patch('sahara.conductor.API.cluster_get') def test_verifications_ops(self, get_cluster_mock): cluster = tu.create_cluster( 'cluster1', "tenant_1", "fake", "0.1", ['ng1'], status='Active') get_cluster_mock.return_value = cluster self.assertIsNone(c_val.check_cluster_update( cluster, {'verification': {'status': "START"}})) cluster = tu.create_cluster( 'cluster1', "tenant_1", "fake", "0.1", ['ng1'], status='Active', verification={'status': "CHECKING"}) get_cluster_mock.return_value = cluster with testtools.ExpectedException(verification_base.CannotVerifyError): c_val.check_cluster_update( cluster, {'verification': {'status': 'START'}}) cluster = tu.create_cluster( 'cluster1', "tenant_1", "fake", "0.1", ['ng1'], status='Active', verification={'status': "RED"}) get_cluster_mock.return_value = cluster self.assertIsNone(c_val.check_cluster_update( cluster, {'verification': {'status': "START"}})) with testtools.ExpectedException(verification_base.CannotVerifyError): c_val.check_cluster_update(cluster, { 'is_public': True, 'verification': {'status': "START"}}) # allow verification for 
protected resource cluster = tu.create_cluster( 'cluster1', "tenant_1", "fake", "0.1", ['ng1'], is_protected=True, status='Active') get_cluster_mock.return_value = cluster self.assertIsNone(c_val.check_cluster_update( cluster, {'verification': {'status': "START"}})) # just for sure that protected works nicely for other with testtools.ExpectedException(ex.UpdateFailedException): try: c_val.check_cluster_update(cluster.id, {'name': 'new'}) except ex.UpdateFailedException as e: self.assert_protected_resource_exception(e) raise e sahara-12.0.0/sahara/tests/unit/service/validation/test_cluster_template_update_validation.py0000664000175000017500000000433213656752032033001 0ustar zuulzuul00000000000000# Copyright (c) 2015 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import copy from unittest import mock from sahara.service.api import v10 as api from sahara.service.validations import cluster_template_schema as ct_schema from sahara.tests.unit.service.validation import utils as u SAMPLE_DATA = { 'name': 'testname', 'plugin_name': 'fake', 'hadoop_version': '0.1', 'is_public': False, 'is_protected': False } class TestClusterTemplateUpdateValidation(u.ValidationTestCase): def setUp(self): super(TestClusterTemplateUpdateValidation, self).setUp() self._create_object_fun = mock.Mock() self.scheme = ct_schema.CLUSTER_TEMPLATE_UPDATE_SCHEMA api.plugin_base.setup_plugins() def test_cluster_template_update_nothing_required(self): self._assert_create_object_validation( data={} ) def test_cluster_template_update_schema(self): create = copy.copy(ct_schema.CLUSTER_TEMPLATE_SCHEMA) update = copy.copy(ct_schema.CLUSTER_TEMPLATE_UPDATE_SCHEMA) # No required items for update self.assertEqual([], update["required"]) # Other than required, schemas are equal del update["required"] del create["required"] self.assertEqual(create, update) def test_cluster_template_update(self): self._assert_create_object_validation( data=SAMPLE_DATA ) extra = copy.copy(SAMPLE_DATA) extra['dog'] = 'fido' self._assert_create_object_validation( data=extra, bad_req_i=(1, "VALIDATION_ERROR", "Additional properties are not allowed " "('dog' was unexpected)") ) sahara-12.0.0/sahara/tests/unit/service/validation/test_share_validations.py0000664000175000017500000001123313656752032027346 0ustar zuulzuul00000000000000# Copyright (c) 2015 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
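# Unit tests for manila share validation: share protocol, mount path rules
# and share existence checks.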
try: from manilaclient.common.apiclient import exceptions as manila_ex except ImportError: from manilaclient.openstack.common.apiclient import exceptions as manila_ex from unittest import mock from sahara.service.validations import shares from sahara.tests.unit.service.validation import utils as u class TestShareValidations(u.ValidationTestCase): def setUp(self): super(TestShareValidations, self).setUp() self._create_object_fun = shares.check_shares self.scheme = shares.SHARE_SCHEMA @mock.patch('sahara.utils.openstack.manila.client') def test_shares(self, f_client): f_client.return_value = mock.Mock( shares=mock.Mock( get=mock.Mock( return_value=mock.Mock(share_proto='NFS')))) self._assert_create_object_validation(data=[ { "id": "12345678-1234-1234-1234-123456789012", "path": "/path", "access_level": 'rw' }]) @mock.patch('sahara.utils.openstack.manila.client') def test_shares_bad_type(self, f_client): f_client.return_value = mock.Mock( shares=mock.Mock( get=mock.Mock( return_value=mock.Mock(share_proto='Mackerel')))) self._assert_create_object_validation( data=[ { "id": "12345678-1234-1234-1234-123456789012", "path": "/path", "access_level": 'rw' }], bad_req_i=(1, 'INVALID_REFERENCE', "Requested share id " "12345678-1234-1234-1234-123456789012 is of type " "Mackerel, which is not supported by Sahara.")) @mock.patch('sahara.utils.openstack.manila.client') def test_shares_overlapping_paths(self, f_client): self._assert_create_object_validation( data=[ { "id": "12345678-1234-1234-1234-123456789012", "path": "/path", }, { "id": "DEADBEEF-DEAD-BEEF-DEAD-BEEFDEADBEEF", "path": "/path" }], bad_req_i=(1, 'INVALID_DATA', "Multiple shares cannot be mounted to the same path.")) self.assertEqual(0, f_client.call_count) @mock.patch('sahara.utils.openstack.manila.client') def test_shares_no_share_exists(self, f_client): f_client.return_value = mock.Mock( shares=mock.Mock( get=mock.Mock( side_effect=manila_ex.NotFound))) self._assert_create_object_validation( data=[ { "id": "12345678-1234-1234-1234-123456789012", "path": "/path" }], bad_req_i=( 1, 'INVALID_REFERENCE', "Requested share id 12345678-1234-1234-1234-123456789012 does " "not exist.")) @mock.patch('sahara.utils.openstack.manila.client') def test_shares_bad_paths(self, f_client): self._assert_create_object_validation( data=[ { "id": "12345678-1234-1234-1234-123456789012", "path": "path" }], bad_req_i=( 1, 'INVALID_DATA', 'Paths must be absolute Linux paths starting with "/" ' 'and may not contain nulls.')) self._assert_create_object_validation( data=[ { "id": "12345678-1234-1234-1234-123456789012", "path": "\x00" }], bad_req_i=( 1, 'INVALID_DATA', 'Paths must be absolute Linux paths starting with "/" ' 'and may not contain nulls.')) self.assertEqual(0, f_client.call_count) @mock.patch('sahara.utils.openstack.manila.client') def test_shares_no_shares(self, f_client): self._assert_create_object_validation(data=[]) self.assertEqual(0, f_client.call_count) sahara-12.0.0/sahara/tests/unit/service/validation/test_cluster_create_validation.py0000664000175000017500000006074013656752032031074 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from unittest import mock import six import testtools from sahara import exceptions from sahara.service.api import v10 as api from sahara.service.validations import clusters as c from sahara.service.validations import clusters_schema as c_schema from sahara.tests.unit import base from sahara.tests.unit.service.validation import utils as u class TestClusterCreateValidation(u.ValidationTestCase): def setUp(self): super(TestClusterCreateValidation, self).setUp() self._create_object_fun = c.check_cluster_create self.scheme = c_schema.CLUSTER_SCHEMA api.plugin_base.setup_plugins() def test_cluster_create_v_plugin_vers(self): self._assert_create_object_validation( data={ 'name': 'testname', 'plugin_name': 'fake', 'hadoop_version': '1' }, bad_req_i=(1, "INVALID_REFERENCE", "Requested plugin 'fake' " "doesn't support version '1'"), ) def test_cluster_create_v_required(self): self._assert_create_object_validation( data={}, bad_req_i=(1, "VALIDATION_ERROR", u"'name' is a required property") ) self._assert_create_object_validation( data={ 'name': 'test-name' }, bad_req_i=(1, "VALIDATION_ERROR", u"'plugin_name' is a required property") ) self._assert_create_object_validation( data={ 'name': 'testname', 'plugin_name': 'fake' }, bad_req_i=(1, "VALIDATION_ERROR", u"'hadoop_version' is a required property") ) def test_cluster_create_v_types(self): data = { 'name': "testname", 'plugin_name': "fake", 'hadoop_version': "0.1" } self._assert_types(data) def test_cluster_create_v_name_base(self): data = { 'name': "testname", 'plugin_name': "fake", 'hadoop_version': "0.1" } self._assert_valid_name_hostname_validation(data) def test_cluster_create_v_unique_cl(self): data = { 'name': 'test', 'plugin_name': 'fake', 'hadoop_version': '0.1' } self._assert_create_object_validation( data=data, bad_req_i=(1, 'NAME_ALREADY_EXISTS', "Cluster with name 'test' already exists") ) def test_cluster_create_v_keypair_exists(self): self._assert_create_object_validation( data={ 'name': "testname", 'plugin_name': "fake", 'hadoop_version': "0.1", 'user_keypair_id': 'wrong_keypair' }, bad_req_i=(1, 'NOT_FOUND', "Requested keypair 'wrong_keypair' not found") ) def test_cluster_create_v_keypair_type(self): self._assert_create_object_validation( data={ 'name': "test-name", 'plugin_name': "fake", 'hadoop_version': "0.1", 'user_keypair_id': '!'}, bad_req_i=(1, 'VALIDATION_ERROR', "user_keypair_id: '!' 
is not a 'valid_keypair_name'") ) def test_cluster_create_v_image_exists(self): self._assert_create_object_validation( data={ 'name': "test-name", 'plugin_name': "fake", 'hadoop_version': "0.1", 'default_image_id': '550e8400-e29b-41d4-a616-446655440000' }, bad_req_i=(1, 'INVALID_REFERENCE', "Requested image '550e8400-e29b-41d4-a616-446655440000'" " is not registered") ) def test_cluster_create_v_plugin_name_exists(self): self._assert_create_object_validation( data={ 'name': "test-name", 'plugin_name': "wrong_plugin", 'hadoop_version': "0.1", }, bad_req_i=(1, 'INVALID_REFERENCE', "Sahara doesn't contain plugin " "with name 'wrong_plugin'") ) def test_cluster_create_v_wrong_network(self): self._assert_create_object_validation( data={ 'name': "test-name", 'plugin_name': "fake", 'hadoop_version': "0.1", 'default_image_id': '550e8400-e29b-41d4-a716-446655440000', 'neutron_management_network': '53a36917-ab9f-4589-' '94ce-b6df85a68332' }, bad_req_i=(1, 'NOT_FOUND', "Network 53a36917-ab9f-4589-" "94ce-b6df85a68332 not found") ) def test_cluster_create_v_missing_network(self): self._assert_create_object_validation( data={ 'name': "test-name", 'plugin_name': "fake", 'hadoop_version': "0.1", 'default_image_id': '550e8400-e29b-41d4-a716-446655440000' }, bad_req_i=(1, 'NOT_FOUND', "'neutron_management_network' field is not found") ) def test_cluster_create_v_long_instance_names(self): self._assert_create_object_validation( data={ 'name': "long-long-cluster-name", 'plugin_name': "fake", 'hadoop_version': "0.1", 'default_image_id': '550e8400-e29b-41d4-a716-446655440000', 'neutron_management_network': 'd9a3bebc-f788-4b81-' '9a93-aa048022c1ca', 'node_groups': [ { "name": "long-long-long-very-long-node-group-name", "node_processes": ["namenode"], "flavor_id": "42", "count": 100, } ] }, bad_req_i=(1, 'INVALID_DATA', "Composite hostname long-long-cluster-name-long-long-" "long-very-long-node-group-name-100.novalocal " "in provisioned cluster exceeds maximum limit 64 " "characters") ) def test_cluster_create_v_cluster_configs(self): self._assert_cluster_configs_validation(True) def test_cluster_create_v_right_data(self): self._assert_create_object_validation( data={ 'name': "testname", 'plugin_name': "fake", 'hadoop_version': "0.1", 'user_keypair_id': 'test_keypair', 'cluster_configs': { 'general': { u'Enable NTP service': True } }, 'default_image_id': '550e8400-e29b-41d4-a716-446655440000', 'neutron_management_network': 'd9a3bebc-f788-4b81-' '9a93-aa048022c1ca' } ) def test_cluster_create_v_default_image_required_tags(self): self._assert_cluster_default_image_tags_validation() def test_cluster_create_security_groups(self): self._assert_create_object_validation( data={ 'name': "testname", 'plugin_name': "fake", 'hadoop_version': "0.1", 'user_keypair_id': 'test_keypair', 'default_image_id': '550e8400-e29b-41d4-a716-446655440000', 'neutron_management_network': 'd9a3bebc-f788-4b81-' '9a93-aa048022c1ca', 'node_groups': [ { "name": "nodegroup", "node_processes": ["namenode"], "flavor_id": "42", "count": 100, 'security_groups': ['group1', 'group2'], 'floating_ip_pool': 'd9a3bebc-f788-4b81-9a93-aa048022c1ca' } ] } ) def test_cluster_create_missing_floating_pool(self): self._assert_create_object_validation( data={ 'name': "testname", 'plugin_name': "fake", 'hadoop_version': "0.1", 'user_keypair_id': 'test_keypair', 'default_image_id': '550e8400-e29b-41d4-a716-446655440000', 'neutron_management_network': 'd9a3bebc-f788-4b81-' '9a93-aa048022c1ca', 'node_groups': [ { "name": "ng1", "node_processes": ["namenode"], 
"flavor_id": "42", "count": 100, 'security_groups': ['group1', 'group2'], 'floating_ip_pool': 'd9a3bebc-f788-4b81-9a93-aa048022c1ca' }, { "name": "ng2", "node_processes": ["datanode"], "flavor_id": "42", "count": 100, 'security_groups': ['group1', 'group2'] } ] } ) def test_cluster_create_with_proxy_gateway(self): self._assert_create_object_validation( data={ 'name': "testname", 'plugin_name': "fake", 'hadoop_version': "0.1", 'user_keypair_id': 'test_keypair', 'default_image_id': '550e8400-e29b-41d4-a716-446655440000', 'neutron_management_network': 'd9a3bebc-f788-4b81-' '9a93-aa048022c1ca', 'node_groups': [ { "name": "ng1", "node_processes": ["namenode"], "flavor_id": "42", "count": 100, 'security_groups': ['group1', 'group2'], 'floating_ip_pool': 'd9a3bebc-f788-4b81-9a93-aa048022c1ca', "is_proxy_gateway": True }, { "name": "ng2", "node_processes": ["datanode"], "flavor_id": "42", "count": 100, 'security_groups': ['group1', 'group2'] } ] } ) def test_cluster_create_security_groups_by_ids(self): self._assert_create_object_validation( data={ 'name': "testname", 'plugin_name': "fake", 'hadoop_version': "0.1", 'user_keypair_id': 'test_keypair', 'default_image_id': '550e8400-e29b-41d4-a716-446655440000', 'neutron_management_network': 'd9a3bebc-f788-4b81-' '9a93-aa048022c1ca', 'node_groups': [ { "name": "nodegroup", "node_processes": ["namenode"], "flavor_id": "42", "count": 100, 'security_groups': ['2', '3'], 'floating_ip_pool': 'd9a3bebc-f788-4b81-9a93-aa048022c1ca' } ] } ) def test_cluster_missing_security_groups(self): self._assert_create_object_validation( data={ 'name': "testname", 'plugin_name': "fake", 'hadoop_version': "0.1", 'user_keypair_id': 'test_keypair', 'default_image_id': '550e8400-e29b-41d4-a716-446655440000', 'neutron_management_network': 'd9a3bebc-f788-4b81-' '9a93-aa048022c1ca', 'node_groups': [ { "name": "nodegroup", "node_processes": ["namenode"], "flavor_id": "42", "count": 100, 'security_groups': ['group1', 'group3'], 'floating_ip_pool': 'd9a3bebc-f788-4b81-9a93-aa048022c1ca' } ] }, bad_req_i=(1, 'NOT_FOUND', "Security group 'group3' not found") ) def test_cluster_create_availability_zone(self): self._assert_create_object_validation( data={ 'name': 'testname', 'plugin_name': 'fake', 'hadoop_version': '0.1', 'user_keypair_id': 'test_keypair', 'default_image_id': '550e8400-e29b-41d4-a716-446655440000', 'neutron_management_network': 'd9a3bebc-f788-4b81-' '9a93-aa048022c1ca', 'node_groups': [ { 'name': 'nodegroup', 'node_processes': ['namenode'], 'flavor_id': '42', 'count': 100, 'security_groups': [], 'floating_ip_pool': 'd9a3bebc-f788-4b81-9a93-aa048022c1ca', 'availability_zone': 'nova', 'volumes_per_node': 1, 'volumes_size': 1, 'volumes_availability_zone': 'nova' } ] } ) def test_cluster_create_wrong_availability_zone(self): self._assert_create_object_validation( data={ 'name': 'testname', 'plugin_name': 'fake', 'hadoop_version': '0.1', 'user_keypair_id': 'test_keypair', 'default_image_id': '550e8400-e29b-41d4-a716-446655440000', 'neutron_management_network': 'd9a3bebc-f788-4b81-' '9a93-aa048022c1ca', 'node_groups': [ { 'name': 'nodegroup', 'node_processes': ['namenode'], 'flavor_id': '42', 'count': 100, 'security_groups': [], 'floating_ip_pool': 'd9a3bebc-f788-4b81-9a93-aa048022c1ca', 'availability_zone': 'nonexistent' } ] }, bad_req_i=(1, 'NOT_FOUND', "Nova availability zone 'nonexistent' not found") ) def test_cluster_create_wrong_volumes_availability_zone(self): self._assert_create_object_validation( data={ 'name': 'testname', 'plugin_name': 'fake', 'hadoop_version': 
'0.1', 'user_keypair_id': 'test_keypair', 'default_image_id': '550e8400-e29b-41d4-a716-446655440000', 'neutron_management_network': 'd9a3bebc-f788-4b81-' '9a93-aa048022c1ca', 'node_groups': [ { 'name': 'nodegroup', 'node_processes': ['namenode'], 'flavor_id': '42', 'count': 100, 'security_groups': [], 'floating_ip_pool': 'd9a3bebc-f788-4b81-9a93-aa048022c1ca', 'volumes_per_node': 1, 'volumes_availability_zone': 'nonexistent' } ] }, bad_req_i=(1, 'NOT_FOUND', "Cinder availability zone 'nonexistent' not found") ) class TestClusterCreateFlavorValidation(base.SaharaWithDbTestCase): """Tests for valid flavor on cluster create. The following use cases for flavors during cluster create are validated: * Flavor id defined in a node group template and used in a cluster template. * Flavor id defined in node groups on cluster create. * Both node groups and cluster template defined on cluster create. * Node groups with node group template defined on cluster create. """ def setUp(self): super(TestClusterCreateFlavorValidation, self).setUp() self.override_config('plugins', ['fake']) modules = [ "sahara.service.validations.base.check_plugin_name_exists", "sahara.service.validations.base.check_plugin_supports_version", "sahara.service.validations.base._get_plugin_configs", "sahara.service.validations.base.check_node_processes", ] self.patchers = [] for module in modules: patch = mock.patch(module) patch.start() self.patchers.append(patch) nova_p = mock.patch("sahara.utils.openstack.nova.client") nova = nova_p.start() self.patchers.append(nova_p) nova().flavors.list.side_effect = u._get_flavors_list api.plugin_base.setup_plugins() def tearDown(self): u.stop_patch(self.patchers) super(TestClusterCreateFlavorValidation, self).tearDown() def _create_node_group_template(self, flavor='42'): ng_tmpl = { "plugin_name": "fake", "hadoop_version": "0.1", "node_processes": ["namenode"], "name": "master", "flavor_id": flavor } return api.create_node_group_template(ng_tmpl).id def _create_cluster_template(self, ng_id): cl_tmpl = { "plugin_name": "fake", "hadoop_version": "0.1", "node_groups": [ {"name": "master", "count": 1, "node_group_template_id": "%s" % ng_id} ], "name": "test-template" } return api.create_cluster_template(cl_tmpl).id def test_cluster_create_v_correct_flavor(self): ng_id = self._create_node_group_template(flavor='42') ctmpl_id = self._create_cluster_template(ng_id) data = { "name": "testname", "plugin_name": "fake", "hadoop_version": "0.1", "cluster_template_id": '%s' % ctmpl_id, "neutron_management_network": "d9a3bebc-f788-4b81-" "9a93-aa048022c1ca", 'default_image_id': '550e8400-e29b-41d4-a716-446655440000' } patchers = u.start_patch(False) c.check_cluster_create(data) u.stop_patch(patchers) data1 = { "name": "testwithnodegroups", "plugin_name": "fake", "hadoop_version": "0.1", "neutron_management_network": "d9a3bebc-f788-4b81-" "9a93-aa048022c1ca", "node_groups": [ { "name": "allinone", "count": 1, "flavor_id": "42", "node_processes": [ "namenode", "jobtracker", "datanode", "tasktracker" ] } ], 'default_image_id': '550e8400-e29b-41d4-a716-446655440000' } patchers = u.start_patch(False) c.check_cluster_create(data1) u.stop_patch(patchers) def test_cluster_create_v_invalid_flavor(self): ng_id = self._create_node_group_template(flavor='10') ctmpl_id = self._create_cluster_template(ng_id) data = { "name": "testname", "plugin_name": "fake", "hadoop_version": "0.1", "cluster_template_id": '%s' % ctmpl_id, 'default_image_id': '550e8400-e29b-41d4-a716-446655440000' } data1 = { "name": 
"testwithnodegroups", "plugin_name": "fake", "hadoop_version": "0.1", "neutron_management_network": "d9a3bebc-f788-4b81-" "9a93-aa048022c1ca", "node_groups": [ { "name": "allinone", "count": 1, "flavor_id": "10", "node_processes": [ "namenode", "resourcemanager", "datanode", "nodemanager" ] } ], 'default_image_id': '550e8400-e29b-41d4-a716-446655440000' } for values in [data, data1]: with testtools.ExpectedException( exceptions.NotFoundException): patchers = u.start_patch(False) try: c.check_cluster_create(values) except exceptions.NotFoundException as e: message = six.text_type(e).split('\n')[0] self.assertEqual("Requested flavor '10' not found", message) raise e finally: u.stop_patch(patchers) def test_cluster_create_cluster_tmpl_node_group_mixin(self): ng_id = self._create_node_group_template(flavor='10') ctmpl_id = self._create_cluster_template(ng_id) data = { "name": "testtmplnodegroups", "plugin_name": "fake", "hadoop_version": "0.1", "cluster_template_id": '%s' % ctmpl_id, "neutron_management_network": "d9a3bebc-f788-4b81-" "9a93-aa048022c1ca", 'default_image_id': '550e8400-e29b-41d4-a716-446655440000', "node_groups": [ { "name": "allinone", "count": 1, "flavor_id": "42", "node_processes": [ "namenode", "resourcemanager", "datanode", "nodemanager" ] } ] } patchers = u.start_patch(False) c.check_cluster_create(data) u.stop_patch(patchers) def test_cluster_create_node_group_tmpl_mixin(self): ng_id = self._create_node_group_template(flavor='23') data = { "name": "testtmplnodegroups", "plugin_name": "fake", "hadoop_version": "0.1", "neutron_management_network": "d9a3bebc-f788-4b81-" "9a93-aa048022c1ca", "node_groups": [ { "node_group_template_id": '%s' % ng_id, "name": "allinone", "count": 1, "flavor_id": "42", "node_processes": [ "namenode", "resourcemanager", "datanode", "nodemanager" ] }, ], 'default_image_id': '550e8400-e29b-41d4-a716-446655440000' } with testtools.ExpectedException(exceptions.NotFoundException): patchers = u.start_patch(False) try: c.check_cluster_create(data) except exceptions.NotFoundException as e: message = six.text_type(e).split('\n')[0] self.assertEqual("Requested flavor '23' not found", message) raise e finally: u.stop_patch(patchers) sahara-12.0.0/sahara/tests/unit/service/test_periodic.py0000664000175000017500000002530713656752032023322 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
import datetime from unittest import mock from oslo_utils import timeutils from sahara.conductor import manager from sahara import context from sahara.service.castellan import config as castellan import sahara.service.periodic as p import sahara.tests.unit.base as base from sahara.tests.unit.conductor.manager import test_clusters as tc from sahara.tests.unit.conductor.manager import test_edp as te from sahara.utils import cluster as c_u class TestPeriodicBack(base.SaharaWithDbTestCase): def setUp(self): super(TestPeriodicBack, self).setUp() self.api = manager.ConductorManager() castellan.validate_config() @mock.patch('sahara.service.edp.job_manager.get_job_status') def test_job_status_update(self, get_job_status): ctx = context.ctx() job = self.api.job_create(ctx, te.SAMPLE_JOB) ds = self.api.data_source_create(ctx, te.SAMPLE_DATA_SOURCE) self._create_job_execution({"end_time": datetime.datetime.now(), "id": 1}, job, ds, ds) self._create_job_execution({"end_time": None, "id": 2}, job, ds, ds) self._create_job_execution({"end_time": None, "id": 3}, job, ds, ds) p._make_periodic_tasks().update_job_statuses(None) self.assertEqual(2, get_job_status.call_count) get_job_status.assert_has_calls([mock.call(u'2'), mock.call(u'3')]) @mock.patch('sahara.service.trusts.use_os_admin_auth_token') @mock.patch('sahara.service.api.v10.terminate_cluster') def test_transient_cluster_terminate(self, terminate_cluster, use_os_admin_auth_token): timeutils.set_time_override(datetime.datetime(2005, 2, 1, 0, 0)) ctx = context.ctx() job = self.api.job_create(ctx, te.SAMPLE_JOB) ds = self.api.data_source_create(ctx, te.SAMPLE_DATA_SOURCE) self._make_cluster('1') self._make_cluster('2') self._create_job_execution({"end_time": timeutils.utcnow(), "id": 1, "cluster_id": "1"}, job, ds, ds) self._create_job_execution({"end_time": None, "id": 2, "cluster_id": "2"}, job, ds, ds) self._create_job_execution({"end_time": None, "id": 3, "cluster_id": "2"}, job, ds, ds) timeutils.set_time_override(datetime.datetime(2005, 2, 1, 0, 1)) p._make_periodic_tasks().terminate_unneeded_transient_clusters(None) self.assertEqual(1, terminate_cluster.call_count) terminate_cluster.assert_has_calls([mock.call(u'1')]) self.assertEqual(1, use_os_admin_auth_token.call_count) @mock.patch('sahara.service.api.v10.terminate_cluster') def test_not_transient_cluster_does_not_terminate(self, terminate_cluster): timeutils.set_time_override(datetime.datetime(2005, 2, 1, 0, 0)) self._make_cluster('1', is_transient=False) timeutils.set_time_override(datetime.datetime(2005, 2, 1, 0, 1)) p._make_periodic_tasks().terminate_unneeded_transient_clusters(None) self.assertEqual(0, terminate_cluster.call_count) @mock.patch('sahara.service.api.v10.terminate_cluster') def test_transient_cluster_not_killed_too_early(self, terminate_cluster): timeutils.set_time_override(datetime.datetime(2005, 2, 1, second=0)) self._make_cluster('1') timeutils.set_time_override(datetime.datetime(2005, 2, 1, second=20)) p._make_periodic_tasks().terminate_unneeded_transient_clusters(None) self.assertEqual(0, terminate_cluster.call_count) @mock.patch('sahara.service.trusts.use_os_admin_auth_token') @mock.patch('sahara.service.api.v10.terminate_cluster') def test_transient_cluster_killed_in_time(self, terminate_cluster, use_os_admin_auth_token): timeutils.set_time_override(datetime.datetime(2005, 2, 1, second=0)) self._make_cluster('1') timeutils.set_time_override(datetime.datetime(2005, 2, 1, second=40)) p._make_periodic_tasks().terminate_unneeded_transient_clusters(None) 
self.assertEqual(1, terminate_cluster.call_count) terminate_cluster.assert_has_calls([mock.call(u'1')]) self.assertEqual(1, use_os_admin_auth_token.call_count) @mock.patch('sahara.service.api.v10.terminate_cluster') def test_incomplete_cluster_not_killed_too_early(self, terminate_cluster): self.override_config('cleanup_time_for_incomplete_clusters', 1) timeutils.set_time_override(datetime.datetime(2005, 2, 1, second=0)) self._make_cluster('1', c_u.CLUSTER_STATUS_SPAWNING) timeutils.set_time_override(datetime.datetime( 2005, 2, 1, minute=59, second=50)) p._make_periodic_tasks().terminate_incomplete_clusters(None) self.assertEqual(0, terminate_cluster.call_count) @mock.patch('sahara.service.trusts.use_os_admin_auth_token') @mock.patch('sahara.service.api.v10.terminate_cluster') def test_incomplete_cluster_killed_in_time(self, terminate_cluster, use_os_admin_auth_token): self.override_config('cleanup_time_for_incomplete_clusters', 1) timeutils.set_time_override(datetime.datetime(2005, 2, 1, second=0)) self._make_cluster('1', c_u.CLUSTER_STATUS_SPAWNING) timeutils.set_time_override(datetime.datetime( 2005, 2, 1, hour=1, second=10)) p._make_periodic_tasks().terminate_incomplete_clusters(None) self.assertEqual(1, terminate_cluster.call_count) terminate_cluster.assert_has_calls([mock.call(u'1')]) self.assertEqual(1, use_os_admin_auth_token.call_count) @mock.patch('sahara.service.api.v10.terminate_cluster') def test_active_cluster_not_killed_as_inactive( self, terminate_cluster): self.override_config('cleanup_time_for_incomplete_clusters', 1) timeutils.set_time_override(datetime.datetime(2005, 2, 1, second=0)) self._make_cluster('1') timeutils.set_time_override(datetime.datetime( 2005, 2, 1, hour=1, second=10)) p._make_periodic_tasks().terminate_incomplete_clusters(None) self.assertEqual(0, terminate_cluster.call_count) @mock.patch("sahara.utils.proxy.proxy_domain_users_list") @mock.patch("sahara.utils.proxy.proxy_user_delete") @mock.patch("sahara.service.periodic.conductor.job_execution_get") def test_check_for_zombie_proxy_users(self, mock_conductor_je_get, mock_user_delete, mock_users_list): user_0 = mock.MagicMock() user_0.name = "admin" user_0.id = 0 user_1 = mock.MagicMock() user_1.name = "job_0" user_1.id = 1 user_2 = mock.MagicMock() user_2.name = "job_1" user_2.id = 2 mock_users_list.return_value = [user_0, user_1, user_2] je_0 = mock.MagicMock() je_0.id = 0 je_0.info = {"status": "KILLED"} je_1 = mock.MagicMock() je_1.id = 1 je_1.info = {"status": "WAITING"} mock_conductor_je_get.side_effect = [je_0, je_1] p._make_periodic_tasks().check_for_zombie_proxy_users(None) mock_user_delete.assert_called_once_with(user_id=1) @mock.patch( 'sahara.service.health.verification_base.validate_verification_start') @mock.patch('sahara.service.api.v10.update_cluster') def test_run_verifications_executed(self, cluster_update, ver_valid): self._make_cluster('1') p._make_periodic_tasks().run_verifications(None) self.assertEqual(1, ver_valid.call_count) cluster_update.assert_called_once_with( '1', {'verification': {'status': 'START'}}) @mock.patch( 'sahara.service.health.verification_base.validate_verification_start') @mock.patch('sahara.service.api.v10.update_cluster') def test_run_verifications_not_executed(self, cluster_update, ver_valid): self._make_cluster('1', status=c_u.CLUSTER_STATUS_ERROR) p._make_periodic_tasks().run_verifications(None) ver_valid.assert_not_called() cluster_update.assert_not_called() @mock.patch("sahara.service.periodic.threadgroup") @mock.patch("sahara.service.periodic.CONF") def 
test_setup_enabled(self, mock_conf, mock_thread_group): mock_conf.periodic_enable = True mock_conf.periodic_fuzzy_delay = 20 mock_conf.periodic_interval_max = 30 mock_conf.periodic_workers_number = 1 mock_conf.periodic_coordinator_backend_url = '' add_timer = mock_thread_group.ThreadGroup().add_dynamic_timer p.setup() self.assertTrue(add_timer._mock_called) @mock.patch("sahara.service.periodic.threadgroup") @mock.patch("sahara.service.periodic.CONF") def test_setup_disabled(self, mock_conf, mock_thread_group): mock_conf.periodic_enable = False add_timer = mock_thread_group.ThreadGroup().add_dynamic_timer p.setup() self.assertFalse(add_timer._mock_called) def _make_cluster(self, id_name, status=c_u.CLUSTER_STATUS_ACTIVE, is_transient=True): ctx = context.ctx() c = tc.SAMPLE_CLUSTER.copy() c["is_transient"] = is_transient c["status"] = status c["id"] = id_name c["name"] = id_name c['updated_at'] = timeutils.utcnow() c['trust_id'] = 'DEADBEEF-DEAD-BEEF-DEAD-BEEFDEADBEEF' self.api.cluster_create(ctx, c) def _create_job_execution(self, values, job, input, output): values.update({"job_id": job['id'], "input_id": input['id'], "output_id": output['id']}) self.api.job_execution_create(context.ctx(), values) sahara-12.0.0/sahara/tests/unit/service/test_sessions.py0000664000175000017500000001466613656752032023400 0ustar zuulzuul00000000000000# Copyright (c) 2015 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
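# Unit tests for the keystoneauth session cache: per-service CA file and
# insecure (verify=False) handling, plus reuse of already created sessions.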
from unittest import mock from keystoneauth1 import session as keystone from sahara import exceptions as ex from sahara.service import sessions from sahara.tests.unit import base class TestSessionCache(base.SaharaTestCase): def test_get_session(self): sc = sessions.SessionCache() session = sc.get_session() self.assertIsInstance(session, keystone.Session) self.assertRaises(ex.SaharaException, sc.get_session, session_type='bad service') @mock.patch('keystoneauth1.session.Session') def test_get_keystone_session(self, keystone_session): sc = sessions.SessionCache() self.override_config('ca_file', '/some/cacert', group='keystone') self.override_config('api_insecure', False, group='keystone') sc.get_session(sessions.SESSION_TYPE_KEYSTONE) keystone_session.assert_called_once_with(verify='/some/cacert') sc = sessions.SessionCache() keystone_session.reset_mock() self.override_config('ca_file', None, group='keystone') self.override_config('api_insecure', True, group='keystone') sc.get_session(sessions.SESSION_TYPE_KEYSTONE) keystone_session.assert_called_once_with(verify=False) keystone_session.reset_mock() sc.get_session(sessions.SESSION_TYPE_KEYSTONE) self.assertFalse(keystone_session.called) @mock.patch('keystoneauth1.session.Session') def test_get_nova_session(self, keystone_session): sc = sessions.SessionCache() self.override_config('ca_file', '/some/cacert', group='nova') self.override_config('api_insecure', False, group='nova') sc.get_session(sessions.SESSION_TYPE_NOVA) keystone_session.assert_called_once_with(verify='/some/cacert') sc = sessions.SessionCache() keystone_session.reset_mock() self.override_config('ca_file', None, group='nova') self.override_config('api_insecure', True, group='nova') sc.get_session(sessions.SESSION_TYPE_NOVA) keystone_session.assert_called_once_with(verify=False) keystone_session.reset_mock() sc.get_session(sessions.SESSION_TYPE_NOVA) self.assertFalse(keystone_session.called) @mock.patch('keystoneauth1.session.Session') def test_get_cinder_session(self, keystone_session): sc = sessions.SessionCache() self.override_config('ca_file', '/some/cacert', group='cinder') self.override_config('api_insecure', False, group='cinder') sc.get_session(sessions.SESSION_TYPE_CINDER) keystone_session.assert_called_once_with(verify='/some/cacert') sc = sessions.SessionCache() keystone_session.reset_mock() self.override_config('ca_file', None, group='cinder') self.override_config('api_insecure', True, group='cinder') sc.get_session(sessions.SESSION_TYPE_CINDER) keystone_session.assert_called_once_with(verify=False) keystone_session.reset_mock() sc.get_session(sessions.SESSION_TYPE_CINDER) self.assertFalse(keystone_session.called) @mock.patch('keystoneauth1.session.Session') def test_get_neutron_session(self, keystone_session): sc = sessions.SessionCache() self.override_config('ca_file', '/some/cacert', group='neutron') self.override_config('api_insecure', False, group='neutron') sc.get_session(sessions.SESSION_TYPE_NEUTRON) keystone_session.assert_called_once_with(verify='/some/cacert') sc = sessions.SessionCache() keystone_session.reset_mock() self.override_config('ca_file', None, group='neutron') self.override_config('api_insecure', True, group='neutron') sc.get_session(sessions.SESSION_TYPE_NEUTRON) keystone_session.assert_called_once_with(verify=False) keystone_session.reset_mock() sc.get_session(sessions.SESSION_TYPE_NEUTRON) self.assertFalse(keystone_session.called) @mock.patch('keystoneauth1.session.Session') def test_get_glance_session(self, keystone_session): sc = 
sessions.SessionCache() self.override_config('ca_file', '/some/cacert', group='glance') self.override_config('api_insecure', False, group='glance') sc.get_session(sessions.SESSION_TYPE_GLANCE) keystone_session.assert_called_once_with(verify='/some/cacert') sc = sessions.SessionCache() keystone_session.reset_mock() self.override_config('ca_file', None, group='glance') self.override_config('api_insecure', True, group='glance') sc.get_session(sessions.SESSION_TYPE_GLANCE) keystone_session.assert_called_once_with(verify=False) keystone_session.reset_mock() sc.get_session(sessions.SESSION_TYPE_GLANCE) self.assertFalse(keystone_session.called) @mock.patch('keystoneauth1.session.Session') def test_get_heat_session(self, keystone_session): sc = sessions.SessionCache() self.override_config('ca_file', '/some/cacert', group='heat') self.override_config('api_insecure', False, group='heat') sc.get_session(sessions.SESSION_TYPE_HEAT) keystone_session.assert_called_once_with(verify='/some/cacert') sc = sessions.SessionCache() keystone_session.reset_mock() self.override_config('ca_file', None, group='heat') self.override_config('api_insecure', True, group='heat') sc.get_session(sessions.SESSION_TYPE_HEAT) keystone_session.assert_called_once_with(verify=False) keystone_session.reset_mock() sc.get_session(sessions.SESSION_TYPE_HEAT) self.assertFalse(keystone_session.called) @mock.patch('keystoneauth1.session.Session') def test_insecure_session(self, session): sc = sessions.SessionCache() sc.get_session(sessions.SESSION_TYPE_INSECURE) session.assert_called_once_with(verify=False) sahara-12.0.0/sahara/tests/unit/service/health/0000775000175000017500000000000013656752227021357 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/health/__init__.py0000664000175000017500000000000013656752032023450 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/health/test_verification_base.py0000664000175000017500000001434213656752032026442 0ustar zuulzuul00000000000000# Copyright (c) 2016 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
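# Unit tests for cluster health verification: aggregation of check results
# (GREEN/YELLOW/RED) and conductor CRUD for verifications and health checks.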
from unittest import mock import six import testtools from sahara import conductor from sahara import context from sahara import exceptions from sahara.plugins import health_check_base from sahara.service.health import verification_base from sahara.tests.unit import base from sahara.tests.unit.conductor import test_api class Check(health_check_base.BasicHealthCheck): def check_health(self): return "No criminality" def get_health_check_name(self): return "James bond check" def is_available(self): return True class RedCheck(Check): def check_health(self): raise health_check_base.RedHealthError("Ooouch!") class YellowCheck(Check): def check_health(self): raise health_check_base.YellowHealthError("No problems, boss!") class TestVerifications(base.SaharaWithDbTestCase): def setUp(self): super(TestVerifications, self).setUp() self.api = conductor.API def _cluster_sample(self): ctx = context.ctx() cluster = self.api.cluster_create(ctx, test_api.SAMPLE_CLUSTER) return cluster @testtools.skip("Story 2007450 - http://sqlalche.me/e/bhk3") @mock.patch('sahara.plugins.health_check_base.get_health_checks') def test_verification_start(self, get_health_checks): cluster = self._cluster_sample() get_health_checks.return_value = [Check] verification_base.handle_verification(cluster, { 'verification': {'status': 'START'}}) cluster = self.api.cluster_get(context.ctx(), cluster) ver = cluster.verification self.assertEqual('GREEN', ver['status']) self.assertEqual(1, len(ver['checks'])) self.assertEqual('No criminality', ver.checks[0]['description']) id = ver['id'] get_health_checks.return_value = [YellowCheck, Check, Check] verification_base.handle_verification(cluster, { 'verification': {'status': 'START'}}) cluster = self.api.cluster_get(context.ctx(), cluster) ver = cluster.verification self.assertEqual('YELLOW', ver['status']) self.assertEqual(3, len(ver['checks'])) self.assertNotEqual(ver['id'], id) get_health_checks.return_value = [RedCheck, YellowCheck] verification_base.handle_verification(cluster, { 'verification': {'status': 'START'}}) cluster = self.api.cluster_get(context.ctx(), cluster) ver = cluster.verification self.assertEqual('RED', ver['status']) self.assertEqual(2, len(ver['checks'])) self.assertNotEqual(ver['id'], id) self.assertEqual("James bond check", ver['checks'][0]['name']) def _validate_exception(self, exc, expected_message): message = six.text_type(exc) # removing Error ID message = message.split('\n')[0] self.assertEqual(expected_message, message) @testtools.skip("Story 2007450 - http://sqlalche.me/e/bhk3") def test_conductor_crud_verifications(self): ctx = context.ctx() try: self.api.cluster_verification_add( ctx, '1', values={'status': 'name'}) except exceptions.NotFoundException as e: self._validate_exception(e, "Cluster id '1' not found!") cl = self._cluster_sample() ver = self.api.cluster_verification_add( ctx, cl.id, values={'status': 'GREAT!'}) ver = self.api.cluster_verification_get(ctx, ver['id']) self.assertEqual('GREAT!', ver['status']) self.api.cluster_verification_update(ctx, ver['id'], values={'status': "HEY!"}) ver = self.api.cluster_verification_get(ctx, ver['id']) self.assertEqual('HEY!', ver['status']) self.assertIsNone( self.api.cluster_verification_delete(ctx, ver['id'])) try: self.api.cluster_verification_delete(ctx, ver['id']) except exceptions.NotFoundException as e: self._validate_exception( e, "Verification id '%s' not found!" 
% ver['id']) try: self.api.cluster_verification_update( ctx, ver['id'], values={'status': "ONE MORE"}) except exceptions.NotFoundException as e: self._validate_exception( e, "Verification id '%s' not found!" % ver['id']) self.assertIsNone(self.api.cluster_verification_get(ctx, ver['id'])) @testtools.skip("Story 2007450 - http://sqlalche.me/e/bhk3") def test_conductor_crud_health_checks(self): ctx = context.ctx() try: self.api.cluster_health_check_add( ctx, '1', values={'status': 'status'}) except exceptions.NotFoundException as e: self._validate_exception(e, "Verification id '1' not found!") cl = self._cluster_sample() vid = self.api.cluster_verification_add( ctx, cl.id, values={'status': 'GREAT!'})['id'] hc = self.api.cluster_health_check_add(ctx, vid, {'status': "Sah"}) hc = self.api.cluster_health_check_get(ctx, hc['id']) self.assertEqual('Sah', hc['status']) hc = self.api.cluster_health_check_update( ctx, hc['id'], {'status': "ara"}) hc = self.api.cluster_health_check_get(ctx, hc['id']) self.assertEqual('ara', hc['status']) self.api.cluster_verification_delete(ctx, vid) try: hc = self.api.cluster_health_check_update( ctx, hc['id'], {'status': "rulez!"}) except exceptions.NotFoundException as e: self._validate_exception( e, "Health check id '%s' not found!" % hc['id']) self.assertIsNone(self.api.cluster_health_check_get(ctx, hc['id'])) sahara-12.0.0/sahara/tests/unit/service/test_volumes.py0000664000175000017500000001520613656752032023213 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from unittest import mock from cinderclient.v2 import volumes as vol_v2 from cinderclient.v3 import volumes as vol_v3 from sahara import exceptions as ex from sahara.service import volumes from sahara.tests.unit import base class TestAttachVolume(base.SaharaWithDbTestCase): @mock.patch('sahara.service.engine.Engine.get_node_group_image_username') def test_mount_volume(self, p_get_username): p_get_username.return_value = 'root' instance = self._get_instance() execute_com = instance.remote().execute_command self.assertIsNone(volumes._mount_volume(instance, '123', '456', False)) self.assertEqual(3, execute_com.call_count) execute_com.side_effect = ex.RemoteCommandException('cmd') self.assertRaises(ex.RemoteCommandException, volumes._mount_volume, instance, '123', '456', False) @mock.patch('sahara.conductor.manager.ConductorManager.cluster_get') @mock.patch('cinderclient.v2.volumes.Volume.delete') @mock.patch('cinderclient.v2.volumes.Volume.detach') @mock.patch('sahara.utils.openstack.cinder.get_volume') def test_detach_volumes_v2(self, p_get_volume, p_detach, p_delete, p_cond): class Instance(object): def __init__(self): self.instance_id = '123454321' self.volumes = [123] self.instance_name = 'spam' instance = Instance() p_get_volume.return_value = vol_v2.Volume(None, {'id': '123', 'status': 'available'}) p_detach.return_value = None p_delete.return_value = None self.assertIsNone( volumes.detach_from_instance(instance)) @mock.patch('sahara.conductor.manager.ConductorManager.cluster_get') @mock.patch('cinderclient.v3.volumes.Volume.delete') @mock.patch('cinderclient.v3.volumes.Volume.detach') @mock.patch('sahara.utils.openstack.cinder.get_volume') def test_detach_volumes_v3(self, p_get_volume, p_detach, p_delete, p_cond): class Instance(object): def __init__(self): self.instance_id = '123454321' self.volumes = [123] self.instance_name = 'spam' instance = Instance() p_get_volume.return_value = vol_v3.Volume(None, {'id': '123', 'status': 'available'}) p_detach.return_value = None p_delete.return_value = None self.assertIsNone( volumes.detach_from_instance(instance)) def _get_instance(self): inst_remote = mock.MagicMock() inst_remote.execute_command.return_value = 0 inst_remote.__enter__.return_value = inst_remote inst = mock.MagicMock() inst.remote.return_value = inst_remote return inst def test_find_instance_volume_devices(self): instance = self._get_instance() ex_cmd = instance.remote().execute_command attached_info = '/dev/vda /dev/vda1 /dev/vdb /dev/vdc' mounted_info = '/dev/vda1' ex_cmd.side_effect = [(0, attached_info), (0, mounted_info), (2, ""), (2, "")] diff = volumes._find_instance_devices(instance) self.assertEqual(['/dev/vdb', '/dev/vdc'], diff) attached_info = '/dev/vda /dev/vda1 /dev/vdb /dev/vdb1 /dev/vdb2' mounted_info = '/dev/vda1' ex_cmd.side_effect = [(0, attached_info), (0, mounted_info), (2, ""), (2, "")] diff = volumes._find_instance_devices(instance) self.assertEqual(['/dev/vdb'], diff) attached_info = '/dev/vda /dev/vda1 /dev/vdb /dev/vdb1 /dev/vdb2' mounted_info = '/dev/vda1 /dev/vdb1' ex_cmd.side_effect = [(0, attached_info), (0, mounted_info), (2, ""), (2, "")] diff = volumes._find_instance_devices(instance) self.assertEqual(['/dev/vdb2'], diff) attached_info = '/dev/vda /dev/vda1 /dev/vdb /dev/vdb1 /dev/vdb2' mounted_info = '/dev/vda1 /dev/vdb2' ex_cmd.side_effect = [(0, attached_info), (0, mounted_info), (2, ""), (2, "")] diff = volumes._find_instance_devices(instance) self.assertEqual(['/dev/vdb1'], diff) attached_info = '/dev/vda /dev/vda1 /dev/vdb' mounted_info = 
'/dev/vda1 /dev/vdb' ex_cmd.side_effect = [(0, attached_info), (0, mounted_info), (2, ""), (2, "")] diff = volumes._find_instance_devices(instance) self.assertEqual([], diff) attached_info = '/dev/vda /dev/vda1 /dev/vdb' mounted_info = '/dev/vda1' ex_cmd.side_effect = [(0, attached_info), (0, mounted_info), (0, "/dev/vdb")] diff = volumes._find_instance_devices(instance) self.assertEqual([], diff) attached_info = '/dev/vda /dev/vda1 /dev/vdb' mounted_info = '/dev/vda1' ex_cmd.side_effect = [(0, attached_info), (0, mounted_info), (2, ""), (0, "/dev/vdb")] diff = volumes._find_instance_devices(instance) self.assertEqual([], diff) attached_info = '/dev/vda /dev/nbd1' mounted_info = '/dev/nbd1' ex_cmd.side_effect = [(0, attached_info), (0, mounted_info), (2, ""), (2, "")] diff = volumes._find_instance_devices(instance) self.assertEqual(['/dev/vda'], diff) attached_info = '/dev/nbd1 /dev/nbd2' mounted_info = '/dev/nbd1' ex_cmd.side_effect = [(0, attached_info), (0, mounted_info), (2, ""), (2, "")] diff = volumes._find_instance_devices(instance) self.assertEqual(['/dev/nbd2'], diff) attached_info = '/dev/nbd1 /dev/nbd2' mounted_info = '/dev/nbd1' ex_cmd.side_effect = [(0, attached_info), (0, mounted_info), (0, "/dev/nbd2")] diff = volumes._find_instance_devices(instance) self.assertEqual([], diff) sahara-12.0.0/sahara/tests/unit/service/api/0000775000175000017500000000000013656752227020663 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/api/__init__.py0000664000175000017500000000000013656752032022754 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/api/v2/0000775000175000017500000000000013656752227021212 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/api/v2/__init__.py0000664000175000017500000000000013656752032023303 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/api/v2/test_images.py0000664000175000017500000000463313656752032024070 0ustar zuulzuul00000000000000# Copyright (c) 2017 EasyStack Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from unittest import mock

from sahara.service.api.v2 import images
from sahara.tests.unit import base


class TestImageApi(base.SaharaTestCase):

    def setUp(self):
        super(TestImageApi, self).setUp()

    @mock.patch('sahara.utils.openstack.images.SaharaImageManager')
    def test_get_image_tags(self, mock_manager):
        image = mock.Mock()
        manager = mock.Mock()
        manager.get.return_value = mock.Mock(tags=['foo', 'bar', 'baz'])
        mock_manager.return_value = manager
        self.assertEqual(['foo', 'bar', 'baz'],
                         images.get_image_tags(image))

    @mock.patch('sahara.utils.openstack.images.SaharaImageManager')
    def test_set_image_tags(self, mock_manager):
        def _tag(image, to_add):
            return tags.append('qux')

        def _untag(image, to_remove):
            return tags.remove('bar')

        expected_tags = ['foo', 'baz', 'qux']
        tags = ['foo', 'bar', 'baz']
        image = mock.Mock()
        manager = mock.Mock()
        manager.get.return_value = mock.Mock(tags=tags)
        manager.tag.side_effect = _tag
        manager.untag.side_effect = _untag
        mock_manager.return_value = manager
        self.assertEqual(expected_tags,
                         images.set_image_tags(image, expected_tags).tags)

    @mock.patch('sahara.utils.openstack.images.SaharaImageManager')
    def test_remove_image_tags(self, mock_manager):
        def _untag(image, to_remove):
            for i in range(len(to_remove)):
                actual_tags.pop()
            return actual_tags

        actual_tags = ['foo', 'bar', 'baz']
        image = mock.Mock()
        manager = mock.Mock()
        manager.get.return_value = mock.Mock(tags=actual_tags)
        manager.untag.side_effect = _untag
        mock_manager.return_value = manager
        self.assertEqual([], images.remove_image_tags(image).tags)
sahara-12.0.0/sahara/tests/unit/service/api/v2/test_clusters.py0000664000175000017500000003170013656752032024462 0ustar zuulzuul00000000000000
# Copyright (c) 2017 EasyStack Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
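# NOTE: the tests below drive the v2 clusters API against FakeOps/FakePlugin
# stand-ins: cluster create, multi-create, scaling (including scaling that
# removes specific instances), cluster update and keypair update paths.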
from unittest import mock import oslo_messaging import six import testtools from sahara import conductor as cond from sahara import context from sahara import exceptions as exc from sahara.plugins import base as pl_base from sahara.plugins import utils as u from sahara.service import api as service_api from sahara.service.api.v2 import clusters as api from sahara.tests.unit import base import sahara.tests.unit.service.api.v2.base as api_base from sahara.utils import cluster as c_u conductor = cond.API class FakeOps(object): def __init__(self, calls_order): self.calls_order = calls_order def provision_cluster(self, id): self.calls_order.append('ops.provision_cluster') cluster = conductor.cluster_get(context.ctx(), id) target_count = {} for node_group in cluster.node_groups: target_count[node_group.id] = node_group.count for node_group in cluster.node_groups: conductor.node_group_update(context.ctx(), node_group, {"count": 0}) for node_group in cluster.node_groups: for i in range(target_count[node_group.id]): inst = { "instance_id": node_group.name + '_' + str(i), "instance_name": node_group.name + '_' + str(i) } conductor.instance_add(context.ctx(), node_group, inst) conductor.cluster_update( context.ctx(), id, {'status': c_u.CLUSTER_STATUS_ACTIVE}) def provision_scaled_cluster(self, id, to_be_enlarged, node_group_instance_map=None): self.calls_order.append('ops.provision_scaled_cluster') cluster = conductor.cluster_get(context.ctx(), id) # Set scaled to see difference between active and scaled for (ng, count) in six.iteritems(to_be_enlarged): instances_to_delete = [] if node_group_instance_map: if ng in node_group_instance_map: instances_to_delete = self._get_instance( cluster, node_group_instance_map[ng]) for instance in instances_to_delete: conductor.instance_remove(context.ctx(), instance) conductor.node_group_update(context.ctx(), ng, {'count': count}) conductor.cluster_update(context.ctx(), id, {'status': 'Scaled'}) def update_keypair(self, id): self.calls_order.append('ops.update_keypair') cluster = conductor.cluster_get(context.ctx(), id) keypair_name = cluster.user_keypair_id nova_p = mock.patch("sahara.utils.openstack.nova.client") nova = nova_p.start() key = nova.get_keypair(keypair_name) nodes = u.get_instances(cluster) for instance in nodes: remote = mock.Mock() remote.execute_command( "echo {keypair} >> ~/.ssh/authorized_keys".format( keypair=key.public_key)) remote.reset_mock() def terminate_cluster(self, id, force): self.calls_order.append('ops.terminate_cluster') def _get_instance(self, cluster, instances_to_delete): instances = [] for node_group in cluster.node_groups: for instance in node_group.instances: if instance.instance_id in instances_to_delete: instances.append(instance) return instances class TestClusterApi(base.SaharaWithDbTestCase): def setUp(self): super(TestClusterApi, self).setUp() self.calls_order = [] self.override_config('plugins', ['fake']) pl_base.PLUGINS = api_base.FakePluginManager(self.calls_order) service_api.setup_api(FakeOps(self.calls_order)) oslo_messaging.notify.notifier.Notifier.info = mock.Mock() self.ctx = context.ctx() @mock.patch('sahara.service.quotas.check_cluster', return_value=None) def test_create_cluster_success(self, check_cluster): cluster = api.create_cluster(api_base.SAMPLE_CLUSTER) self.assertEqual(1, check_cluster.call_count) result_cluster = api.get_cluster(cluster.id) self.assertEqual(c_u.CLUSTER_STATUS_ACTIVE, result_cluster.status) expected_count = { 'ng_1': 1, 'ng_2': 3, 'ng_3': 1, } ng_count = 0 for ng in 
result_cluster.node_groups: self.assertEqual(expected_count[ng.name], ng.count) ng_count += 1 self.assertEqual(3, ng_count) api.terminate_cluster(result_cluster.id) self.assertEqual( ['get_open_ports', 'recommend_configs', 'validate', 'ops.provision_cluster', 'ops.terminate_cluster'], self.calls_order) @mock.patch('sahara.service.quotas.check_cluster', return_value=None) def test_create_multiple_clusters_success(self, check_cluster): MULTIPLE_CLUSTERS = api_base.SAMPLE_CLUSTER.copy() MULTIPLE_CLUSTERS['count'] = 2 clusters = api.create_multiple_clusters(MULTIPLE_CLUSTERS) self.assertEqual(2, check_cluster.call_count) result_cluster1 = api.get_cluster( clusters['clusters'][0]['cluster']['id']) result_cluster2 = api.get_cluster( clusters['clusters'][1]['cluster']['id']) self.assertEqual(c_u.CLUSTER_STATUS_ACTIVE, result_cluster1.status) self.assertEqual(c_u.CLUSTER_STATUS_ACTIVE, result_cluster2.status) expected_count = { 'ng_1': 1, 'ng_2': 3, 'ng_3': 1, } ng_count = 0 for ng in result_cluster1.node_groups: self.assertEqual(expected_count[ng.name], ng.count) ng_count += 1 self.assertEqual(3, ng_count) api.terminate_cluster(result_cluster1.id) api.terminate_cluster(result_cluster2.id) self.assertEqual( ['get_open_ports', 'recommend_configs', 'validate', 'ops.provision_cluster', 'get_open_ports', 'recommend_configs', 'validate', 'ops.provision_cluster', 'ops.terminate_cluster', 'ops.terminate_cluster'], self.calls_order) @mock.patch('sahara.service.quotas.check_cluster') def test_create_multiple_clusters_failed(self, check_cluster): MULTIPLE_CLUSTERS = api_base.SAMPLE_CLUSTER.copy() MULTIPLE_CLUSTERS['count'] = 2 check_cluster.side_effect = exc.QuotaException( 'resource', 'requested', 'available') with testtools.ExpectedException(exc.QuotaException): api.create_cluster(api_base.SAMPLE_CLUSTER) self.assertEqual('Error', api.get_clusters()[0].status) @mock.patch('sahara.service.quotas.check_cluster') def test_create_cluster_failed(self, check_cluster): check_cluster.side_effect = exc.QuotaException( 'resource', 'requested', 'available') with testtools.ExpectedException(exc.QuotaException): api.create_cluster(api_base.SAMPLE_CLUSTER) self.assertEqual('Error', api.get_clusters()[0].status) @mock.patch('sahara.service.quotas.check_cluster', return_value=None) @mock.patch('sahara.service.quotas.check_scaling', return_value=None) def test_scale_cluster_success(self, check_scaling, check_cluster): cluster = api.create_cluster(api_base.SAMPLE_CLUSTER) cluster = api.get_cluster(cluster.id) api.scale_cluster(cluster.id, api_base.SCALE_DATA) result_cluster = api.get_cluster(cluster.id) self.assertEqual('Scaled', result_cluster.status) expected_count = { 'ng_1': 3, 'ng_2': 2, 'ng_3': 1, 'ng_4': 1, } ng_count = 0 for ng in result_cluster.node_groups: self.assertEqual(expected_count[ng.name], ng.count) ng_count += 1 self.assertEqual(4, ng_count) api.terminate_cluster(result_cluster.id) self.assertEqual( ['get_open_ports', 'recommend_configs', 'validate', 'ops.provision_cluster', 'get_open_ports', 'get_open_ports', 'recommend_configs', 'validate_scaling', 'ops.provision_scaled_cluster', 'ops.terminate_cluster'], self.calls_order) @mock.patch('sahara.service.quotas.check_cluster', return_value=None) @mock.patch('sahara.service.quotas.check_scaling', return_value=None) def test_scale_cluster_n_specific_instances_success(self, check_scaling, check_cluster): cluster = api.create_cluster(api_base.SAMPLE_CLUSTER) cluster = api.get_cluster(cluster.id) api.scale_cluster(cluster.id, 
api_base.SCALE_DATA_N_SPECIFIC_INSTANCE) result_cluster = api.get_cluster(cluster.id) self.assertEqual('Scaled', result_cluster.status) expected_count = { 'ng_1': 3, 'ng_2': 1, 'ng_3': 1, } ng_count = 0 for ng in result_cluster.node_groups: self.assertEqual(expected_count[ng.name], ng.count) ng_count += 1 self.assertEqual(1, result_cluster.node_groups[1].count) self.assertNotIn('ng_2_0', self._get_instances_ids( result_cluster.node_groups[1])) self.assertNotIn('ng_2_2', self._get_instances_ids( result_cluster.node_groups[1])) self.assertEqual(3, ng_count) api.terminate_cluster(result_cluster.id) self.assertEqual( ['get_open_ports', 'recommend_configs', 'validate', 'ops.provision_cluster', 'get_open_ports', 'recommend_configs', 'validate_scaling', 'ops.provision_scaled_cluster', 'ops.terminate_cluster'], self.calls_order) @mock.patch('sahara.service.quotas.check_cluster', return_value=None) @mock.patch('sahara.service.quotas.check_scaling', return_value=None) def test_scale_cluster_specific_and_non_specific(self, check_scaling, check_cluster): cluster = api.create_cluster(api_base.SAMPLE_CLUSTER) cluster = api.get_cluster(cluster.id) api.scale_cluster(cluster.id, api_base.SCALE_DATA_SPECIFIC_INSTANCE) result_cluster = api.get_cluster(cluster.id) self.assertEqual('Scaled', result_cluster.status) expected_count = { 'ng_1': 3, 'ng_2': 1, 'ng_3': 1, } ng_count = 0 for ng in result_cluster.node_groups: self.assertEqual(expected_count[ng.name], ng.count) ng_count += 1 self.assertEqual(1, result_cluster.node_groups[1].count) self.assertNotIn('ng_2_0', self._get_instances_ids( result_cluster.node_groups[1])) self.assertEqual(3, ng_count) api.terminate_cluster(result_cluster.id) self.assertEqual( ['get_open_ports', 'recommend_configs', 'validate', 'ops.provision_cluster', 'get_open_ports', 'recommend_configs', 'validate_scaling', 'ops.provision_scaled_cluster', 'ops.terminate_cluster'], self.calls_order) def _get_instances_ids(self, node_group): instance_ids = [] for instance in node_group.instances: instance_ids.append(instance.instance_id) return instance_ids @mock.patch('sahara.service.quotas.check_cluster', return_value=None) @mock.patch('sahara.service.quotas.check_scaling', return_value=None) def test_scale_cluster_failed(self, check_scaling, check_cluster): cluster = api.create_cluster(api_base.SAMPLE_CLUSTER) check_scaling.side_effect = exc.QuotaException( 'resource', 'requested', 'available') with testtools.ExpectedException(exc.QuotaException): api.scale_cluster(cluster.id, {}) def test_cluster_update(self): with mock.patch('sahara.service.quotas.check_cluster'): cluster = api.create_cluster(api_base.SAMPLE_CLUSTER) updated_cluster = api.update_cluster( cluster.id, {'description': 'Cluster'}) self.assertEqual('Cluster', updated_cluster.description) def test_cluster_keypair_update(self): with mock.patch('sahara.service.quotas.check_cluster'): cluster = api.create_cluster(api_base.SAMPLE_CLUSTER) api.update_cluster(cluster.id, {'update_keypair': True}) sahara-12.0.0/sahara/tests/unit/service/api/v2/test_plugins.py0000664000175000017500000000541713656752032024305 0ustar zuulzuul00000000000000# Copyright (c) 2017 EasyStack Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from sahara.plugins import base as pl_base
from sahara.plugins import provisioning as pr_base
from sahara.service.api.v2 import plugins as api
from sahara.tests.unit import base
import sahara.tests.unit.service.api.v2.base as api_base


class TestPluginApi(base.SaharaWithDbTestCase):

    def setUp(self):
        super(TestPluginApi, self).setUp()
        self.calls_order = []
        self.override_config('plugins', ['fake'])
        pl_base.PLUGINS = api_base.FakePluginManager(self.calls_order)

    def test_get_plugin(self):
        # processing to dict
        data = api.get_plugin('fake', '0.1').dict
        self.assertIsNotNone(data)
        self.assertEqual(
            len(pr_base.list_of_common_configs()), len(data.get('configs')))
        self.assertEqual(['fake', '0.1'], data.get('required_image_tags'))
        self.assertEqual(
            {'HDFS': ['namenode', 'datanode']}, data.get('node_processes'))

        self.assertIsNone(api.get_plugin('fake', '0.3'))

        data = api.get_plugin('fake').dict
        self.assertIsNotNone(data.get('version_labels'))
        self.assertIsNotNone(data.get('plugin_labels'))
        del data['plugin_labels']
        del data['version_labels']
        self.assertEqual({
            'description': "Some description",
            'name': 'fake',
            'title': 'Fake plugin',
            'versions': ['0.1', '0.2']}, data)

        self.assertIsNone(api.get_plugin('name1', '0.1'))

    def test_update_plugin(self):
        data = api.get_plugin('fake', '0.1').dict
        self.assertIsNotNone(data)

        updated = api.update_plugin('fake', values={
            'plugin_labels': {'enabled': {'status': False}}}).dict
        self.assertFalse(updated['plugin_labels']['enabled']['status'])

        updated = api.update_plugin('fake', values={
            'plugin_labels': {'enabled': {'status': True}}}).dict
        self.assertTrue(updated['plugin_labels']['enabled']['status'])

        # restore to original status
        updated = api.update_plugin('fake', values={
            'plugin_labels': data['plugin_labels']}).dict
        self.assertEqual(data['plugin_labels']['enabled']['status'],
                         updated['plugin_labels']['enabled']['status'])
sahara-12.0.0/sahara/tests/unit/service/api/v2/base.py0000664000175000017500000000732713656752032022501 0ustar zuulzuul00000000000000
# Copyright (c) 2017 EasyStack Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara.plugins import base as pl_base from sahara.plugins import provisioning as pr_base SAMPLE_CLUSTER = { 'plugin_name': 'fake', 'hadoop_version': 'test_version', 'tenant_id': 'tenant_1', 'name': 'test_cluster', 'user_keypair_id': 'my_keypair', 'node_groups': [ { 'auto_security_group': True, 'name': 'ng_1', 'flavor_id': '42', 'node_processes': ['p1', 'p2'], 'count': 1 }, { 'auto_security_group': False, 'name': 'ng_2', 'flavor_id': '42', 'node_processes': ['p3', 'p4'], 'count': 3 }, { 'auto_security_group': False, 'name': 'ng_3', 'flavor_id': '42', 'node_processes': ['p3', 'p4'], 'count': 1 } ], 'cluster_configs': { 'service_1': { 'config_2': 'value_2' }, 'service_2': { 'config_1': 'value_1' } }, } SCALE_DATA = { 'resize_node_groups': [ { 'name': 'ng_1', 'count': 3, }, { 'name': 'ng_2', 'count': 2, } ], 'add_node_groups': [ { 'auto_security_group': True, 'name': 'ng_4', 'flavor_id': '42', 'node_processes': ['p1', 'p2'], 'count': 1 }, ] } SCALE_DATA_SPECIFIC_INSTANCE = { 'resize_node_groups': [ { 'name': 'ng_1', 'count': 3, }, { 'name': 'ng_2', 'count': 1, 'instances': ['ng_2_0'] } ], 'add_node_groups': [] } SCALE_DATA_N_SPECIFIC_INSTANCE = { 'resize_node_groups': [ { 'name': 'ng_1', 'count': 3, }, { 'name': 'ng_2', 'count': 1, 'instances': ['ng_2_0', 'ng_2_2'] } ], 'add_node_groups': [] } class FakePlugin(pr_base.ProvisioningPluginBase): _info = {} name = "fake" def __init__(self, calls_order): self.calls_order = calls_order def configure_cluster(self, cluster): pass def start_cluster(self, cluster): pass def get_description(self): return "Some description" def get_title(self): return "Fake plugin" def validate(self, cluster): self.calls_order.append('validate') def get_open_ports(self, node_group): self.calls_order.append('get_open_ports') def validate_scaling(self, cluster, to_be_enlarged, additional): self.calls_order.append('validate_scaling') def get_versions(self): return ['0.1', '0.2'] def get_node_processes(self, version): return {'HDFS': ['namenode', 'datanode']} def get_configs(self, version): return [] def recommend_configs(self, cluster, scaling=False): self.calls_order.append('recommend_configs') class FakePluginManager(pl_base.PluginManager): def __init__(self, calls_order): super(FakePluginManager, self).__init__() self.plugins['fake'] = FakePlugin(calls_order) sahara-12.0.0/sahara/tests/unit/service/api/test_v10.py0000664000175000017500000002656313656752032022710 0ustar zuulzuul00000000000000# Copyright (c) 2015 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from unittest import mock import oslo_messaging import six import testtools from sahara import conductor as cond from sahara import context from sahara import exceptions as exc from sahara.plugins import base as pl_base from sahara.plugins import provisioning as pr_base from sahara.service import api as service_api from sahara.service.api import v10 as api from sahara.tests.unit import base from sahara.utils import cluster as c_u conductor = cond.API SAMPLE_CLUSTER = { 'plugin_name': 'fake', 'hadoop_version': 'test_version', 'tenant_id': 'tenant_1', 'name': 'test_cluster', 'user_keypair_id': 'my_keypair', 'node_groups': [ { 'auto_security_group': True, 'name': 'ng_1', 'flavor_id': '42', 'node_processes': ['p1', 'p2'], 'count': 1 }, { 'auto_security_group': False, 'name': 'ng_2', 'flavor_id': '42', 'node_processes': ['p3', 'p4'], 'count': 3 }, { 'auto_security_group': False, 'name': 'ng_3', 'flavor_id': '42', 'node_processes': ['p3', 'p4'], 'count': 1 } ], 'cluster_configs': { 'service_1': { 'config_2': 'value_2' }, 'service_2': { 'config_1': 'value_1' } }, } SCALE_DATA = { 'resize_node_groups': [ { 'name': 'ng_1', 'count': 3, }, { 'name': 'ng_2', 'count': 2, } ], 'add_node_groups': [ { 'auto_security_group': True, 'name': 'ng_4', 'flavor_id': '42', 'node_processes': ['p1', 'p2'], 'count': 1 }, ] } class FakePlugin(pr_base.ProvisioningPluginBase): _info = {} name = "fake" def __init__(self, calls_order): self.calls_order = calls_order def configure_cluster(self, cluster): pass def start_cluster(self, cluster): pass def get_description(self): return "Some description" def get_title(self): return "Fake plugin" def validate(self, cluster): self.calls_order.append('validate') def get_open_ports(self, node_group): self.calls_order.append('get_open_ports') def validate_scaling(self, cluster, to_be_enlarged, additional): self.calls_order.append('validate_scaling') def get_versions(self): return ['0.1', '0.2'] def get_node_processes(self, version): return {'HDFS': ['namenode', 'datanode']} def get_configs(self, version): return [] def recommend_configs(self, cluster, scaling=False): self.calls_order.append('recommend_configs') class FakePluginManager(pl_base.PluginManager): def __init__(self, calls_order): super(FakePluginManager, self).__init__() self.plugins['fake'] = FakePlugin(calls_order) class FakeOps(object): def __init__(self, calls_order): self.calls_order = calls_order def provision_cluster(self, id): self.calls_order.append('ops.provision_cluster') conductor.cluster_update( context.ctx(), id, {'status': c_u.CLUSTER_STATUS_ACTIVE}) def provision_scaled_cluster(self, id, to_be_enlarged): self.calls_order.append('ops.provision_scaled_cluster') # Set scaled to see difference between active and scaled for (ng, count) in six.iteritems(to_be_enlarged): conductor.node_group_update(context.ctx(), ng, {'count': count}) conductor.cluster_update(context.ctx(), id, {'status': 'Scaled'}) def terminate_cluster(self, id): self.calls_order.append('ops.terminate_cluster') class TestApi(base.SaharaWithDbTestCase): def setUp(self): super(TestApi, self).setUp() self.calls_order = [] self.override_config('plugins', ['fake']) pl_base.PLUGINS = FakePluginManager(self.calls_order) service_api.setup_api(FakeOps(self.calls_order)) oslo_messaging.notify.notifier.Notifier.info = mock.Mock() self.ctx = context.ctx() @mock.patch('sahara.service.quotas.check_cluster', return_value=None) def test_create_cluster_success(self, check_cluster): cluster = api.create_cluster(SAMPLE_CLUSTER) self.assertEqual(1, 
check_cluster.call_count) result_cluster = api.get_cluster(cluster.id) self.assertEqual(c_u.CLUSTER_STATUS_ACTIVE, result_cluster.status) expected_count = { 'ng_1': 1, 'ng_2': 3, 'ng_3': 1, } ng_count = 0 for ng in result_cluster.node_groups: self.assertEqual(expected_count[ng.name], ng.count) ng_count += 1 self.assertEqual(3, ng_count) api.terminate_cluster(result_cluster.id) self.assertEqual( ['get_open_ports', 'recommend_configs', 'validate', 'ops.provision_cluster', 'ops.terminate_cluster'], self.calls_order) @mock.patch('sahara.service.quotas.check_cluster', return_value=None) def test_create_multiple_clusters_success(self, check_cluster): MULTIPLE_CLUSTERS = SAMPLE_CLUSTER.copy() MULTIPLE_CLUSTERS['count'] = 2 clusters = api.create_multiple_clusters(MULTIPLE_CLUSTERS) self.assertEqual(2, check_cluster.call_count) result_cluster1 = api.get_cluster(clusters['clusters'][0]) result_cluster2 = api.get_cluster(clusters['clusters'][1]) self.assertEqual(c_u.CLUSTER_STATUS_ACTIVE, result_cluster1.status) self.assertEqual(c_u.CLUSTER_STATUS_ACTIVE, result_cluster2.status) expected_count = { 'ng_1': 1, 'ng_2': 3, 'ng_3': 1, } ng_count = 0 for ng in result_cluster1.node_groups: self.assertEqual(expected_count[ng.name], ng.count) ng_count += 1 self.assertEqual(3, ng_count) api.terminate_cluster(result_cluster1.id) api.terminate_cluster(result_cluster2.id) self.assertEqual( ['get_open_ports', 'recommend_configs', 'validate', 'ops.provision_cluster', 'get_open_ports', 'recommend_configs', 'validate', 'ops.provision_cluster', 'ops.terminate_cluster', 'ops.terminate_cluster'], self.calls_order) @mock.patch('sahara.service.quotas.check_cluster') def test_create_multiple_clusters_failed(self, check_cluster): MULTIPLE_CLUSTERS = SAMPLE_CLUSTER.copy() MULTIPLE_CLUSTERS['count'] = 2 check_cluster.side_effect = exc.QuotaException( 'resource', 'requested', 'available') with testtools.ExpectedException(exc.QuotaException): api.create_cluster(SAMPLE_CLUSTER) self.assertEqual('Error', api.get_clusters()[0].status) @mock.patch('sahara.service.quotas.check_cluster') def test_create_cluster_failed(self, check_cluster): check_cluster.side_effect = exc.QuotaException( 'resource', 'requested', 'available') with testtools.ExpectedException(exc.QuotaException): api.create_cluster(SAMPLE_CLUSTER) self.assertEqual('Error', api.get_clusters()[0].status) @mock.patch('sahara.service.quotas.check_cluster', return_value=None) @mock.patch('sahara.service.quotas.check_scaling', return_value=None) def test_scale_cluster_success(self, check_scaling, check_cluster): cluster = api.create_cluster(SAMPLE_CLUSTER) api.scale_cluster(cluster.id, SCALE_DATA) result_cluster = api.get_cluster(cluster.id) self.assertEqual('Scaled', result_cluster.status) expected_count = { 'ng_1': 3, 'ng_2': 2, 'ng_3': 1, 'ng_4': 1, } ng_count = 0 for ng in result_cluster.node_groups: self.assertEqual(expected_count[ng.name], ng.count) ng_count += 1 self.assertEqual(4, ng_count) api.terminate_cluster(result_cluster.id) self.assertEqual( ['get_open_ports', 'recommend_configs', 'validate', 'ops.provision_cluster', 'get_open_ports', 'get_open_ports', 'recommend_configs', 'validate_scaling', 'ops.provision_scaled_cluster', 'ops.terminate_cluster'], self.calls_order) @mock.patch('sahara.service.quotas.check_cluster', return_value=None) @mock.patch('sahara.service.quotas.check_scaling', return_value=None) def test_scale_cluster_failed(self, check_scaling, check_cluster): cluster = api.create_cluster(SAMPLE_CLUSTER) check_scaling.side_effect = exc.QuotaException( 
'resource', 'requested', 'available') with testtools.ExpectedException(exc.QuotaException): api.scale_cluster(cluster.id, {}) def test_cluster_update(self): with mock.patch('sahara.service.quotas.check_cluster'): cluster = api.create_cluster(SAMPLE_CLUSTER) updated_cluster = api.update_cluster( cluster.id, {'description': 'Cluster'}) self.assertEqual('Cluster', updated_cluster.description) def test_get_plugin(self): # processing to dict data = api.get_plugin('fake', '0.1').dict self.assertIsNotNone(data) self.assertEqual( len(pr_base.list_of_common_configs()), len(data.get('configs'))) self.assertEqual(['fake', '0.1'], data.get('required_image_tags')) self.assertEqual( {'HDFS': ['namenode', 'datanode']}, data.get('node_processes')) self.assertIsNone(api.get_plugin('fake', '0.3')) data = api.get_plugin('fake').dict self.assertIsNotNone(data.get('version_labels')) self.assertIsNotNone(data.get('plugin_labels')) del data['plugin_labels'] del data['version_labels'] self.assertEqual({ 'description': "Some description", 'name': 'fake', 'title': 'Fake plugin', 'versions': ['0.1', '0.2']}, data) self.assertIsNone(api.get_plugin('name1', '0.1')) def test_update_plugin(self): data = api.get_plugin('fake', '0.1').dict self.assertIsNotNone(data) updated = api.update_plugin('fake', values={ 'plugin_labels': {'enabled': {'status': False}}}).dict self.assertFalse(updated['plugin_labels']['enabled']['status']) updated = api.update_plugin('fake', values={ 'plugin_labels': {'enabled': {'status': True}}}).dict self.assertTrue(updated['plugin_labels']['enabled']['status']) # restore to original status updated = api.update_plugin('fake', values={ 'plugin_labels': data['plugin_labels']}).dict self.assertEqual(data['plugin_labels']['enabled']['status'], updated['plugin_labels']['enabled']['status']) sahara-12.0.0/sahara/tests/unit/service/test_trusts.py0000664000175000017500000001356713656752032023075 0ustar zuulzuul00000000000000# Copyright (c) 2015 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
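# NOTE: the tests below cover the Keystone trust helpers in
# sahara.service.trusts: creating and deleting trusts, and attaching or
# removing a trust_id on a cluster via the conductor, with keystone auth
# plugins mocked out.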
from unittest import mock from sahara.service import trusts from sahara.tests.unit import base class FakeTrust(object): def __init__(self, id): self.id = id class TestTrusts(base.SaharaTestCase): def _client(self): create = mock.Mock() create.return_value = FakeTrust("trust_id") client_trusts = mock.Mock(create=create) client = mock.Mock(trusts=client_trusts) return client @mock.patch('sahara.utils.openstack.keystone.client_from_auth') @mock.patch('sahara.utils.openstack.keystone.project_id_from_auth') @mock.patch('sahara.utils.openstack.keystone.user_id_from_auth') def test_create_trust(self, user_id_from_auth, project_id_from_auth, client_from_auth): project_id_from_auth.return_value = 'tenant_id' user_id_from_auth.side_effect = ['trustor_id', 'trustee_id'] trustor = 'trustor_id' trustee = 'trustee_id' client = self._client() client_from_auth.return_value = client trust_id = trusts.create_trust(trustor, trustee, "role_names") client.trusts.create.assert_called_with( trustor_user="trustor_id", trustee_user="trustee_id", impersonation=True, role_names="role_names", project="tenant_id", allow_redelegation=False, ) self.assertEqual("trust_id", trust_id) user_id_from_auth.side_effect = ['trustor_id', 'trustee_id'] client = self._client() client_from_auth.return_value = client trust_id = trusts.create_trust(trustor, trustee, "role_names", project_id='injected_project') client.trusts.create.assert_called_with(trustor_user="trustor_id", trustee_user="trustee_id", impersonation=True, role_names="role_names", project="injected_project", allow_redelegation=False) self.assertEqual("trust_id", trust_id) @mock.patch('sahara.conductor.API.cluster_get') @mock.patch('sahara.conductor.API.cluster_update') @mock.patch('sahara.service.trusts.create_trust') @mock.patch('sahara.utils.openstack.keystone.auth_for_admin') @mock.patch('sahara.context.current') def test_create_trust_for_cluster(self, context_current, auth_for_admin, create_trust, cluster_update, cl_get): self.override_config('project_name', 'admin_project', group='trustee') trustor_auth = mock.Mock() fake_cluster = mock.Mock(trust_id=None) cl_get.return_value = fake_cluster ctx = mock.Mock(roles="role_names", auth_plugin=trustor_auth) context_current.return_value = ctx trustee_auth = mock.Mock() auth_for_admin.return_value = trustee_auth create_trust.return_value = 'trust_id' trusts.create_trust_for_cluster("cluster") auth_for_admin.assert_called_with(project_name='admin_project') create_trust.assert_called_with(trustor=trustor_auth, trustee=trustee_auth, role_names='role_names', allow_redelegation=True) cluster_update.assert_called_with(ctx, fake_cluster, {"trust_id": "trust_id"}) @mock.patch('sahara.utils.openstack.keystone.client_from_auth') @mock.patch('sahara.utils.openstack.keystone.auth_for_admin') @mock.patch('sahara.service.trusts.create_trust') def test_delete_trust(self, trust, auth_for_admin, client_from_auth): client = self._client() client_from_auth.return_value = client trust.return_value = 'test_id' trustor_auth = mock.Mock() trustee_auth = mock.Mock() auth_for_admin.return_value = trustee_auth trust_id = trusts.create_trust(trustor_auth, trustee_auth, "role_names") trusts.delete_trust(trustee_auth, trust_id) client.trusts.delete.assert_called_with(trust_id) @mock.patch('sahara.conductor.API.cluster_update') @mock.patch('sahara.utils.openstack.keystone.auth_for_admin') @mock.patch('sahara.service.trusts.delete_trust') @mock.patch('sahara.conductor.API.cluster_get') @mock.patch('sahara.context.current') def 
test_delete_trust_from_cluster(self, context_current, cl_get, delete_trust, auth_for_admin, cluster_update): fake_cluster = mock.Mock(trust_id='test_id') cl_get.return_value = fake_cluster trustor_auth = mock.Mock() trustee_auth = mock.Mock() auth_for_admin.return_value = trustee_auth ctx = mock.Mock(roles="role_names", auth_plugin=trustor_auth) context_current.return_value = ctx trusts.delete_trust_from_cluster("cluster") delete_trust.assert_called_with(trustee_auth, 'test_id') cluster_update.assert_called_with(ctx, fake_cluster, {"trust_id": None}) sahara-12.0.0/sahara/tests/unit/service/test_ntp_service.py0000664000175000017500000000734413656752032024046 0ustar zuulzuul00000000000000# Copyright (c) 2015 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from unittest import mock from sahara.service import ntp_service as ntp from sahara.tests.unit import base as test_base class FakeRemote(object): def __init__(self, effects): self.effects = effects self.idx = 0 def __enter__(self): return self def __exit__(self, *args): # validate number of executions if self.idx != len(self.effects): raise ValueError() def _get_effect(self): self.idx += 1 return self.effects[self.idx - 1] def execute_command(self, cmd, run_as_root=False): effect = self._get_effect() if isinstance(effect, RuntimeError): raise effect return 0, effect def append_to_file(self, file, text, run_as_root=False): return self.execute_command(file, run_as_root) def prepend_to_file(self, file, text, run_as_root=False): return self.execute_command(file, run_as_root) def get_os_distrib(self): return self.execute_command('get_os_distrib') class FakeInstance(object): def __init__(self, effects, id): self.id = id self.instance_name = id self.instance_id = id self.effects = effects def remote(self): return FakeRemote(self.effects) class NTPServiceTest(test_base.SaharaTestCase): @mock.patch('sahara.service.ntp_service.LOG.warning') @mock.patch('sahara.service.ntp_service.conductor.cluster_get') def test_configuring_ntp_unable_to_configure(self, cl_get, logger): instance = FakeInstance(["ubuntu", RuntimeError()], "1") ng = mock.Mock(instances=[instance]) cl_get.return_value = mock.Mock( node_groups=[ng], cluster_configs={}) ntp.configure_ntp('1') self.assertEqual( [mock.call("Unable to configure NTP service")], logger.call_args_list) @mock.patch('sahara.service.ntp_service.LOG.info') @mock.patch('sahara.service.ntp_service.conductor.cluster_get') def test_configuring_success(self, cl_get, logger): instance = FakeInstance( ['centos', "cat", "batman", "vs", "superman", "boom"], "1") ng = mock.Mock(instances=[instance]) cl_get.return_value = mock.Mock(node_groups=[ng], cluster_configs={}) ntp.configure_ntp('1') self.assertEqual([mock.call("NTP successfully configured")], logger.call_args_list) def test_retrieve_url(self): cl = mock.Mock( cluster_configs={'general': {"URL of NTP server": "batman.org"}}) self.assertEqual("batman.org", ntp.retrieve_ntp_server_url(cl)) self.override_config('default_ntp_server', "superman.org") cl = 
mock.Mock(cluster_configs={'general': {}}) self.assertEqual("superman.org", ntp.retrieve_ntp_server_url(cl)) @mock.patch('sahara.service.ntp_service.conductor.cluster_get') @mock.patch('sahara.service.ntp_service.retrieve_ntp_server_url') def test_is_ntp_enabled(self, ntp_url, cl_get): cl = mock.Mock( cluster_configs={'general': {"Enable NTP service": False}}) cl_get.return_value = cl ntp.configure_ntp('1') self.assertEqual(0, ntp_url.call_count) sahara-12.0.0/sahara/tests/unit/service/test_coordinator.py0000664000175000017500000000667713656752032024060 0ustar zuulzuul00000000000000# Copyright (c) 2016 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from unittest import mock from sahara.service import coordinator from sahara.tests.unit import base class TestCoordinator(base.SaharaTestCase): def test_coord_without_backend(self): coord = coordinator.Coordinator('') self.assertIsNone(coord.coordinator) @mock.patch('tooz.coordination.get_coordinator') def test_coord_with_backend(self, get_coord): MockCoord = mock.Mock() MockCoord.start.return_value = mock.Mock() get_coord.return_value = MockCoord coord = coordinator.Coordinator('kazoo://1.2.3.4:2181') self.assertEqual(MockCoord, coord.coordinator) MockCoord.start.assert_called_once_with() class TestHashRing(base.SaharaTestCase): def setUp(self): super(TestHashRing, self).setUp() self.override_config('hash_ring_replicas_count', 1) @mock.patch('tooz.coordination.get_coordinator', return_value=mock.Mock()) def _init_hr(self, get_coord): self.hr = coordinator.HashRing('kazoo://1.2.3.4:2181', 'group') self.hr.get_members = mock.Mock(return_value=['id1', 'id2', 'id3']) self.hr.member_id = 'id2' self.hr._hash = mock.Mock(side_effect=[1, 10, 20, 5, 13, 25]) def test_get_subset_without_backend(self): hr = coordinator.HashRing('', 'group') objects = [mock.Mock(id=1), mock.Mock(id=2)] # all objects will be managed by this engine if coordinator backend # is not provided self.assertEqual(objects, hr.get_subset(objects)) def test_build_ring(self): # check hash ring with one replica self._init_hr() hr, keys = self.hr._build_ring() self.assertEqual({1: 'id1', 10: 'id2', 20: 'id3'}, hr) self.assertEqual([1, 10, 20], keys) # check hash ring with two replicas self.override_config('hash_ring_replicas_count', 2) self._init_hr() hr, keys = self.hr._build_ring() self.assertEqual( {1: 'id1', 5: 'id2', 10: 'id1', 13: 'id3', 20: 'id2', 25: 'id3'}, hr) self.assertEqual([1, 5, 10, 13, 20, 25], keys) def test_check_object(self): self._init_hr() ring, keys = self.hr._build_ring() # this object will be managed by this engine self.assertTrue( self.hr._check_object(mock.Mock(id='123'), ring, keys)) # this object will not be managed by this engine self.assertFalse( self.hr._check_object(mock.Mock(id='321'), ring, keys)) # this object will not be managed by this engine self.assertFalse( self.hr._check_object(mock.Mock(id='213'), ring, keys)) def test_get_subset_with_backend(self): self._init_hr() objects = [mock.Mock(id=123), mock.Mock(id=321), mock.Mock(id=213)] # only first 
object will be managed by this engine self.assertEqual([objects[0]], self.hr.get_subset(objects)) sahara-12.0.0/sahara/tests/unit/service/test_quotas.py0000664000175000017500000002025213656752032023032 0ustar zuulzuul00000000000000# Copyright (c) 2015 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from unittest import mock from oslo_utils import uuidutils import testtools from sahara import exceptions as exc from sahara.service import quotas from sahara.tests.unit import base class FakeFlavor(object): def __init__(self, ram, vcpu): self.ram = ram self.vcpus = vcpu class FakeNovaClient(object): def __init__(self, lims): self.lims = lims @property def limits(self): return self def to_dict(self): return self.lims def get(self): return self @property def flavors(self): return {'id1': FakeFlavor(4, 2)} class CinderLimit(object): def __init__(self, name, value): self.name = name self.value = value class FakeCinderClient(object): def __init__(self, lims): self.lims = lims @property def limits(self): return self @property def absolute(self): return self.lims def get(self): return self def to_dict(self): return self class FakeNeutronClient(object): def __init__(self, lims): self.lims = lims def show_quota(self, tenant_id): return {'quota': self.lims} def list_floatingips(self, tenant_id): return {'floatingips': [1, 2, 3, 4, 5]} def list_security_groups(self, tenant_id): return {'security_groups': [1, 2, 3, 4, 5, 6, 7]} def list_security_group_rules(self, tenant_id): return {'security_groups_rules': [1, 2, 3]} def list_ports(self, tenant_id): return {'ports': []} class FakeCluster(object): def __init__(self, node_groups): self.node_groups = node_groups class FakeNodeGroup(object): def __init__(self, count, auto_sg, volumes_size, volumes_per_node, pool, flavor_id, ports): self.count = count self.auto_security_group = auto_sg self.volumes_size = volumes_size self.volumes_per_node = volumes_per_node self.floating_ip_pool = pool self.flavor_id = flavor_id self.open_ports = ports self.id = uuidutils.generate_uuid() nova_limits = { 'absolute': { 'maxTotalRAMSize': 10, 'totalRAMUsed': 1, 'maxTotalCores': 15, 'totalCoresUsed': 5, 'maxTotalInstances': 5, 'totalInstancesUsed': 2, 'maxTotalFloatingIps': 300, 'totalFloatingIpsUsed': 100, 'maxSecurityGroups': 50, 'totalSecurityGroupsUsed': 22, 'maxSecurityGroupRules': -1, # unlimited quota test } } neutron_limits = { 'floatingip': 2345, 'security_group': 1523, 'security_group_rule': 332, 'port': -1 } cinder_limits = [ CinderLimit(name='maxTotalVolumes', value=5), CinderLimit(name='totalVolumesUsed', value=3), CinderLimit(name='maxTotalVolumeGigabytes', value=10), CinderLimit(name='totalGigabytesUsed', value=2) ] class TestQuotas(base.SaharaTestCase): LIST_LIMITS = ['ram', 'cpu', 'instances', 'floatingips', 'security_groups', 'security_group_rules', 'ports', 'volumes', 'volume_gbs'] def test_get_zero_limits(self): res = quotas._get_zero_limits() self.assertEqual(9, len(res)) for key in self.LIST_LIMITS: self.assertEqual(0, res[key]) 
    @mock.patch('sahara.service.quotas._get_avail_limits')
    def test_check_limits(self, mock_avail_limits):
        avail_limits = {}
        req_limits = {}
        for key in self.LIST_LIMITS:
            avail_limits[key] = quotas.UNLIMITED
            req_limits[key] = 100500
        mock_avail_limits.return_value = avail_limits
        self.assertIsNone(quotas._check_limits(req_limits))

        for key in self.LIST_LIMITS:
            avail_limits[key] = 2
            req_limits[key] = 1
        mock_avail_limits.return_value = avail_limits
        self.assertIsNone(quotas._check_limits(req_limits))

        for key in self.LIST_LIMITS:
            req_limits[key] = 2
        self.assertIsNone(quotas._check_limits(req_limits))

        for key in self.LIST_LIMITS:
            req_limits[key] = 3
        self.assertRaises(exc.QuotaException,
                          quotas._check_limits, req_limits)

    @mock.patch('sahara.utils.openstack.nova.client')
    def test_update_limits_for_ng(self, nova_mock):
        flavor_mock = mock.Mock()
        type(flavor_mock).ram = mock.PropertyMock(return_value=4)
        type(flavor_mock).vcpus = mock.PropertyMock(return_value=2)
        flavor_get_mock = mock.Mock()
        flavor_get_mock.get.return_value = flavor_mock
        type(nova_mock.return_value).flavors = mock.PropertyMock(
            return_value=flavor_get_mock)
        ng = mock.Mock()
        type(ng).flavor_id = mock.PropertyMock(return_value=3)
        type(ng).floating_ip_pool = mock.PropertyMock(return_value='pool')
        type(ng).volumes_per_node = mock.PropertyMock(return_value=4)
        type(ng).volumes_size = mock.PropertyMock(return_value=5)
        type(ng).auto_security_group = mock.PropertyMock(return_value=True)
        type(ng).open_ports = mock.PropertyMock(return_value=[1111, 2222])
        limits = quotas._get_zero_limits()
        quotas._update_limits_for_ng(limits, ng, 3)

        self.assertEqual(3, limits['instances'])
        self.assertEqual(12, limits['ram'])
        self.assertEqual(6, limits['cpu'])
        self.assertEqual(3, limits['floatingips'])
        self.assertEqual(12, limits['volumes'])
        self.assertEqual(60, limits['volume_gbs'])
        self.assertEqual(1, limits['security_groups'])
        self.assertEqual(5, limits['security_group_rules'])
        self.assertEqual(3, limits['ports'])

    @mock.patch('sahara.utils.openstack.nova.client',
                return_value=FakeNovaClient(nova_limits))
    def test_get_nova_limits(self, nova):
        self.assertEqual(
            {'cpu': 10, 'instances': 3, 'ram': 9}, quotas._get_nova_limits())

    @mock.patch('sahara.utils.openstack.cinder.client',
                return_value=FakeCinderClient(cinder_limits))
    def test_get_cinder_limits(self, cinder):
        self.assertEqual({'volumes': 2, 'volume_gbs': 8},
                         quotas._get_cinder_limits())

    @mock.patch('sahara.utils.openstack.neutron.client',
                return_value=FakeNeutronClient(neutron_limits))
    def test_neutron_limits(self, neutron):
        self.assertEqual({'floatingips': 2340,
                          'ports': 'unlimited',
                          'security_group_rules': 332,
                          'security_groups': 1516},
                         quotas._get_neutron_limits())

    @mock.patch("sahara.utils.openstack.cinder.check_cinder_exists",
                return_value=True)
    @mock.patch('sahara.utils.openstack.nova.client',
                return_value=FakeNovaClient(nova_limits))
    @mock.patch('sahara.utils.openstack.cinder.client',
                return_value=FakeCinderClient(cinder_limits))
    @mock.patch('sahara.utils.openstack.neutron.client',
                return_value=FakeNeutronClient(neutron_limits))
    def test_limits_for_cluster(self, p1, p2, p3, p4):
        ng = [FakeNodeGroup(1, False, 0, 0, None, 'id1', [1, 2, 3])]
        quotas.check_cluster(FakeCluster(ng))

        with testtools.ExpectedException(exc.QuotaException):
            quotas.check_cluster(FakeCluster([FakeNodeGroup(
                1, False, 3, 3, None, 'id1', [1, 2, 3])]))

        ng = [FakeNodeGroup(1, False, 0, 0, None, 'id1', [1, 2, 3]),
              FakeNodeGroup(0, False, 0, 0, None, 'id1', [1, 2, 3])]
        quotas.check_scaling(FakeCluster(ng), {}, {ng[1].id: 1})
        with 
testtools.ExpectedException(exc.QuotaException): quotas.check_scaling(FakeCluster(ng), {}, {ng[1].id: 3}) sahara-12.0.0/sahara/tests/unit/service/edp/0000775000175000017500000000000013656752227020662 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/job_binaries/0000775000175000017500000000000013656752227023310 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/job_binaries/__init__.py0000664000175000017500000000000013656752032025401 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/job_binaries/s3/0000775000175000017500000000000013656752227023635 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/job_binaries/s3/__init__.py0000664000175000017500000000000013656752032025726 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/job_binaries/s3/test_s3_type.py0000664000175000017500000000475313656752032026637 0ustar zuulzuul00000000000000# Copyright (c) 2017 Massachusetts Open Cloud # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import testtools from unittest import mock from sahara import exceptions as ex from sahara.service.edp.job_binaries.s3.implementation import S3Type from sahara.tests.unit import base class TestS3Type(base.SaharaTestCase): def setUp(self): super(TestS3Type, self).setUp() self.i_s = S3Type() @mock.patch('sahara.service.edp.job_binaries.s3.implementation.S3Type.' 
'get_raw_data') def test_copy_binary_to_cluster(self, get_raw_data): remote = mock.Mock() job_binary = mock.Mock() job_binary.name = 'test' job_binary.url = 's3://somebinary' get_raw_data.return_value = 'test' res = self.i_s.copy_binary_to_cluster(job_binary, remote=remote) self.assertEqual('/tmp/test', res) remote.write_file_to.assert_called_with( '/tmp/test', 'test') def test_validate_job_location_format(self): self.assertTrue( self.i_s.validate_job_location_format("s3://temp/temp")) self.assertFalse( self.i_s.validate_job_location_format("s4://temp/temp")) self.assertFalse(self.i_s.validate_job_location_format("s3:///")) def test_validate(self): data = {"extra": {}, "url": "s3://temp/temp"} with testtools.ExpectedException(ex.InvalidDataException): self.i_s.validate(data) data["extra"] = {"accesskey": "a", "secretkey": "s", "endpoint": "e"} self.i_s.validate(data) data["extra"].pop("accesskey") with testtools.ExpectedException(ex.InvalidDataException): self.i_s.validate(data) @mock.patch('sahara.service.edp.s3_common.get_raw_job_binary_data') def test_get_raw_data(self, s3_get_raw_jbd): self.i_s.get_raw_data('a job binary') self.assertEqual(1, s3_get_raw_jbd.call_count) sahara-12.0.0/sahara/tests/unit/service/edp/job_binaries/swift/0000775000175000017500000000000013656752227024444 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/job_binaries/swift/__init__.py0000664000175000017500000000000013656752032026535 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/job_binaries/swift/test_swift_type.py0000664000175000017500000001537513656752032030257 0ustar zuulzuul00000000000000# Copyright (c) 2017 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import testtools from unittest import mock import sahara.exceptions as ex from sahara.service.castellan import config as castellan from sahara.service.edp.job_binaries.swift.implementation import SwiftType from sahara.tests.unit import base class TestSwiftType(base.SaharaTestCase): def setUp(self): super(TestSwiftType, self).setUp() castellan.validate_config() self.i_s = SwiftType() def test_validate_job_location_format(self): self.assertFalse(self.i_s.validate_job_location_format('')) self.assertFalse(self.i_s.validate_job_location_format('swift://')) self.assertTrue(self.i_s. validate_job_location_format('swift://123')) def test_validate(self): data = {} self.i_s.validate(data, job_binary_id='id') with testtools.ExpectedException(ex.BadJobBinaryException): self.i_s.validate(data) self.override_config('use_domain_for_proxy_users', True) self.i_s.validate(data) self.override_config('use_domain_for_proxy_users', False) data = { "extra": { "user": "user", "password": "pass" } } self.i_s.validate(data) @mock.patch('sahara.service.edp.job_binaries.' 
'swift.implementation.SwiftType.get_raw_data') def test_copy_binary_to_cluster(self, get_raw_data): remote = mock.Mock() job_binary = mock.Mock() job_binary.name = 'test' job_binary.url = 'swift://somebinary' get_raw_data.return_value = 'test' res = self.i_s.copy_binary_to_cluster(job_binary, remote=remote) self.assertEqual('/tmp/test', res) remote.write_file_to.assert_called_with( '/tmp/test', 'test') def test__get_raw_data(self): client_instance = mock.Mock() client_instance.head_object = mock.Mock() client_instance.get_object = mock.Mock() job_binary = mock.Mock() job_binary.url = 'swift://container/object' # an object that is too large should raise an exception header = {'content-length': '2048'} client_instance.head_object.return_value = header self.override_config('job_binary_max_KB', 1) self.assertRaises(ex.DataTooBigException, self.i_s._get_raw_data, job_binary, client_instance) client_instance.head_object.assert_called_once_with('container', 'object') # valid return header = {'content-length': '4'} body = 'data' client_instance.head_object.return_value = header client_instance.get_object.return_value = (header, body) self.assertEqual(body, self.i_s._get_raw_data(job_binary, client_instance)) client_instance.get_object.assert_called_once_with('container', 'object') def test_validate_job_binary_url(self): job_binary = mock.Mock() # bad swift url should raise an exception job_binary.url = 'notswift://container/object' with testtools.ExpectedException(ex.BadJobBinaryException): self.i_s._validate_job_binary_url(job_binary) # specifying only a container should raise an exception job_binary.url = 'swift://container' with testtools.ExpectedException(ex.BadJobBinaryException): self.i_s._validate_job_binary_url(job_binary) job_binary.url = 'swift://container/path' self.i_s._validate_job_binary_url(job_binary) @mock.patch('sahara.service.edp.job_binaries.swift.implementation.' 'SwiftType._get_raw_data') @mock.patch('sahara.utils.openstack.swift.client') def test_get_raw_data(self, swift_client, get_raw_data): client_instance = mock.Mock() swift_client.return_value = client_instance job_binary = mock.Mock() job_binary.url = 'swift://container/object' # embedded credentials job_binary.extra = dict(user='test', password='secret') self.i_s.get_raw_data(job_binary) swift_client.assert_called_with(username='test', password='secret') get_raw_data.assert_called_with(job_binary, client_instance) # proxy configs should override embedded credentials proxy_configs = dict(proxy_username='proxytest', proxy_password='proxysecret', proxy_trust_id='proxytrust') self.i_s.get_raw_data(job_binary, proxy_configs=proxy_configs) swift_client.assert_called_with(username='proxytest', password='proxysecret', trust_id='proxytrust') get_raw_data.assert_called_with(job_binary, client_instance) @mock.patch('sahara.utils.openstack.base.url_for') @mock.patch('sahara.context.ctx') @mock.patch( 'sahara.service.edp.job_binaries.swift.' 
'implementation.SwiftType._get_raw_data') @mock.patch('swiftclient.Connection') def test_get_raw_data_with_context(self, swift_client, _get_raw_data, ctx, url_for): client_instance = mock.Mock() swift_client.return_value = client_instance test_context = mock.Mock() test_context.auth_token = 'testtoken' test_context.auth_plugin = None ctx.return_value = test_context url_for.return_value = 'url_for' job_binary = mock.Mock() job_binary.url = 'swift://container/object' job_binary.extra = dict(user='test', password='secret') self.i_s.get_raw_data(job_binary, with_context=True) self.assertEqual([mock.call( auth_version='3', cacert=None, insecure=False, max_backoff=10, preauthtoken='testtoken', preauthurl='url_for', retries=5, retry_on_ratelimit=True, starting_backoff=10)], swift_client.call_args_list) _get_raw_data.assert_called_with(job_binary, client_instance) sahara-12.0.0/sahara/tests/unit/service/edp/job_binaries/internal_db/0000775000175000017500000000000013656752227025571 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/job_binaries/internal_db/__init__.py0000664000175000017500000000000013656752032027662 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/job_binaries/internal_db/test_internal_db_type.py0000664000175000017500000000712313656752032032521 0ustar zuulzuul00000000000000# Copyright (c) 2017 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from unittest import mock from oslo_utils import uuidutils import testtools from sahara import exceptions as ex from sahara.service.edp.job_binaries.internal_db import implementation class TestInternalDBType(testtools.TestCase): def setUp(self): super(TestInternalDBType, self).setUp() self.internal_db = implementation.InternalDBType() def test_validate_job_location_format(self): self.assertFalse(self.internal_db. validate_job_location_format('')) self.assertFalse(self.internal_db. validate_job_location_format('invalid-scheme://')) self.assertFalse(self.internal_db. validate_job_location_format('internal-db://abc')) self.assertTrue(self.internal_db. validate_job_location_format( 'internal-db://' + uuidutils.generate_uuid())) @mock.patch('sahara.conductor.API.job_binary_internal_get_raw_data') def test_copy_binary_to_cluster(self, conductor_get_raw_data): remote = mock.Mock() context = mock.Mock() conductor_get_raw_data.return_value = 'ok' job_binary = mock.Mock() job_binary.name = 'test' job_binary.url = 'internal-db://somebinary' res = self.internal_db.copy_binary_to_cluster(job_binary, context=context, remote=remote) self.assertEqual('/tmp/test', res) remote.write_file_to.assert_called_with( '/tmp/test', 'ok') @mock.patch('sahara.conductor.API.job_binary_internal_get_raw_data') def test_get_raw_data(self, conductor_get_raw_data): context = mock.Mock() conductor_get_raw_data.return_value = 'ok' job_binary = mock.Mock() job_binary.url = 'internal-db://somebinary' self.internal_db.get_raw_data(job_binary, context=context) @mock.patch('sahara.service.validations.edp.base.' 
'check_job_binary_internal_exists') def test_data_validation(self, check_exists): data = { 'url': '', 'description': 'empty url' } with testtools.ExpectedException(ex.InvalidDataException): self.internal_db.validate(data) data = { 'url': 'invalid-url://', 'description': 'not empty, but invalid url' } with testtools.ExpectedException(ex.InvalidDataException): self.internal_db.validate(data) data = { 'url': 'internal-db://must-be-uuid', 'description': 'correct scheme, but not netloc is not uuid' } with testtools.ExpectedException(ex.InvalidDataException): self.internal_db.validate(data) data = { 'url': 'internal-db://' + uuidutils.generate_uuid(), 'description': 'correct scheme and netloc' } self.internal_db.validate(data) sahara-12.0.0/sahara/tests/unit/service/edp/job_binaries/test_base.py0000664000175000017500000000231613656752032025627 0ustar zuulzuul00000000000000# Copyright (c) 2017 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from unittest import mock from sahara.service.edp.job_binaries import base as jb_base from sahara.tests.unit import base class _FakeJobBinary(jb_base.JobBinaryType): def copy_binary_to_cluster(self, job_binary, **kwargs): return 'valid path' class JobBinaryManagerSupportTest(base.SaharaTestCase): def setUp(self): super(JobBinaryManagerSupportTest, self).setUp() self.job_binary = _FakeJobBinary() def test_generate_valid_path(self): jb = mock.Mock() jb.name = 'jb_name.jar' res = self.job_binary._generate_valid_path(jb) self.assertEqual('/tmp/jb_name.jar', res) sahara-12.0.0/sahara/tests/unit/service/edp/job_binaries/manila/0000775000175000017500000000000013656752227024551 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/job_binaries/manila/__init__.py0000664000175000017500000000000013656752032026642 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/job_binaries/manila/test_manila_type.py0000664000175000017500000001376113656752032030466 0ustar zuulzuul00000000000000# Copyright (c) 2017 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
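# NOTE: the tests below exercise the manila job binary type end to end:
# manila:// URL format checks (UUID netloc plus a non-empty path),
# copy_binary_to_cluster() path resolution against cluster and node group
# share mounts, on-demand mounting in prepare_cluster() when the share is
# not mounted yet, and the NotImplementedException raised by get_raw_data().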
from unittest import mock from oslo_utils import uuidutils import testtools import sahara.exceptions as ex from sahara.service.edp.job_binaries.manila import implementation from sahara.tests.unit import base class _FakeShare(object): def __init__(self, id, share_proto='NFS'): self.id = id self.share_proto = share_proto class TestManilaType(base.SaharaTestCase): def setUp(self): super(TestManilaType, self).setUp() self.manila_type = implementation.ManilaType() def test_validate_job_location_format(self): invalid_url_1 = 'manila://abc' invalid_url_2 = 'manila://' + uuidutils.generate_uuid() valid_url = 'manila://' + uuidutils.generate_uuid() + '/path' self.assertFalse(self.manila_type.validate_job_location_format('')) self.assertFalse(self.manila_type. validate_job_location_format(invalid_url_1)) self.assertFalse(self.manila_type. validate_job_location_format(invalid_url_2)) self.assertTrue(self.manila_type. validate_job_location_format(valid_url)) @mock.patch('sahara.service.edp.utils.shares.default_mount') @mock.patch('sahara.utils.openstack.manila.client') def test_copy_binary_to_cluster(self, f_manilaclient, default_mount): cluster_shares = [ {'id': 'the_share_id', 'path': '/mnt/mymountpoint'} ] ng_shares = [ {'id': 'the_share_id', 'path': '/mnt/othermountpoint'}, {'id': '123456', 'path': '/mnt/themountpoint'} ] job_binary = mock.Mock() job_binary.url = 'manila://the_share_id/the_path' remote = mock.Mock() remote.instance.node_group.cluster.shares = cluster_shares remote.instance.node_group.shares = ng_shares info = self.manila_type.copy_binary_to_cluster(job_binary, remote=remote) self.assertItemsEqual('/mnt/mymountpoint/the_path', info) job_binary.url = 'manila://123456/the_path' info = self.manila_type.copy_binary_to_cluster(job_binary, remote=remote) self.assertItemsEqual('/mnt/themountpoint/the_path', info) # missing id default_mount.return_value = '/mnt/missing_id' job_binary.url = 'manila://missing_id/the_path' info = self.manila_type.copy_binary_to_cluster(job_binary, remote=remote) self.assertItemsEqual('/mnt/missing_id/the_path', info) @mock.patch('sahara.utils.openstack.manila.client') @mock.patch('sahara.conductor.API.cluster_update') @mock.patch('sahara.service.edp.utils.shares.mount_shares') def test_prepare_cluster(self, mount_shares, cluster_update, f_manilaclient): cluster_shares = [ {'id': 'the_share_id', 'path': '/mnt/mymountpoint'} ] ng_shares = [ {'id': 'the_share_id', 'path': '/mnt/othermountpoint'}, {'id': '123456', 'path': '/mnt/themountpoint'} ] job_binary = mock.Mock() remote = mock.Mock() remote.instance.node_group.cluster.shares = cluster_shares remote.instance.node_group.shares = ng_shares # This should return a default path, and should cause # a mount at the default location share = _FakeShare("missing_id") f_manilaclient.return_value = mock.Mock(shares=mock.Mock( get=mock.Mock(return_value=share))) job_binary.url = 'manila://missing_id/the_path' self.manila_type.prepare_cluster(job_binary, remote=remote) self.assertEqual(1, mount_shares.call_count) self.assertEqual(1, cluster_update.call_count) def test_get_raw_data(self): with testtools.ExpectedException(ex.NotImplementedException): self.manila_type.get_raw_data({}) def test_data_validation(self): data = { "name": "test", "url": "man://%s" % uuidutils.generate_uuid(), "type": "manila", "description": ("incorrect url schema for") } with testtools.ExpectedException(ex.InvalidDataException): self.manila_type.validate(data) data = { "name": "test", "url": "", "type": "manila", "description": ("empty url") } 
with testtools.ExpectedException(ex.InvalidDataException): self.manila_type.validate(data) data = { "name": "test", "url": "manila://bob", "type": "manila", "description": ("netloc is not a uuid") } with testtools.ExpectedException(ex.InvalidDataException): self.manila_type.validate(data) data = { "name": "test", "url": "manila://%s" % uuidutils.generate_uuid(), "type": "manila", "description": ("netloc is not a uuid") } with testtools.ExpectedException(ex.InvalidDataException): self.manila_type.validate(data) data = { "name": "test", "url": "manila://%s/foo" % uuidutils.generate_uuid(), "type": "manila", "description": ("correct url") } self.manila_type.validate(data) sahara-12.0.0/sahara/tests/unit/service/edp/job_binaries/job_binary_manager_support.py0000664000175000017500000000476113656752032031270 0ustar zuulzuul00000000000000# Copyright (c) 2017 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import testtools import sahara.exceptions as ex from sahara.service.edp.job_binaries import manager as jb_manager from sahara.tests.unit import base class JobBinaryManagerSupportTest(base.SaharaTestCase): def setUp(self): super(JobBinaryManagerSupportTest, self).setUp() jb_manager.setup_job_binaries() def test_job_binaries_loaded(self): jb_types = [jb.name for jb in jb_manager.JOB_BINARIES.get_job_binaries()] self.assertIn('internal-db', jb_types) self.assertIn('manila', jb_types) self.assertIn('swift', jb_types) def test_get_job_binary_by_url(self): with testtools.ExpectedException(ex.InvalidDataException): jb_manager.JOB_BINARIES.get_job_binary_by_url('') with testtools.ExpectedException(ex.InvalidDataException): jb_manager.JOB_BINARIES.get_job_binary_by_url('internal-db') self.assertEqual('internal-db', jb_manager.JOB_BINARIES .get_job_binary_by_url('internal-db://').name) self.assertEqual('manila', jb_manager.JOB_BINARIES .get_job_binary_by_url('manila://').name) self.assertEqual('swift', jb_manager.JOB_BINARIES .get_job_binary_by_url('swift://').name) def test_get_job_binary(self): with testtools.ExpectedException(ex.InvalidDataException): jb_manager.JOB_BINARIES.get_job_binary('') with testtools.ExpectedException(ex.InvalidDataException): jb_manager.JOB_BINARIES.get_job_binary('internaldb') self.assertEqual('internal-db', jb_manager.JOB_BINARIES .get_job_binary('internal-db').name) self.assertEqual('manila', jb_manager.JOB_BINARIES .get_job_binary('manila').name) self.assertEqual('swift', jb_manager.JOB_BINARIES .get_job_binary('swift').name) sahara-12.0.0/sahara/tests/unit/service/edp/__init__.py0000664000175000017500000000000013656752032022753 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/storm/0000775000175000017500000000000013656752227022026 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/storm/__init__.py0000664000175000017500000000000013656752032024117 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/storm/test_storm.py0000664000175000017500000004064213656752032024603 0ustar zuulzuul00000000000000# 
Copyright (c) 2015 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import os from unittest import mock import sahara.exceptions as ex from sahara.service.edp.job_utils import ds_manager from sahara.service.edp.storm import engine as se from sahara.service.edp.storm.engine import jb_manager from sahara.tests.unit import base from sahara.utils import edp class TestStorm(base.SaharaTestCase): def setUp(self): super(TestStorm, self).setUp() self.master_host = "master" self.master_inst = "6789" self.storm_topology_name = "MyJob_ed8347a9-39aa-477c-8108-066202eb6130" self.workflow_dir = "/wfdir" jb_manager.setup_job_binaries() ds_manager.setup_data_sources() def test_get_topology_and_inst_id(self): '''Test parsing of job ids Test that job ids of the form topology_name@instance are split into topology_name and instance ids by eng._get_topology_name_and_inst_id() but anything else returns empty strings ''' eng = se.StormJobEngine(None) for job_id in [None, "", "@", "something", "topology_name@", "@instance"]: topology_name, inst_id = eng._get_topology_and_inst_id(job_id) self.assertEqual(("", ""), (topology_name, inst_id)) topology_name, inst_id = eng._get_topology_and_inst_id( "topology_name@instance") self.assertEqual(("topology_name", "instance"), (topology_name, inst_id)) @mock.patch('sahara.utils.cluster.get_instances') def test_get_instance_if_running(self, get_instances): '''Test retrieval of topology_name and instance object for running job If the job id is valid and the job status is non-terminated, _get_instance_if_running() should retrieve the instance based on the inst_id and return the topology_name and instance. If the job is invalid or the job is terminated, it should return None, None. 
If get_instances() throws an exception or returns an empty list, the instance returned should be None (topology_name might still be set) ''' get_instances.return_value = ["instance"] job_exec = mock.Mock() eng = se.StormJobEngine("cluster") job_exec.engine_job_id = "invalid id" self.assertEqual((None, None), eng._get_instance_if_running(job_exec)) job_exec.engine_job_id = "topology_name@inst_id" for state in edp.JOB_STATUSES_TERMINATED: job_exec.info = {'status': state} self.assertEqual((None, None), eng._get_instance_if_running(job_exec)) job_exec.info = {'status': edp.JOB_STATUS_RUNNING} self.assertEqual(("topology_name", "instance"), eng._get_instance_if_running(job_exec)) get_instances.assert_called_with("cluster", ["inst_id"]) # Pretend get_instances returns nothing get_instances.return_value = [] topology_name, instance = eng._get_instance_if_running(job_exec) self.assertIsNone(instance) # Pretend get_instances throws an exception get_instances.side_effect = Exception("some failure") topology_name, instance = eng._get_instance_if_running(job_exec) self.assertIsNone(instance) @mock.patch('sahara.plugins.utils.get_instance') @mock.patch('sahara.utils.cluster.get_instances') @mock.patch('sahara.utils.remote.get_remote') @mock.patch('sahara.conductor.API.job_get') @mock.patch('sahara.context.ctx', return_value="ctx") def test_get_job_status_from_remote(self, get_instance, get_instances, get_remote, ctx, job_get): '''Test retrieval of job status from remote instance If the process is present, status is RUNNING If the process is not present, status depends on the result file If the result file is missing, status is DONEWITHERROR ''' eng = se.StormJobEngine("cluster") job_exec = mock.Mock() master_instance = self._make_master_instance() master_instance.execute_command.return_value = 0, "ACTIVE" get_remote.return_value.__enter__ = mock.Mock( return_value=master_instance) get_instance.return_value = master_instance get_instances.return_value = ["instance"] # Pretend process is running job_exec.engine_job_id = "topology_name@inst_id" job_exec.info = {'status': edp.JOB_STATUS_RUNNING} job_exec.job_configs = {"configs": {"topology_name": "topology_name"}} status = eng._get_job_status_from_remote(job_exec) self.assertEqual({"status": edp.JOB_STATUS_RUNNING}, status) @mock.patch.object(se.StormJobEngine, '_get_job_status_from_remote', autospec=True) @mock.patch.object(se.StormJobEngine, '_get_instance_if_running', autospec=True) @mock.patch('sahara.utils.remote.get_remote') def test_get_job_status(self, get_remote, _get_instance_if_running, _get_job_status_from_remote): # This is to mock "with remote.get_remote(instance) as r" remote_instance = mock.Mock() get_remote.return_value.__enter__ = mock.Mock( return_value=remote_instance) # Pretend instance is not returned _get_instance_if_running.return_value = "topology_name", None job_exec = mock.Mock() eng = se.StormJobEngine("cluster") status = eng.get_job_status(job_exec) self.assertIsNone(status) # Pretend we have an instance _get_instance_if_running.return_value = "topology_name", "instance" _get_job_status_from_remote.return_value = {"status": edp.JOB_STATUS_RUNNING} status = eng.get_job_status(job_exec) _get_job_status_from_remote.assert_called_with(eng, job_exec, 3) self.assertEqual({"status": edp.JOB_STATUS_RUNNING}, status) @mock.patch.object(se.StormJobEngine, '_get_instance_if_running', autospec=True, return_value=(None, None)) @mock.patch('sahara.utils.remote.get_remote') @mock.patch('sahara.conductor.API.job_get') 
@mock.patch('sahara.context.ctx', return_value="ctx") def test_cancel_job_null_or_done(self, get_remote, _get_instance_if_running, job_get, ctx): '''Test cancel_job() when instance is None Test that cancel_job() returns None and does not try to retrieve a remote instance if _get_instance_if_running() returns None ''' eng = se.StormJobEngine("cluster") job_exec = mock.Mock() self.assertIsNone(eng.cancel_job(job_exec)) self.assertFalse(get_remote.called) @mock.patch.object(se.StormJobEngine, '_get_job_status_from_remote', autospec=True, return_value={"status": edp.JOB_STATUS_KILLED}) @mock.patch('sahara.utils.cluster.get_instances') @mock.patch('sahara.plugins.utils.get_instance') @mock.patch('sahara.utils.remote.get_remote') def test_cancel_job(self, get_remote, get_instance, get_instances, _get_job_status_from_remote): master_instance = self._make_master_instance() status = self._setup_tests(master_instance) get_instance.return_value = master_instance get_instances.return_value = ["instance"] master_instance.execute_command.return_value = 0, "KILLED" get_remote.return_value.__enter__ = mock.Mock( return_value=master_instance) eng = se.StormJobEngine("cluster") job_exec = mock.Mock() job_exec.engine_job_id = "topology_name@inst_id" job_exec.info = {'status': edp.JOB_STATUS_RUNNING} job_exec.job_configs = {"configs": {"topology_name": "topology_name"}} status = eng.cancel_job(job_exec) master_instance.execute_command.assert_called_with( "/usr/local/storm/bin/storm kill -c nimbus.host=%s topology_name " "> /dev/null 2>&1 & echo $!" % self.master_host) self.assertEqual({"status": edp.JOB_STATUS_KILLED}, status) @mock.patch('sahara.service.edp.storm.engine.jb_manager') @mock.patch('sahara.utils.remote.get_remote') def test_upload_job_files(self, get_remote, jb_manager): main_names = ["main1", "main2", "main3"] lib_names = ["lib1", "lib2", "lib3"] def make_data_objects(*args): objs = [] for name in args: m = mock.Mock() m.name = name objs.append(m) return objs job = mock.Mock() job.id = "job_exec_id" job.mains = make_data_objects(*main_names) job.libs = make_data_objects(*lib_names) # This is to mock "with remote.get_remote(instance) as r" remote_instance = mock.Mock() get_remote.return_value.__enter__ = mock.Mock( return_value=remote_instance) remote_instance.instance.node_group.cluster.shares = [] remote_instance.instance.node_group.shares = [] JOB_BINARIES = mock.Mock() mock_jb = mock.Mock() jb_manager.JOB_BINARIES = JOB_BINARIES JOB_BINARIES.get_job_binary_by_url = mock.Mock(return_value=mock_jb) jbs = main_names + lib_names mock_jb.copy_binary_to_cluster = mock.Mock( side_effect=['/tmp/%s.%s' % (job.id, j) for j in jbs]) eng = se.StormJobEngine("cluster") eng._prepare_job_binaries = mock.Mock() paths = eng._upload_job_files("where", "/somedir", job, {}) self.assertEqual(['/tmp/%s.%s' % (job.id, j) for j in jbs], paths) def _make_master_instance(self, return_code=0): master = mock.Mock() master.execute_command.return_value = (return_code, self.storm_topology_name) master.get_python_version.return_value = 'python' master.hostname.return_value = self.master_host master.id = self.master_inst return master @mock.patch('sahara.conductor.API.job_execution_get') @mock.patch('sahara.utils.remote.get_remote') @mock.patch('sahara.plugins.utils.get_instance') @mock.patch('sahara.conductor.API.job_get') @mock.patch('sahara.context.ctx', return_value="ctx") def _setup_tests(self, master_instance, ctx, job_get, get_instance, get_remote, job_exec_get): # This is to mock "with remote.get_remote(master) as 
r" in run_job get_remote.return_value.__enter__ = mock.Mock( return_value=master_instance) get_instance.return_value = master_instance @mock.patch.object(se.StormJobEngine, '_generate_topology_name', autospec=True, return_value=( "MyJob_ed8347a9-39aa-477c-8108-066202eb6130")) @mock.patch('sahara.conductor.API.job_execution_update') @mock.patch('sahara.conductor.API.job_execution_get') @mock.patch('sahara.utils.remote.get_remote') @mock.patch('sahara.service.edp.job_utils.create_workflow_dir') @mock.patch('sahara.plugins.utils.get_instance') @mock.patch('sahara.conductor.API.job_get') @mock.patch('sahara.context.ctx', return_value="ctx") def _setup_run_job(self, master_instance, job_configs, files, ctx, job_get, get_instance, create_workflow_dir, get_remote, job_exec_get, job_exec_update, _generate_topology_name): def _upload_job_files(where, job_dir, job, libs_subdir=True, job_configs=None): paths = [os.path.join(self.workflow_dir, f) for f in files['jars']] return paths job = mock.Mock() job.name = "MyJob" job_get.return_value = job job_exec = mock.Mock() job_exec.job_configs = job_configs create_workflow_dir.return_value = self.workflow_dir # This is to mock "with remote.get_remote(master) as r" in run_job get_remote.return_value.__enter__ = mock.Mock( return_value=master_instance) get_instance.return_value = master_instance eng = se.StormJobEngine("cluster") eng._upload_job_files = mock.Mock() eng._upload_job_files.side_effect = _upload_job_files status = eng.run_job(job_exec) # Check that we launch on the master node get_instance.assert_called_with("cluster", "nimbus") return status def test_run_job_raise(self): job_configs = { 'configs': {"edp.java.main_class": "org.me.myclass", "topology_name": "topology_name"}, } files = {'jars': ["app.jar"]} # The object representing the storm master node # The storm jar command will be run on this instance master_instance = self._make_master_instance(return_code=1) # If execute_command returns an error we should get a raise self.assertRaises(ex.EDPError, self._setup_run_job, master_instance, job_configs, files) def test_run_job(self): job_configs = { 'configs': {"edp.java.main_class": "org.me.myclass"} } files = {'jars': ["app.jar"]} # The object representing the storm master node # The storm jar command will be run on this instance master_instance = self._make_master_instance() status = self._setup_run_job(master_instance, job_configs, files) # Check the command master_instance.execute_command.assert_called_with( 'cd %(workflow_dir)s; ' './launch_command /usr/local/storm/bin/storm jar ' '-c nimbus.host=master ' '%(workflow_dir)s/app.jar org.me.myclass %(topology_name)s ' '> /dev/null 2>&1 & echo $!' 
% {"workflow_dir": self.workflow_dir, "topology_name": ( self.storm_topology_name)}) # Check result here self.assertEqual(("%s@%s" % (self.storm_topology_name, self.master_inst), edp.JOB_STATUS_RUNNING, {"storm-path": self.workflow_dir}), status) def test_run_job_args(self): job_configs = { 'configs': {"edp.java.main_class": "org.me.myclass"}, 'args': ['input_arg', 'output_arg'] } files = {'jars': ["app.jar"]} # The object representing the spark master node # The spark-submit command will be run on this instance master_instance = self._make_master_instance() status = self._setup_run_job(master_instance, job_configs, files) # Check the command master_instance.execute_command.assert_called_with( 'cd %(workflow_dir)s; ' './launch_command /usr/local/storm/bin/storm jar ' '-c nimbus.host=master ' '%(workflow_dir)s/app.jar org.me.myclass %(topology_name)s ' 'input_arg output_arg ' '> /dev/null 2>&1 & echo $!' % {"workflow_dir": self.workflow_dir, "topology_name": ( self.storm_topology_name)}) # Check result here self.assertEqual(("%s@%s" % (self.storm_topology_name, self.master_inst), edp.JOB_STATUS_RUNNING, {"storm-path": self.workflow_dir}), status) sahara-12.0.0/sahara/tests/unit/service/edp/utils/0000775000175000017500000000000013656752227022022 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/utils/test_shares.py0000664000175000017500000003345713656752032024726 0ustar zuulzuul00000000000000# Copyright (c) 2015 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. try: from manilaclient.common.apiclient import exceptions as manila_ex except ImportError: from manilaclient.openstack.common.apiclient import exceptions as manila_ex from unittest import mock from oslo_utils import uuidutils import testtools from sahara import exceptions from sahara.service.edp.utils import shares from sahara.tests.unit import base _NAMENODE_IPS = ['192.168.122.3', '192.168.122.4'] _DATANODE_IPS = ['192.168.122.5', '192.168.122.6', '192.168.122.7'] class _FakeShare(object): def __init__(self, id='12345678-1234-1234-1234-123456789012', share_proto='NFS', export_location='192.168.122.1:/path', access_list=None): self.id = id self.share_proto = share_proto self.export_location = export_location self.allow = mock.Mock() self.deny = mock.Mock() self.access_list = mock.Mock(return_value=access_list or []) def _mock_node_group(ips, share_list): # Returns a mocked node group and a list of mocked # execute_command functions for its instances. 
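    # The same mock is bound to both execute_command and get_os_distrib on
    # each instance remote; returning "centos" presumably steers the shares
    # code down the rpm/yum nfs-utils path asserted in _setup_calls().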
execute_mocks = [mock.Mock(return_value="centos") for ip in ips] get_id = mock.Mock(return_value=uuidutils.generate_uuid()) instances = [ mock.Mock( internal_ip=ip, remote=mock.Mock( return_value=mock.Mock( __enter__=mock.Mock( return_value=mock.Mock( execute_command=execute_mocks[index], get_os_distrib=execute_mocks[index])), __exit__=mock.Mock()))) for index, ip in enumerate(ips)] node_group = mock.Mock(instances=instances, shares=share_list, __getitem__=get_id) return node_group, execute_mocks def _setup_calls(): return [ mock.call('rpm -q nfs-utils || yum install -y nfs-utils', run_as_root=True)] def _expected_calls(local_path, remote_path, access_argument): return [ mock.call('mkdir -p %s' % local_path, run_as_root=True), mock.call("mount | grep '%(remote_path)s' | grep '%(local_path)s' | " "grep nfs || mount -t nfs %(access_argument)s " "%(remote_path)s %(local_path)s" % { "local_path": local_path, "remote_path": remote_path, "access_argument": access_argument }, run_as_root=True)] class TestShares(base.SaharaTestCase): @mock.patch('sahara.context.set_current_instance_id') @mock.patch('sahara.utils.openstack.manila.client') def test_mount_nfs_shares_to_ng(self, f_manilaclient, f_context): share = _FakeShare() f_manilaclient.return_value = mock.Mock( shares=mock.Mock( get=mock.Mock(return_value=share))) namenode_group, namenode_executors = _mock_node_group( _NAMENODE_IPS, [{ 'id': '12345678-1234-1234-1234-123456789012', 'access_level': 'rw', 'path': '/mnt/localpath' }]) datanode_group, datanode_executors = _mock_node_group( _DATANODE_IPS, []) cluster = mock.Mock( node_groups=[namenode_group, datanode_group], shares=[]) shares.mount_shares(cluster) permissions = [mock.call('ip', ip, 'rw') for ip in _NAMENODE_IPS] share.allow.assert_has_calls(permissions, any_order=True) for executor in namenode_executors: executor.assert_has_calls( _setup_calls() + _expected_calls('/mnt/localpath', '192.168.122.1:/path', '-w')) for executor in datanode_executors: self.assertEqual(0, executor.call_count) @mock.patch('sahara.context.set_current_instance_id') @mock.patch('sahara.utils.openstack.manila.client') def test_mount_nfs_shares_to_cluster(self, f_manilaclient, f_context): global_share = _FakeShare() namenode_only_share = _FakeShare( id='DEADBEEF-DEAD-BEEF-DEAD-BEEFDEADBEEF', export_location='192.168.122.2:/path') all_shares = {share.id: share for share in (global_share, namenode_only_share)} f_manilaclient.return_value = mock.Mock( shares=mock.Mock( get=mock.Mock( side_effect=lambda x: all_shares[x]))) namenode_group, namenode_executors = _mock_node_group( ['192.168.122.3', '192.168.122.4'], [ { 'id': '12345678-1234-1234-1234-123456789012', 'access_level': 'rw', 'path': '/mnt/localpath' }, { 'id': 'DEADBEEF-DEAD-BEEF-DEAD-BEEFDEADBEEF' } ]) datanode_group, datanode_executors = _mock_node_group( ['192.168.122.5', '192.168.122.6', '192.168.122.7'], []) cluster = mock.Mock( node_groups=[namenode_group, datanode_group], shares=[ { 'id': '12345678-1234-1234-1234-123456789012', 'access_level': 'ro', 'path': '/mnt/somanylocalpaths' } ]) shares.mount_shares(cluster) all_permissions = [mock.call('ip', ip, 'ro') for ip in _NAMENODE_IPS + _DATANODE_IPS] global_share.allow.assert_has_calls(all_permissions, any_order=True) namenode_permissions = [mock.call('ip', ip, 'rw') for ip in _NAMENODE_IPS] namenode_only_share.allow.assert_has_calls(namenode_permissions, any_order=True) for executor in namenode_executors: executor.assert_has_calls( _setup_calls() + _expected_calls('/mnt/somanylocalpaths', 
'192.168.122.1:/path', '-r') + _expected_calls('/mnt/DEADBEEF-DEAD-BEEF-DEAD-BEEFDEADBEEF', '192.168.122.2:/path', '-w'), any_order=True) self.assertEqual(6, executor.call_count) for executor in datanode_executors: executor.assert_has_calls( _setup_calls() + _expected_calls('/mnt/somanylocalpaths', '192.168.122.1:/path', '-r')) self.assertEqual(4, executor.call_count) @mock.patch('sahara.context.set_current_instance_id') @mock.patch('sahara.utils.openstack.manila.client') def test_share_does_not_exist(self, f_manilaclient, f_context): f_manilaclient.return_value = mock.Mock( shares=mock.Mock( get=mock.Mock( side_effect=manila_ex.NotFound))) namenode_group, namenode_executors = _mock_node_group( ['192.168.122.3', '192.168.122.4'], [ { 'id': '12345678-1234-1234-1234-123456789012', 'access_level': 'rw', 'path': '/mnt/localpath' }, { 'id': 'DEADBEEF-DEAD-BEEF-DEAD-BEEFDEADBEEF' } ]) datanode_group, datanode_executors = _mock_node_group( ['192.168.122.5', '192.168.122.6', '192.168.122.7'], []) cluster = mock.Mock( node_groups=[namenode_group, datanode_group], shares=[ { 'id': '12345678-1234-1234-1234-123456789012', 'access_level': 'ro', 'path': '/mnt/somanylocalpaths' } ]) with testtools.ExpectedException(exceptions.NotFoundException): shares.mount_shares(cluster) @mock.patch('sahara.context.set_current_instance_id') @mock.patch('sahara.utils.openstack.manila.client') def test_acl_exists_unexpected_type(self, f_manilaclient, f_context): share = _FakeShare(access_list=[mock.Mock( access_level='wat', access_to=ip, access_type='ip') for ip in _NAMENODE_IPS]) f_manilaclient.return_value = mock.Mock( shares=mock.Mock( get=mock.Mock(return_value=share))) namenode_group, namenode_executors = _mock_node_group( _NAMENODE_IPS, [{ 'id': '12345678-1234-1234-1234-123456789012', 'access_level': 'rw', 'path': '/mnt/localpath' }]) datanode_group, datanode_executors = _mock_node_group( _DATANODE_IPS, []) cluster = mock.Mock( node_groups=[namenode_group, datanode_group], shares=[]) shares.mount_shares(cluster) self.assertEqual(0, share.allow.call_count) for executor in namenode_executors: executor.assert_has_calls( _setup_calls() + _expected_calls('/mnt/localpath', '192.168.122.1:/path', '-w')) for executor in datanode_executors: self.assertEqual(0, executor.call_count) @mock.patch('sahara.context.set_current_instance_id') @mock.patch('sahara.utils.openstack.manila.client') def test_acl_exists_no_recreate(self, f_manilaclient, f_context): share = _FakeShare(access_list=[mock.Mock( access_level='rw', access_to=ip, access_type='ip') for ip in _NAMENODE_IPS]) f_manilaclient.return_value = mock.Mock( shares=mock.Mock( get=mock.Mock(return_value=share))) namenode_group, namenode_executors = _mock_node_group( _NAMENODE_IPS, [{ 'id': '12345678-1234-1234-1234-123456789012', 'access_level': 'ro', 'path': '/mnt/localpath' }]) datanode_group, datanode_executors = _mock_node_group( _DATANODE_IPS, []) cluster = mock.Mock( node_groups=[namenode_group, datanode_group], shares=[]) shares.mount_shares(cluster) self.assertEqual(0, share.allow.call_count) for executor in namenode_executors: executor.assert_has_calls( _setup_calls() + _expected_calls('/mnt/localpath', '192.168.122.1:/path', '-r')) for executor in datanode_executors: self.assertEqual(0, executor.call_count) @mock.patch('sahara.context.set_current_instance_id') @mock.patch('sahara.utils.openstack.manila.client') def test_acl_exists_recreate(self, f_manilaclient, f_context): share = _FakeShare(access_list=[mock.Mock( access_level='ro', access_to=ip, access_type='ip', 
id="access_id") for ip in _NAMENODE_IPS]) f_manilaclient.return_value = mock.Mock( shares=mock.Mock( get=mock.Mock(return_value=share))) namenode_group, namenode_executors = _mock_node_group( _NAMENODE_IPS, [{ 'id': '12345678-1234-1234-1234-123456789012', 'access_level': 'rw', 'path': '/mnt/localpath' }]) datanode_group, datanode_executors = _mock_node_group( _DATANODE_IPS, []) cluster = mock.Mock( node_groups=[namenode_group, datanode_group], shares=[]) shares.mount_shares(cluster) namenode_denials = [mock.call('access_id') for ip in _NAMENODE_IPS] share.deny.assert_has_calls(namenode_denials) namenode_permissions = [mock.call('ip', ip, 'rw') for ip in _NAMENODE_IPS] share.allow.assert_has_calls(namenode_permissions, any_order=True) for executor in namenode_executors: executor.assert_has_calls( _setup_calls() + _expected_calls('/mnt/localpath', '192.168.122.1:/path', '-w')) for executor in datanode_executors: self.assertEqual(0, executor.call_count) def test_get_share_path(self): share_list = [ {'id': 'the_share_id', 'path': '/mnt/mymountpoint'}, {'id': 'the_share_id', 'path': '/mnt/othermountpoint'}, {'id': '123456', 'path': '/mnt/themountpoint'} ] url = 'manila://the_share_id/the_path' path = shares.get_share_path(url, share_list) self.assertEqual("/mnt/mymountpoint/the_path", path) share_list.pop(0) path = shares.get_share_path(url, share_list) self.assertEqual("/mnt/othermountpoint/the_path", path) share_list.pop(0) path = shares.get_share_path(url, share_list) self.assertIsNone(path) @mock.patch('sahara.utils.openstack.manila.client') def test_get_share_path_default(self, f_manilaclient): share_list = [ {'id': 'i_have_no_mnt'} ] share = _FakeShare(share_list[0]['id']) f_manilaclient.return_value = mock.Mock( shares=mock.Mock( get=mock.Mock(return_value=share))) url = 'manila://i_have_no_mnt/the_path' path = shares.get_share_path(url, share_list) self.assertEqual("/mnt/i_have_no_mnt/the_path", path) sahara-12.0.0/sahara/tests/unit/service/edp/binary_retrievers/0000775000175000017500000000000013656752227024420 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/binary_retrievers/__init__.py0000664000175000017500000000000013656752032026511 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/binary_retrievers/test_manila.py0000664000175000017500000000565613656752032027300 0ustar zuulzuul00000000000000# Copyright (c) 2015 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
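# NOTE: get_file_info() is expected to resolve a manila:// job binary URL
# against existing cluster and node group share mounts without remounting;
# only an unmounted share id should trigger a default mount point, a call to
# mount_shares() and a cluster_update().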
from unittest import mock import sahara.service.edp.binary_retrievers.manila_share as ms from sahara.tests.unit import base class _FakeShare(object): def __init__(self, id, share_proto='NFS'): self.id = id self.share_proto = share_proto class TestManilaShare(base.SaharaTestCase): def setUp(self): super(TestManilaShare, self).setUp() @mock.patch('sahara.utils.openstack.manila.client') @mock.patch('sahara.conductor.API.cluster_update') @mock.patch('sahara.service.edp.utils.shares.mount_shares') def test_get_file_info(self, mount_shares, cluster_update, f_manilaclient): cluster_shares = [ {'id': 'the_share_id', 'path': '/mnt/mymountpoint'} ] ng_shares = [ {'id': 'the_share_id', 'path': '/mnt/othermountpoint'}, {'id': '123456', 'path': '/mnt/themountpoint'} ] job_binary = mock.Mock() job_binary.url = 'manila://the_share_id/the_path' remote = mock.Mock() remote.instance.node_group.cluster.shares = cluster_shares remote.instance.node_group.shares = ng_shares info = ms.get_file_info(job_binary, remote) self.assertItemsEqual({'path': '/mnt/mymountpoint/the_path', 'type': 'path'}, info) self.assertEqual(0, mount_shares.call_count) self.assertEqual(0, cluster_update.call_count) job_binary.url = 'manila://123456/the_path' info = ms.get_file_info(job_binary, remote) self.assertItemsEqual({'path': '/mnt/themountpoint/the_path', 'type': 'path'}, info) self.assertEqual(0, mount_shares.call_count) self.assertEqual(0, cluster_update.call_count) # This should return a default path, and should cause # a mount at the default location share = _FakeShare("missing_id") f_manilaclient.return_value = mock.Mock( shares=mock.Mock( get=mock.Mock(return_value=share))) job_binary.url = 'manila://missing_id/the_path' info = ms.get_file_info(job_binary, remote) self.assertItemsEqual({'path': '/mnt/missing_id/the_path', 'type': 'path'}, info) self.assertEqual(1, mount_shares.call_count) self.assertEqual(1, cluster_update.call_count) sahara-12.0.0/sahara/tests/unit/service/edp/binary_retrievers/test_internal_swift.py0000664000175000017500000001231113656752032031051 0ustar zuulzuul00000000000000# Copyright (c) 2014 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
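# NOTE: these checks mirror the swift job binary type tests earlier in this
# tree: the job_binary_max_KB size limit, swift:// URL validation (both a
# container and an object are required), embedded versus proxy credentials,
# and the token-authenticated connection built by get_raw_data_with_context().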
from unittest import mock import sahara.exceptions as ex from sahara.service.castellan import config as castellan from sahara.service.edp.binary_retrievers import internal_swift as i_s from sahara.tests.unit import base class TestInternalSwift(base.SaharaTestCase): def setUp(self): super(TestInternalSwift, self).setUp() castellan.validate_config() def test__get_raw_data(self): client_instance = mock.Mock() client_instance.head_object = mock.Mock() client_instance.get_object = mock.Mock() job_binary = mock.Mock() job_binary.url = 'swift://container/object' # an object that is too large should raise an exception header = {'content-length': '2048'} client_instance.head_object.return_value = header self.override_config('job_binary_max_KB', 1) self.assertRaises(ex.DataTooBigException, i_s._get_raw_data, job_binary, client_instance) client_instance.head_object.assert_called_once_with('container', 'object') # valid return header = {'content-length': '4'} body = 'data' client_instance.head_object.return_value = header client_instance.get_object.return_value = (header, body) self.assertEqual(body, i_s._get_raw_data(job_binary, client_instance)) client_instance.get_object.assert_called_once_with('container', 'object') def test__validate_job_binary_url(self): @i_s._validate_job_binary_url def empty_method(job_binary): pass job_binary = mock.Mock() # bad swift url should raise an exception job_binary.url = 'notswift://container/object' self.assertRaises(ex.BadJobBinaryException, empty_method, job_binary) # specifying a container should raise an exception job_binary.url = 'swift://container' self.assertRaises(ex.BadJobBinaryException, empty_method, job_binary) @mock.patch( 'sahara.service.edp.binary_retrievers.internal_swift._get_raw_data') @mock.patch('sahara.utils.openstack.swift.client') def test_get_raw_data(self, swift_client, _get_raw_data): client_instance = mock.Mock() swift_client.return_value = client_instance job_binary = mock.Mock() job_binary.url = 'swift://container/object' # embedded credentials job_binary.extra = dict(user='test', password='secret') i_s.get_raw_data(job_binary) swift_client.assert_called_with(username='test', password='secret') _get_raw_data.assert_called_with(job_binary, client_instance) # proxy configs should override embedded credentials proxy_configs = dict(proxy_username='proxytest', proxy_password='proxysecret', proxy_trust_id='proxytrust') i_s.get_raw_data(job_binary, proxy_configs) swift_client.assert_called_with(username='proxytest', password='proxysecret', trust_id='proxytrust') _get_raw_data.assert_called_with(job_binary, client_instance) @mock.patch('sahara.utils.openstack.base.url_for') @mock.patch('sahara.context.ctx') @mock.patch( 'sahara.service.edp.binary_retrievers.internal_swift._get_raw_data') @mock.patch('swiftclient.Connection') def test_get_raw_data_with_context(self, swift_client, _get_raw_data, ctx, url_for): client_instance = mock.Mock() swift_client.return_value = client_instance test_context = mock.Mock() test_context.auth_token = 'testtoken' test_context.auth_plugin = None ctx.return_value = test_context url_for.return_value = 'url_for' job_binary = mock.Mock() job_binary.url = 'swift://container/object' job_binary.extra = dict(user='test', password='secret') i_s.get_raw_data_with_context(job_binary) self.assertEqual([mock.call( auth_version='3', cacert=None, insecure=False, max_backoff=10, preauthtoken='testtoken', preauthurl='url_for', retries=5, retry_on_ratelimit=True, starting_backoff=10)], swift_client.call_args_list) 
_get_raw_data.assert_called_with(job_binary, client_instance) sahara-12.0.0/sahara/tests/unit/service/edp/binary_retrievers/test_dispatch.py0000664000175000017500000000530113656752032027621 0ustar zuulzuul00000000000000# Copyright (c) 2015 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from unittest import mock from sahara.service.edp.binary_retrievers import dispatch from sahara.tests.unit import base class TestDispatch(base.SaharaTestCase): def setUp(self): super(TestDispatch, self).setUp() @mock.patch('sahara.service.edp.s3_common.get_raw_job_binary_data') @mock.patch('sahara.service.edp.binary_retrievers.' 'manila_share.get_file_info') @mock.patch( 'sahara.service.edp.binary_retrievers.internal_swift.' 'get_raw_data_with_context') @mock.patch( 'sahara.service.edp.binary_retrievers.internal_swift.get_raw_data') @mock.patch('sahara.service.edp.binary_retrievers.sahara_db.get_raw_data') @mock.patch('sahara.context.ctx') def test_get_raw_binary(self, ctx, db_get_raw_data, i_s_get_raw_data, i_s_get_raw_data_with_context, m_s_get_file_info, s3_get_raw_jb_data): ctx.return_value = mock.Mock() job_binary = mock.Mock() job_binary.url = 'internal-db://somebinary' dispatch.get_raw_binary(job_binary) self.assertEqual(1, db_get_raw_data.call_count) job_binary.url = 'swift://container/object' proxy_configs = dict(proxy_username='proxytest', proxy_password='proxysecret', proxy_trust_id='proxytrust') dispatch.get_raw_binary(job_binary, proxy_configs) dispatch.get_raw_binary(job_binary, proxy_configs, with_context=True) dispatch.get_raw_binary(job_binary, with_context=True) self.assertEqual(1, i_s_get_raw_data.call_count) self.assertEqual(2, i_s_get_raw_data_with_context.call_count) job_binary.url = 'manila://the_share_id/the_path' remote = mock.Mock() remote.instance.node_group.cluster.shares = [] remote.instance.node_group.shares = [] dispatch.get_raw_binary(job_binary, remote=remote) self.assertEqual(1, m_s_get_file_info.call_count) job_binary.url = 's3://bucket/object.jar' dispatch.get_raw_binary(job_binary) self.assertEqual(1, s3_get_raw_jb_data.call_count) sahara-12.0.0/sahara/tests/unit/service/edp/test_s3_common.py0000664000175000017500000000757413656752032024177 0ustar zuulzuul00000000000000# Copyright (c) 2017 Massachusetts Open Cloud # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
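# NOTE: s3_common helpers under test: building a botocore client from the
# job binary "extra" credentials (the secret key is resolved through
# castellan), splitting an s3:// URL into bucket and object names, enforcing
# the job_binary_max_KB limit, and validating the URL before any download.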
import testtools from unittest import mock from sahara import exceptions as ex from sahara.service.edp import s3_common from sahara.tests.unit import base class FakeJB(object): extra = {"accesskey": "access", "secretkey": "my-secret", "endpoint": "pointy-end"} url = "s3://temp/temp" class S3CommonTestCase(base.SaharaTestCase): @mock.patch("botocore.session.Session.create_client") @mock.patch("sahara.service.castellan.utils.get_secret") def test__get_s3_client(self, cast, boto): cast.return_value = "the-actual-password" je = FakeJB().extra s3_common._get_s3_client(je) args = ('s3', None, False, je['endpoint'], je['accesskey'], 'the-actual-password') boto.called_once_with(*args) def test__get_names_from_job_binary_url(self): self.assertEqual( s3_common._get_names_from_job_binary_url("s3://buck"), ["buck"]) self.assertEqual( s3_common._get_names_from_job_binary_url("s3://buck/obj"), ["buck", "obj"]) self.assertEqual( s3_common._get_names_from_job_binary_url("s3://buck/dir/obj"), ["buck", "dir/obj"]) def test__get_raw_job_binary_data(self): jb = mock.Mock() jb.url = "s3://bucket/object" boto_conn = mock.Mock() boto_conn.head_object = mock.Mock() boto_conn.get_object = mock.Mock() self.override_config('job_binary_max_KB', 1) boto_conn.head_object.return_value = {"ContentLength": 1025} self.assertRaises(ex.DataTooBigException, s3_common._get_raw_job_binary_data, jb, boto_conn) reader = mock.Mock() reader.read = lambda: "the binary" boto_conn.get_object.return_value = {"Body": reader} boto_conn.head_object.return_value = {"ContentLength": 1024} s3_common._get_raw_job_binary_data(jb, boto_conn) self.assertEqual(s3_common._get_raw_job_binary_data(jb, boto_conn), "the binary") def _raiser(): raise ValueError reader.read = _raiser self.assertRaises(ex.S3ClientException, s3_common._get_raw_job_binary_data, jb, boto_conn) def test__validate_job_binary_url(self): jb_url = "s3://bucket/object" s3_common._validate_job_binary_url(jb_url) jb_url = "s4://bucket/object" with testtools.ExpectedException(ex.BadJobBinaryException): s3_common._validate_job_binary_url(jb_url) jb_url = "s3://bucket" with testtools.ExpectedException(ex.BadJobBinaryException): s3_common._validate_job_binary_url(jb_url) @mock.patch("sahara.service.edp.s3_common._get_raw_job_binary_data") @mock.patch("sahara.service.edp.s3_common._get_s3_client") @mock.patch("sahara.service.edp.s3_common._validate_job_binary_url") def test_get_raw_job_binary_data(self, validate_jbu, get_s3cl, get_rjbd): get_s3cl.return_value = "this would have been boto" jb = FakeJB() s3_common.get_raw_job_binary_data(jb) validate_jbu.assert_called_once_with(jb.url) get_s3cl.assert_called_once_with(jb.extra) get_rjbd.assert_called_once_with(jb, "this would have been boto") sahara-12.0.0/sahara/tests/unit/service/edp/test_job_manager.py0000664000175000017500000006306713656752032024545 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
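# NOTE: the workflow tests below cover workflow directory creation and the
# Oozie workflow XML generated for Pig, MapReduce and Java jobs, in
# particular the fs.swift.service.sahara.* properties injected for swift
# data sources with embedded credentials and with proxy-domain trusts.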
import copy from unittest import mock import xml.dom.minidom as xml import testtools from sahara import conductor as cond from sahara import exceptions as ex from sahara.plugins import base as pb from sahara.service.castellan import config as castellan from sahara.service.edp import job_manager from sahara.service.edp import job_utils from sahara.service.edp.job_utils import ds_manager from sahara.service.edp.oozie.workflow_creator import workflow_factory from sahara.swift import swift_helper as sw from sahara.swift import utils as su from sahara.tests.unit import base from sahara.tests.unit.service.edp import edp_test_utils as u from sahara.utils import cluster as c_u from sahara.utils import edp from sahara.utils import xmlutils conductor = cond.API _java_main_class = "org.apache.hadoop.examples.WordCount" _java_opts = "-Dparam1=val1 -Dparam2=val2" class TestJobManager(base.SaharaWithDbTestCase): def setUp(self): super(TestJobManager, self).setUp() self.override_config('plugins', ['fake']) pb.setup_plugins() castellan.validate_config() ds_manager.setup_data_sources() @mock.patch('uuid.uuid4') @mock.patch('sahara.utils.remote.get_remote') def test_create_workflow_dir(self, get_remote, uuid4): job = mock.Mock() job.name = "job" # This is to mock "with remote.get_remote(instance) as r" remote_instance = mock.Mock() get_remote.return_value.__enter__ = mock.Mock( return_value=remote_instance) remote_instance.execute_command = mock.Mock() remote_instance.execute_command.return_value = 0, "standard out" uuid4.return_value = "generated_uuid" job_utils.create_workflow_dir("where", "/tmp/somewhere", job, "uuid") remote_instance.execute_command.assert_called_with( "mkdir -p /tmp/somewhere/job/uuid") remote_instance.execute_command.reset_mock() job_utils.create_workflow_dir("where", "/tmp/somewhere", job) remote_instance.execute_command.assert_called_with( "mkdir -p /tmp/somewhere/job/generated_uuid") @mock.patch('sahara.conductor.API.job_binary_get') def test_build_workflow_for_job_pig(self, job_binary): job, job_exec = u.create_job_exec(edp.JOB_TYPE_PIG, configs={}) job_binary.return_value = {"name": "script.pig"} input_data = u.create_data_source('swift://ex/i') output_data = u.create_data_source('swift://ex/o') data_source_urls = {input_data.id: input_data.url, output_data.id: output_data.url} res = workflow_factory.get_workflow_xml( job, u.create_cluster(), job_exec.job_configs, input_data, output_data, 'hadoop', data_source_urls) self.assertIn(""" INPUT=swift://ex.sahara/i OUTPUT=swift://ex.sahara/o""", res) self.assertIn(""" fs.swift.service.sahara.password admin1 fs.swift.service.sahara.username admin """, res) self.assertIn("", res) # testing workflow creation with a proxy domain self.override_config('use_domain_for_proxy_users', True) self.override_config("proxy_user_domain_name", 'sahara_proxy_domain') job, job_exec = u.create_job_exec(edp.JOB_TYPE_PIG, proxy=True) res = workflow_factory.get_workflow_xml( job, u.create_cluster(), job_exec.job_configs, input_data, output_data, 'hadoop', data_source_urls) self.assertIn(""" fs.swift.service.sahara.domain.name sahara_proxy_domain fs.swift.service.sahara.password 55555555-6666-7777-8888-999999999999 fs.swift.service.sahara.trust.id 0123456789abcdef0123456789abcdef fs.swift.service.sahara.username job_00000000-1111-2222-3333-4444444444444444 """, res) @mock.patch('sahara.conductor.API.job_binary_get') def test_build_workflow_swift_configs(self, job_binary): # Test that swift configs come from either input or output data sources job, job_exec = 
u.create_job_exec(edp.JOB_TYPE_PIG, configs={}) job_binary.return_value = {"name": "script.pig"} input_data = u.create_data_source('swift://ex/i') output_data = u.create_data_source('hdfs://user/hadoop/out') data_source_urls = {input_data.id: input_data.url, output_data.id: output_data.url} res = workflow_factory.get_workflow_xml( job, u.create_cluster(), job_exec.job_configs, input_data, output_data, 'hadoop', data_source_urls) self.assertIn(""" fs.swift.service.sahara.password admin1 fs.swift.service.sahara.username admin """, res) input_data = u.create_data_source('hdfs://user/hadoop/in') output_data = u.create_data_source('swift://ex/o') data_source_urls = {input_data.id: input_data.url, output_data.id: output_data.url} res = workflow_factory.get_workflow_xml( job, u.create_cluster(), job_exec.job_configs, input_data, output_data, 'hadoop', data_source_urls) self.assertIn(""" fs.swift.service.sahara.password admin1 fs.swift.service.sahara.username admin """, res) job, job_exec = u.create_job_exec( edp.JOB_TYPE_PIG, configs={'configs': {'dummy': 'value'}}) input_data = u.create_data_source('hdfs://user/hadoop/in') output_data = u.create_data_source('hdfs://user/hadoop/out') data_source_urls = {input_data.id: input_data.url, output_data.id: output_data.url} res = workflow_factory.get_workflow_xml( job, u.create_cluster(), job_exec.job_configs, input_data, output_data, 'hadoop', data_source_urls) self.assertIn(""" dummy value """, res) def _build_workflow_common(self, job_type, streaming=False, proxy=False): if streaming: configs = {'edp.streaming.mapper': '/usr/bin/cat', 'edp.streaming.reducer': '/usr/bin/wc'} configs = {'configs': configs} else: configs = {} job, job_exec = u.create_job_exec(job_type, configs) input_data = u.create_data_source('swift://ex/i') output_data = u.create_data_source('swift://ex/o') data_source_urls = {input_data.id: input_data.url, output_data.id: output_data.url} res = workflow_factory.get_workflow_xml( job, u.create_cluster(), job_exec.job_configs, input_data, output_data, 'hadoop', data_source_urls) if streaming: self.assertIn(""" /usr/bin/cat /usr/bin/wc """, res) self.assertIn(""" mapred.output.dir swift://ex.sahara/o """, res) self.assertIn(""" mapred.input.dir swift://ex.sahara/i """, res) if not proxy: self.assertIn(""" fs.swift.service.sahara.password admin1 """, res) self.assertIn(""" fs.swift.service.sahara.username admin """, res) else: # testing workflow creation with a proxy domain self.override_config('use_domain_for_proxy_users', True) self.override_config("proxy_user_domain_name", 'sahara_proxy_domain') job, job_exec = u.create_job_exec(job_type, proxy=True) res = workflow_factory.get_workflow_xml( job, u.create_cluster(), job_exec.job_configs, input_data, output_data, 'hadoop', data_source_urls) self.assertIn(""" fs.swift.service.sahara.domain.name sahara_proxy_domain fs.swift.service.sahara.password 55555555-6666-7777-8888-999999999999 fs.swift.service.sahara.trust.id 0123456789abcdef0123456789abcdef fs.swift.service.sahara.username job_00000000-1111-2222-3333-4444444444444444 """, res) def test_build_workflow_for_job_mapreduce(self): self._build_workflow_common(edp.JOB_TYPE_MAPREDUCE) self._build_workflow_common(edp.JOB_TYPE_MAPREDUCE, streaming=True) self._build_workflow_common(edp.JOB_TYPE_MAPREDUCE, proxy=True) self._build_workflow_common(edp.JOB_TYPE_MAPREDUCE, streaming=True, proxy=True) def test_build_workflow_for_job_java(self): # If args include swift paths, user and password values # will have to be supplied via configs instead of 
being # lifted from input or output data sources configs = {sw.HADOOP_SWIFT_USERNAME: 'admin', sw.HADOOP_SWIFT_PASSWORD: 'admin1'} configs = { 'configs': configs, 'args': ['swift://ex/i', 'output_path'] } job, job_exec = u.create_job_exec(edp.JOB_TYPE_JAVA, configs) res = workflow_factory.get_workflow_xml( job, u.create_cluster(), job_exec.job_configs) self.assertIn(""" fs.swift.service.sahara.password admin1 fs.swift.service.sahara.username admin %s %s swift://ex.sahara/i output_path""" % (_java_main_class, _java_opts), res) # testing workflow creation with a proxy domain self.override_config('use_domain_for_proxy_users', True) self.override_config("proxy_user_domain_name", 'sahara_proxy_domain') configs = { 'configs': {}, 'args': ['swift://ex/i', 'output_path'] } job, job_exec = u.create_job_exec(edp.JOB_TYPE_JAVA, configs, proxy=True) res = workflow_factory.get_workflow_xml(job, u.create_cluster(), job_exec.job_configs) self.assertIn(""" fs.swift.service.sahara.domain.name sahara_proxy_domain fs.swift.service.sahara.password 55555555-6666-7777-8888-999999999999 fs.swift.service.sahara.trust.id 0123456789abcdef0123456789abcdef fs.swift.service.sahara.username job_00000000-1111-2222-3333-4444444444444444 %s %s swift://ex.sahara/i output_path""" % (_java_main_class, _java_opts), res) @mock.patch("sahara.service.edp.oozie.workflow_creator.workflow_factory." "edp.is_adapt_for_oozie_enabled") def test_build_workflow_for_job_java_with_adapter(self, edp_conf_mock): edp_conf_mock.return_value = True configs = {"configs": {"edp.java.main_class": "some_main"}} job, job_exec = u.create_job_exec(edp.JOB_TYPE_JAVA, configs) res = workflow_factory.get_workflow_xml( job, u.create_cluster(), job_exec.job_configs) self.assertIn( "org.openstack.sahara.edp.MainWrapper", res) self.assertNotIn("some_main", res) @mock.patch('sahara.conductor.API.job_binary_get') def test_build_workflow_for_job_hive(self, job_binary): job, job_exec = u.create_job_exec(edp.JOB_TYPE_HIVE, configs={}) job_binary.return_value = {"name": "script.q"} input_data = u.create_data_source('swift://ex/i') output_data = u.create_data_source('swift://ex/o') data_source_urls = {input_data.id: input_data.url, output_data.id: output_data.url} res = workflow_factory.get_workflow_xml( job, u.create_cluster(), job_exec.job_configs, input_data, output_data, 'hadoop', data_source_urls) doc = xml.parseString(res) hive = doc.getElementsByTagName('hive')[0] self.assertEqual('/user/hadoop/conf/hive-site.xml', xmlutils.get_text_from_node(hive, 'job-xml')) configuration = hive.getElementsByTagName('configuration') properties = xmlutils.get_property_dict(configuration[0]) self.assertEqual({'fs.swift.service.sahara.password': 'admin1', 'fs.swift.service.sahara.username': 'admin'}, properties) self.assertEqual('script.q', xmlutils.get_text_from_node(hive, 'script')) params = xmlutils.get_param_dict(hive) self.assertEqual({'INPUT': 'swift://ex.sahara/i', 'OUTPUT': 'swift://ex.sahara/o'}, params) # testing workflow creation with a proxy domain self.override_config('use_domain_for_proxy_users', True) self.override_config("proxy_user_domain_name", 'sahara_proxy_domain') job, job_exec = u.create_job_exec(edp.JOB_TYPE_HIVE, proxy=True) res = workflow_factory.get_workflow_xml( job, u.create_cluster(), job_exec.job_configs, input_data, output_data, 'hadoop', data_source_urls) doc = xml.parseString(res) hive = doc.getElementsByTagName('hive')[0] configuration = hive.getElementsByTagName('configuration') properties = xmlutils.get_property_dict(configuration[0]) 
self.assertEqual({ 'fs.swift.service.sahara.domain.name': 'sahara_proxy_domain', 'fs.swift.service.sahara.trust.id': '0123456789abcdef0123456789abcdef', 'fs.swift.service.sahara.password': '55555555-6666-7777-8888-999999999999', 'fs.swift.service.sahara.username': 'job_00000000-1111-2222-3333-4444444444444444'}, properties) def test_build_workflow_for_job_shell(self): configs = {"configs": {"k1": "v1"}, "params": {"p1": "v1"}, "args": ["a1", "a2"]} job, job_exec = u.create_job_exec(edp.JOB_TYPE_SHELL, configs) res = workflow_factory.get_workflow_xml( job, u.create_cluster(), job_exec.job_configs) self.assertIn("k1", res) self.assertIn("v1", res) self.assertIn("p1=v1", res) self.assertIn("a1", res) self.assertIn("a2", res) def test_update_job_dict(self): w = workflow_factory.BaseFactory() job_dict = {'configs': {'default1': 'value1', 'default2': 'value2'}, 'params': {'param1': 'value1', 'param2': 'value2'}, 'args': ['replace this', 'and this']} edp_configs = {'edp.streaming.mapper': '/usr/bin/cat', 'edp.streaming.reducer': '/usr/bin/wc'} configs = {'default2': 'changed'} configs.update(edp_configs) params = {'param1': 'changed'} exec_job_dict = {'configs': configs, 'params': params, 'args': ['replaced']} orig_exec_job_dict = copy.deepcopy(exec_job_dict) w.update_job_dict(job_dict, exec_job_dict) self.assertEqual({'edp_configs': edp_configs, 'configs': {'default1': 'value1', 'default2': 'changed'}, 'params': {'param1': 'changed', 'param2': 'value2'}, 'args': ['replaced']}, job_dict) self.assertEqual(orig_exec_job_dict, exec_job_dict) def test_inject_swift_url_suffix(self): self.assertEqual("swift://ex.sahara/o", su.inject_swift_url_suffix("swift://ex/o")) self.assertEqual("swift://ex.sahara/o", su.inject_swift_url_suffix("swift://ex.sahara/o")) self.assertEqual("hdfs://my/path", su.inject_swift_url_suffix("hdfs://my/path")) self.assertEqual(12345, su.inject_swift_url_suffix(12345)) self.assertEqual(['test'], su.inject_swift_url_suffix(['test'])) @mock.patch('sahara.conductor.API.job_execution_update') @mock.patch('sahara.conductor.API.job_execution_get') @mock.patch('sahara.service.edp.job_manager._run_job') @mock.patch('sahara.service.edp.job_manager.cancel_job') def test_run_job_handles_exceptions(self, canceljob, runjob, job_ex_get, job_ex_upd): runjob.side_effect = ex.SwiftClientException("Unauthorised") job, job_exec = u.create_job_exec(edp.JOB_TYPE_PIG) job_exec.engine_job_id = None job_ex_get.return_value = job_exec job_manager.run_job(job_exec.id) self.assertEqual(1, job_ex_get.call_count) self.assertEqual(1, job_ex_upd.call_count) new_status = job_ex_upd.call_args[0][2]["info"]["status"] self.assertEqual(edp.JOB_STATUS_FAILED, new_status) self.assertEqual(0, canceljob.call_count) @mock.patch('sahara.conductor.API.job_execution_update') @mock.patch('sahara.conductor.API.job_execution_get') @mock.patch('sahara.service.edp.job_manager._run_job') @mock.patch('sahara.service.edp.job_manager.cancel_job') def test_run_job_handles_exceptions_with_run_job(self, canceljob, runjob, job_ex_get, job_ex_upd): runjob.side_effect = ex.OozieException("run_job failed") job, job_exec = u.create_job_exec(edp.JOB_TYPE_PIG) job_exec.engine_job_id = "fake_oozie_id" job_ex_get.return_value = job_exec job_manager.run_job(job_exec.id) self.assertEqual(1, job_ex_get.call_count) self.assertEqual(1, job_ex_upd.call_count) new_status = job_ex_upd.call_args[0][2]["info"]["status"] self.assertEqual(edp.JOB_STATUS_FAILED, new_status) self.assertEqual(1, canceljob.call_count) def test_get_plugin(self): plugin = 
job_utils.get_plugin(u.create_cluster()) self.assertEqual("fake", plugin.name) @mock.patch('sahara.conductor.API.job_get') def test_job_type_supported(self, job_get): job, job_exec = u.create_job_exec(edp.JOB_TYPE_PIG) job_get.return_value = job self.assertIsNotNone(job_manager.get_job_engine(u.create_cluster(), job_exec)) job.type = "unsupported_type" self.assertIsNone(job_manager.get_job_engine(u.create_cluster(), job_exec)) @mock.patch('sahara.conductor.API.job_get') @mock.patch('sahara.conductor.API.job_execution_get') @mock.patch('sahara.conductor.API.cluster_get') def test_run_job_unsupported_type(self, cluster_get, job_exec_get, job_get): job, job_exec = u.create_job_exec("unsupported_type") job_exec_get.return_value = job_exec job_get.return_value = job cluster = u.create_cluster() cluster.status = c_u.CLUSTER_STATUS_ACTIVE cluster_get.return_value = cluster with testtools.ExpectedException(ex.EDPError): job_manager._run_job(job_exec.id) @mock.patch('sahara.conductor.API.data_source_get') def test_get_input_output_data_sources(self, ds): def _conductor_data_source_get(ctx, id): return mock.Mock(id=id, url="hdfs://obj_" + id, type='hdfs') job, job_exec = u.create_job_exec(edp.JOB_TYPE_PIG) job_exec.input_id = 's1' job_exec.output_id = 's2' ds.side_effect = _conductor_data_source_get input_source, output_source = ( job_utils.get_input_output_data_sources(job_exec, job, {})) self.assertEqual('hdfs://obj_s1', input_source.url) self.assertEqual('hdfs://obj_s2', output_source.url) def test_get_input_output_data_sources_with_null_id(self): configs = {sw.HADOOP_SWIFT_USERNAME: 'admin', sw.HADOOP_SWIFT_PASSWORD: 'admin1'} configs = { 'configs': configs, 'args': ['hdfs://ex/i', 'output_path'] } job, job_exec = u.create_job_exec(edp.JOB_TYPE_JAVA, configs) job_exec.input_id = None job_exec.output_id = None input_source, output_source = ( job_utils.get_input_output_data_sources(job_exec, job, {})) self.assertIsNone(input_source) self.assertIsNone(output_source) @mock.patch('sahara.conductor.API.job_execution_update') @mock.patch('sahara.conductor.API.job_get') @mock.patch('sahara.conductor.API.job_execution_get') @mock.patch('sahara.conductor.API.cluster_get') @mock.patch('oslo_utils.timeutils.delta_seconds') def test_failed_to_cancel_job(self, time_get, cluster_get, job_exec_get, job_get, job_execution_update_get): info = { 'status': edp.JOB_STATUS_RUNNING } job, job_exec = u.create_job_exec(edp.JOB_TYPE_PIG, None, False, info) job_exec_get.return_value = job_exec job_get.return_value = job cluster = u.create_cluster() cluster.status = c_u.CLUSTER_STATUS_ACTIVE cluster_get.return_value = cluster time_get.return_value = 10000 job_execution_update_get.return_value = job_exec with testtools.ExpectedException(ex.CancelingFailed): job_manager.cancel_job(job_exec.id) @mock.patch('sahara.conductor.API.job_execution_get') @mock.patch('sahara.conductor.API.cluster_get') @mock.patch('sahara.conductor.API.job_get') @mock.patch( 'sahara.service.edp.oozie.engine.OozieJobEngine.run_scheduled_job') def test_scheduled_edp_job_run(self, job_exec_get, cluster_get, job_get, run_scheduled_job): configs = { 'job_execution_info': { 'job_execution_type': 'scheduled', 'start': '2015-5-15T01:00Z' } } job, job_exec = u.create_job_exec(edp.JOB_TYPE_PIG, configs) job_exec_get.return_value = job_exec job_get.return_value = job cluster = u.create_cluster() cluster.status = "Active" cluster_get.return_value = cluster job_manager._run_job(job_exec.id) self.assertEqual(1, run_scheduled_job.call_count) 
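# --- Editor's illustrative sketch (not part of the original test file) ---
# The scheduled-run test above drives ``_run_job`` with a
# ``job_execution_info`` block inside ``job_configs``.  A minimal example of
# such a configuration, assuming only the keys that test already uses:
#
#     scheduled_configs = {
#         'job_execution_info': {
#             'job_execution_type': 'scheduled',
#             'start': '2015-5-15T01:00Z',
#         }
#     }
#
# As the assertion in that test checks, a job execution whose
# ``job_execution_type`` is ``scheduled`` ends up in ``run_scheduled_job``
# rather than the plain run path.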
@mock.patch('sahara.conductor.API.job_get') @mock.patch('sahara.conductor.API.job_execution_get') @mock.patch('sahara.conductor.API.cluster_get') @mock.patch('sahara.service.edp.base_engine.JobEngine.suspend_job') def test_suspend_unsuspendible_job(self, suspend_job_get, cluster_get, job_exec_get, job_get): info = { 'status': edp.JOB_STATUS_SUCCEEDED } job, job_exec = u.create_job_exec(edp.JOB_TYPE_PIG, None, False, info) job_exec_get.return_value = job_exec job_get.return_value = job cluster = u.create_cluster() cluster.status = "Active" cluster_get.return_value = cluster self.assertEqual(0, suspend_job_get.call_count) sahara-12.0.0/sahara/tests/unit/service/edp/edp_test_utils.py0000664000175000017500000000700713656752032024261 0ustar zuulzuul00000000000000# Copyright (c) 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from unittest import mock from oslo_utils import uuidutils from sahara import conductor as cond from sahara.utils import edp conductor = cond.API _java_main_class = "org.apache.hadoop.examples.WordCount" _java_opts = "-Dparam1=val1 -Dparam2=val2" def create_job_exec(type, configs=None, proxy=False, info=None): b = create_job_binary('1', type) j = _create_job('2', b, type) _cje_func = _create_job_exec_with_proxy if proxy else _create_job_exec e = _cje_func(j.id, type, configs, info) return j, e def _create_job(id, job_binary, type): job = mock.Mock() job.id = id job.type = type job.name = 'special_name' job.interface = [] if edp.compare_job_type(type, edp.JOB_TYPE_PIG, edp.JOB_TYPE_HIVE): job.mains = [job_binary] job.libs = None else: job.libs = [job_binary] job.mains = None return job def create_job_binary(id, type): binary = mock.Mock() binary.id = id binary.url = "internal-db://42" if edp.compare_job_type(type, edp.JOB_TYPE_PIG): binary.name = "script.pig" elif edp.compare_job_type(type, edp.JOB_TYPE_MAPREDUCE, edp.JOB_TYPE_JAVA): binary.name = "main.jar" else: binary.name = "script.q" return binary def create_cluster(plugin_name='fake', hadoop_version='0.1'): cluster = mock.Mock() cluster.plugin_name = plugin_name cluster.hadoop_version = hadoop_version return cluster def create_data_source(url, name=None, id=None): data_source = mock.Mock() data_source.url = url if url.startswith("swift"): data_source.type = "swift" data_source.credentials = {'user': 'admin', 'password': 'admin1'} elif url.startswith("hdfs"): data_source.type = "hdfs" if name is not None: data_source.name = name if id is not None: data_source.id = id return data_source def _create_job_exec(job_id, type, configs=None, info=None): j_exec = mock.Mock() j_exec.id = uuidutils.generate_uuid() j_exec.job_id = job_id j_exec.job_configs = configs j_exec.info = info j_exec.input_id = 4 j_exec.output_id = 5 j_exec.engine_job_id = None j_exec.data_source_urls = {} if not j_exec.job_configs: j_exec.job_configs = {} if edp.compare_job_type(type, edp.JOB_TYPE_JAVA): j_exec.job_configs['configs']['edp.java.main_class'] = _java_main_class j_exec.job_configs['configs']['edp.java.java_opts'] = _java_opts return 
j_exec def _create_job_exec_with_proxy(job_id, type, configs=None, info=None): j_exec = _create_job_exec(job_id, type, configs) j_exec.id = '00000000-1111-2222-3333-4444444444444444' j_exec.info = info j_exec.job_configs['proxy_configs'] = { 'proxy_username': 'job_' + j_exec.id, 'proxy_password': '55555555-6666-7777-8888-999999999999', 'proxy_trust_id': '0123456789abcdef0123456789abcdef' } return j_exec sahara-12.0.0/sahara/tests/unit/service/edp/spark/0000775000175000017500000000000013656752227022002 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/spark/__init__.py0000664000175000017500000000000013656752032024073 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/spark/base.py0000664000175000017500000007345013656752032023271 0ustar zuulzuul00000000000000# Copyright (c) 2014 OpenStack Foundation # Copyright (c) 2015 ISPRAS # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import os from unittest import mock import sahara.exceptions as ex from sahara.service.edp.job_utils import ds_manager from sahara.service.edp.spark import engine as se from sahara.tests.unit import base from sahara.utils import edp class TestSpark(base.SaharaTestCase): def setUp(self): super(TestSpark, self).setUp() # These variables are initialized in subclasses because its values # depend on plugin self.master_host = None self.engine_class = None self.spark_user = None self.spark_submit = None self.master = None self.deploy_mode = None self.master_port = 7077 self.master_inst = "6789" self.spark_pid = "12345" self.spark_home = "/opt/spark" self.workflow_dir = "/wfdir" self.driver_cp = "/usr/lib/hadoop-mapreduce/hadoop-openstack.jar:" ds_manager.setup_data_sources() def test_get_pid_and_inst_id(self): '''Test parsing of job ids Test that job ids of the form pid@instance are split into pid and instance ids by eng._get_pid_and_inst_id() but anything else returns empty strings ''' eng = se.SparkJobEngine(None) for job_id in [None, "", "@", "something", "pid@", "@instance"]: pid, inst_id = eng._get_pid_and_inst_id(job_id) self.assertEqual(("", ""), (pid, inst_id)) pid, inst_id = eng._get_pid_and_inst_id("pid@instance") self.assertEqual(("pid", "instance"), (pid, inst_id)) @mock.patch('sahara.utils.cluster.get_instances') def test_get_instance_if_running(self, get_instances): '''Test retrieval of pid and instance object for running job If the job id is valid and the job status is non-terminated, _get_instance_if_running() should retrieve the instance based on the inst_id and return the pid and instance. If the job is invalid or the job is terminated, it should return None, None. 
If get_instances() throws an exception or returns an empty list, the instance returned should be None (pid might still be set) ''' get_instances.return_value = ["instance"] job_exec = mock.Mock() eng = se.SparkJobEngine("cluster") job_exec.engine_job_id = "invalid id" self.assertEqual((None, None), eng._get_instance_if_running(job_exec)) job_exec.engine_job_id = "pid@inst_id" for state in edp.JOB_STATUSES_TERMINATED: job_exec.info = {'status': state} self.assertEqual((None, None), eng._get_instance_if_running(job_exec)) job_exec.info = {'status': edp.JOB_STATUS_RUNNING} self.assertEqual(("pid", "instance"), eng._get_instance_if_running(job_exec)) get_instances.assert_called_with("cluster", ["inst_id"]) # Pretend get_instances returns nothing get_instances.return_value = [] pid, instance = eng._get_instance_if_running(job_exec) self.assertIsNone(instance) # Pretend get_instances throws an exception get_instances.side_effect = Exception("some failure") pid, instance = eng._get_instance_if_running(job_exec) self.assertIsNone(instance) def test_get_result_file(self): remote = mock.Mock() remote.execute_command.return_value = 999, "value" job_exec = mock.Mock() job_exec.extra = {"spark-path": "/tmp/spark-edp/Job/123"} eng = se.SparkJobEngine("cluster") ret, stdout = eng._get_result_file(remote, job_exec) remote.execute_command.assert_called_with( "cat /tmp/spark-edp/Job/123/result", raise_when_error=False) self.assertEqual((ret, stdout), remote.execute_command.return_value) def test_check_pid(self): remote = mock.Mock() remote.execute_command.return_value = 999, "" eng = se.SparkJobEngine("cluster") ret = eng._check_pid(remote, "pid") remote.execute_command.assert_called_with("ps hp pid", raise_when_error=False) self.assertEqual(999, ret) @mock.patch.object(se.SparkJobEngine, '_get_result_file', autospec=True) @mock.patch.object(se.SparkJobEngine, '_check_pid', autospec=True) def test_get_job_status_from_remote(self, _check_pid, _get_result_file): '''Test retrieval of job status from remote instance If the process is present, status is RUNNING If the process is not present, status depends on the result file If the result file is missing, status is DONEWITHERROR ''' eng = se.SparkJobEngine("cluster") job_exec = mock.Mock() remote = mock.Mock() # Pretend process is running _check_pid.return_value = 0 status = eng._get_job_status_from_remote(remote, "pid", job_exec) _check_pid.assert_called_with(eng, remote, "pid") self.assertEqual({"status": edp.JOB_STATUS_RUNNING}, status) # Pretend process ended and result file contains 0 (success) _check_pid.return_value = 1 _get_result_file.return_value = 0, "0" status = eng._get_job_status_from_remote(remote, "pid", job_exec) self.assertEqual({"status": edp.JOB_STATUS_SUCCEEDED}, status) # Pretend process ended and result file contains 1 (success) _get_result_file.return_value = 0, "1" status = eng._get_job_status_from_remote(remote, "pid", job_exec) self.assertEqual({"status": edp.JOB_STATUS_DONEWITHERROR}, status) # Pretend process ended and result file contains 130 (killed) _get_result_file.return_value = 0, "130" status = eng._get_job_status_from_remote(remote, "pid", job_exec) self.assertEqual({"status": edp.JOB_STATUS_KILLED}, status) # Pretend process ended and result file contains -2 (killed) _get_result_file.return_value = 0, "-2" status = eng._get_job_status_from_remote(remote, "pid", job_exec) self.assertEqual({"status": edp.JOB_STATUS_KILLED}, status) # Pretend process ended and result file is missing _get_result_file.return_value = 1, "" status 
= eng._get_job_status_from_remote(remote, "pid", job_exec) self.assertEqual({"status": edp.JOB_STATUS_DONEWITHERROR}, status) @mock.patch.object(se.SparkJobEngine, '_get_job_status_from_remote', autospec=True) @mock.patch.object(se.SparkJobEngine, '_get_instance_if_running', autospec=True) @mock.patch('sahara.utils.remote.get_remote') def test_get_job_status(self, get_remote, _get_instance_if_running, _get_job_status_from_remote): # This is to mock "with remote.get_remote(instance) as r" remote_instance = mock.Mock() get_remote.return_value.__enter__ = mock.Mock( return_value=remote_instance) # Pretend instance is not returned _get_instance_if_running.return_value = "pid", None job_exec = mock.Mock() eng = se.SparkJobEngine("cluster") status = eng.get_job_status(job_exec) self.assertIsNone(status) # Pretend we have an instance _get_instance_if_running.return_value = "pid", "instance" _get_job_status_from_remote.return_value = {"status": edp.JOB_STATUS_RUNNING} status = eng.get_job_status(job_exec) _get_job_status_from_remote.assert_called_with(eng, remote_instance, "pid", job_exec) self.assertEqual({"status": edp.JOB_STATUS_RUNNING}, status) @mock.patch.object(se.SparkJobEngine, '_get_instance_if_running', autospec=True, return_value=(None, None)) @mock.patch('sahara.utils.remote.get_remote') def test_cancel_job_null_or_done(self, get_remote, _get_instance_if_running): '''Test cancel_job() when instance is None Test that cancel_job() returns None and does not try to retrieve a remote instance if _get_instance_if_running() returns None ''' eng = se.SparkJobEngine("cluster") job_exec = mock.Mock() self.assertIsNone(eng.cancel_job(job_exec)) self.assertTrue(_get_instance_if_running.called) self.assertFalse(get_remote.called) @mock.patch.object(se.SparkJobEngine, '_get_job_status_from_remote', autospec=True, return_value={"status": edp.JOB_STATUS_KILLED}) @mock.patch.object(se.SparkJobEngine, '_get_instance_if_running', autospec=True, return_value=("pid", "instance")) @mock.patch('sahara.utils.remote.get_remote') def test_cancel_job(self, get_remote, _get_instance_if_running, _get_job_status_from_remote): '''Test cancel_job() with a valid instance For a valid instance, test that cancel_job: * retrieves the remote instance * executes the proper kill command * retrieves the job status (because the remote command is successful) ''' # This is to mock "with remote.get_remote(instance) as r" in cancel_job # and to mock r.execute_command to return success remote_instance = mock.Mock() get_remote.return_value.__enter__ = mock.Mock( return_value=remote_instance) remote_instance.execute_command.return_value = (0, "standard out") eng = se.SparkJobEngine("cluster") job_exec = mock.Mock() status = eng.cancel_job(job_exec) # check that remote.get_remote was called with the result of # eng._get_instance_if_running() get_remote.assert_called_with("instance") # check that execute_command was called with the proper arguments # ("pid" was passed in) remote_instance.execute_command.assert_called_with( "kill -SIGINT pid", raise_when_error=False) # check that the job status was retrieved since the command succeeded _get_job_status_from_remote.assert_called_with(eng, remote_instance, "pid", job_exec) self.assertEqual({"status": edp.JOB_STATUS_KILLED}, status) @mock.patch.object(se.SparkJobEngine, '_get_job_status_from_remote', autospec=True) @mock.patch.object(se.SparkJobEngine, '_get_instance_if_running', autospec=True, return_value=("pid", "instance")) @mock.patch('sahara.utils.remote.get_remote') def 
test_cancel_job_failed(self, get_remote, _get_instance_if_running, _get_job_status_from_remote): '''Test cancel_job() when remote command fails For a valid instance and a failed kill command, test that cancel_job: * retrieves the remote instance * executes the proper kill command * does not retrieve the job status (because the remote command failed) ''' # This is to mock "with remote.get_remote(instance) as r" # and to mock r.execute_command to return failure remote_instance = mock.Mock() get_remote.return_value.__enter__ = mock.Mock( return_value=remote_instance) remote_instance.execute_command.return_value = (-1, "some error") eng = se.SparkJobEngine("cluster") job_exec = mock.Mock() status = eng.cancel_job(job_exec) # check that remote.get_remote was called with the result of # eng._get_instance_if_running get_remote.assert_called_with("instance") # check that execute_command was called with the proper arguments # ("pid" was passed in) remote_instance.execute_command.assert_called_with( "kill -SIGINT pid", raise_when_error=False) # check that the job status was not retrieved since the command failed self.assertEqual(0, _get_job_status_from_remote.called) # check that we have nothing new to report ... self.assertIsNone(status) @mock.patch('sahara.service.edp.spark.engine.jb_manager') @mock.patch('sahara.utils.remote.get_remote') def test_upload_job_files(self, get_remote, jb_manager): main_names = ["main1", "main2", "main3"] lib_names = ["lib1", "lib2", "lib3"] def make_data_objects(*args): objs = [] for name in args: m = mock.Mock() m.name = name objs.append(m) return objs job = mock.Mock() job.name = "job" job.mains = make_data_objects(*main_names) job.libs = make_data_objects(*lib_names) # This is to mock "with remote.get_remote(instance) as r" remote_instance = mock.Mock() remote_instance.instance.node_group.cluster.shares = [] remote_instance.instance.node_group.shares = [] get_remote.return_value.__enter__ = mock.Mock( return_value=remote_instance) JOB_BINARIES = mock.Mock() mock_jb = mock.Mock() jb_manager.JOB_BINARIES = JOB_BINARIES JOB_BINARIES.get_job_binary_by_url = mock.Mock(return_value=mock_jb) mock_jb.copy_binary_to_cluster = mock.Mock(side_effect=[ '/somedir/main1', '/somedir/main2', '/somedir/main3', '/somedir/lib1', '/somedir/lib2', '/somedir/lib3']) eng = se.SparkJobEngine("cluster") eng._prepare_job_binaries = mock.Mock() paths, builtins = eng._upload_job_files("where", "/somedir", job, {}) self.assertEqual(["/somedir/" + n for n in main_names + lib_names], paths) def _make_master_instance(self, return_code=0): master = mock.Mock() master.execute_command.return_value = (return_code, self.spark_pid) master.get_python_version.return_value = 'python' master.hostname.return_value = self.master_host master.id = self.master_inst return master def _config_values(self, *key): return {("Spark", "Master port", "cluster"): self.master_port, ("Spark", "Spark home", "cluster"): self.spark_home, ("Spark", "Executor extra classpath", "cluster"): self.driver_cp}[key] @mock.patch('sahara.conductor.API.job_execution_update') @mock.patch('sahara.conductor.API.job_execution_get') @mock.patch('sahara.utils.remote.get_remote') @mock.patch('sahara.plugins.utils.get_config_value_or_default') @mock.patch('sahara.service.edp.job_utils.create_workflow_dir') @mock.patch('sahara.plugins.utils.get_instance') @mock.patch('sahara.conductor.API.job_get') @mock.patch('sahara.context.ctx', return_value="ctx") def _setup_run_job(self, master_instance, job_configs, files, ctx, job_get, get_instance, 
create_workflow_dir, get_config_value, get_remote, job_exec_get, job_exec_update): def _upload_job_files(where, job_dir, job, libs_subdir=True, job_configs=None): paths = [os.path.join(self.workflow_dir, f) for f in files['jars']] bltns = files.get('bltns', []) bltns = [os.path.join(self.workflow_dir, f) for f in bltns] return paths, bltns job = mock.Mock() job.name = "MyJob" job_get.return_value = job job_exec = mock.Mock() job_exec.job_configs = job_configs get_config_value.side_effect = self._config_values create_workflow_dir.return_value = self.workflow_dir # This is to mock "with remote.get_remote(master) as r" in run_job get_remote.return_value.__enter__ = mock.Mock( return_value=master_instance) get_instance.return_value = master_instance eng = self.engine_class("cluster") eng._upload_job_files = mock.Mock() eng._upload_job_files.side_effect = _upload_job_files status = eng.run_job(job_exec) # Check that we launch on the master node get_instance.assert_called_with("cluster", self.master_host) return status def test_run_job_raise(self): job_configs = { 'configs': {"edp.java.main_class": "org.me.myclass"}, 'args': ['input_arg', 'output_arg'] } files = {'jars': ["app.jar", "jar1.jar", "jar2.jar"]} # The object representing the spark master node # The spark-submit command will be run on this instance master_instance = self._make_master_instance(return_code=1) # If execute_command returns an error we should get a raise self.assertRaises(ex.EDPError, self._setup_run_job, master_instance, job_configs, files) def test_run_job_extra_jars_args(self): job_configs = { 'configs': {"edp.java.main_class": "org.me.myclass"}, 'args': ['input_arg', 'output_arg'] } files = {'jars': ["app.jar", "jar1.jar", "jar2.jar"]} # The object representing the spark master node # The spark-submit command will be run on this instance master_instance = self._make_master_instance() status = self._setup_run_job(master_instance, job_configs, files) # Check the command master_instance.execute_command.assert_called_with( 'cd %(workflow_dir)s; ' './launch_command %(spark_user)s%(spark_submit)s ' '--class org.me.myclass --jars jar1.jar,jar2.jar ' '--master %(master)s ' '--deploy-mode %(deploy_mode)s ' 'app.jar input_arg output_arg ' '> /dev/null 2>&1 & echo $!' % {"workflow_dir": self.workflow_dir, "spark_user": self.spark_user, "spark_submit": self.spark_submit, "master": self.master, "deploy_mode": self.deploy_mode}) # Check result here self.assertEqual(("%s@%s" % (self.spark_pid, self.master_inst), edp.JOB_STATUS_RUNNING, {"spark-path": self.workflow_dir}), status) def test_run_job_args(self): job_configs = { 'configs': {"edp.java.main_class": "org.me.myclass"}, 'args': ['input_arg', 'output_arg'] } files = {'jars': ["app.jar"]} # The object representing the spark master node # The spark-submit command will be run on this instance master_instance = self._make_master_instance() status = self._setup_run_job(master_instance, job_configs, files) # Check the command master_instance.execute_command.assert_called_with( 'cd %(workflow_dir)s; ' './launch_command %(spark_user)s%(spark_submit)s ' '--class org.me.myclass ' '--master %(master)s ' '--deploy-mode %(deploy_mode)s ' 'app.jar input_arg output_arg ' '> /dev/null 2>&1 & echo $!' 
% {"workflow_dir": self.workflow_dir, "spark_user": self.spark_user, "spark_submit": self.spark_submit, "master": self.master, "deploy_mode": self.deploy_mode}) # Check result here self.assertEqual(("%s@%s" % (self.spark_pid, self.master_inst), edp.JOB_STATUS_RUNNING, {"spark-path": self.workflow_dir}), status) def test_run_job(self): job_configs = { 'configs': {"edp.java.main_class": "org.me.myclass"}, } files = {'jars': ["app.jar"]} # The object representing the spark master node # The spark-submit command will be run on this instance master_instance = self._make_master_instance() status = self._setup_run_job(master_instance, job_configs, files) # Check the command master_instance.execute_command.assert_called_with( 'cd %(workflow_dir)s; ' './launch_command %(spark_user)s%(spark_submit)s ' '--class org.me.myclass ' '--master %(master)s ' '--deploy-mode %(deploy_mode)s ' 'app.jar ' '> /dev/null 2>&1 & echo $!' % {"workflow_dir": self.workflow_dir, "spark_user": self.spark_user, "spark_submit": self.spark_submit, "master": self.master, "deploy_mode": self.deploy_mode}) # Check result here self.assertEqual(("%s@%s" % (self.spark_pid, self.master_inst), edp.JOB_STATUS_RUNNING, {"spark-path": self.workflow_dir}), status) def test_run_job_wrapper_extra_jars_args(self): job_configs = { 'configs': {"edp.java.main_class": "org.me.myclass", "edp.spark.adapt_for_swift": True}, 'args': ['input_arg', 'output_arg'] } files = {'jars': ["app.jar", "jar1.jar", "jar2.jar"], 'bltns': ["wrapper.jar"]} # The object representing the spark master node # The spark-submit command will be run on this instance master_instance = self._make_master_instance() status = self._setup_run_job(master_instance, job_configs, files) # Check the command master_instance.execute_command.assert_called_with( 'cd %(workflow_dir)s; ' './launch_command %(spark_user)s%(spark_submit)s ' '--driver-class-path %(driver_cp)s ' '--files spark.xml ' '--class org.openstack.sahara.edp.SparkWrapper ' '--jars wrapper.jar,jar1.jar,jar2.jar ' '--master %(master)s ' '--deploy-mode %(deploy_mode)s ' 'app.jar spark.xml org.me.myclass input_arg output_arg ' '> /dev/null 2>&1 & echo $!' % {"workflow_dir": self.workflow_dir, "spark_user": self.spark_user, "spark_submit": self.spark_submit, "driver_cp": self.driver_cp, "master": self.master, "deploy_mode": self.deploy_mode}) # Check result here self.assertEqual(("%s@%s" % (self.spark_pid, self.master_inst), edp.JOB_STATUS_RUNNING, {"spark-path": self.workflow_dir}), status) def test_run_job_wrapper_args(self): job_configs = { 'configs': {"edp.java.main_class": "org.me.myclass", "edp.spark.adapt_for_swift": True}, 'args': ['input_arg', 'output_arg'] } files = {'jars': ["app.jar"], 'bltns': ["wrapper.jar"]} # The object representing the spark master node # The spark-submit command will be run on this instance master_instance = self._make_master_instance() status = self._setup_run_job(master_instance, job_configs, files) # Check the command master_instance.execute_command.assert_called_with( 'cd %(workflow_dir)s; ' './launch_command %(spark_user)s%(spark_submit)s ' '--driver-class-path %(driver_cp)s ' '--files spark.xml ' '--class org.openstack.sahara.edp.SparkWrapper ' '--jars wrapper.jar ' '--master %(master)s ' '--deploy-mode %(deploy_mode)s ' 'app.jar spark.xml org.me.myclass input_arg output_arg ' '> /dev/null 2>&1 & echo $!' 
% {"workflow_dir": self.workflow_dir, "spark_user": self.spark_user, "spark_submit": self.spark_submit, "driver_cp": self.driver_cp, "master": self.master, "deploy_mode": self.deploy_mode}) # Check result here self.assertEqual(("%s@%s" % (self.spark_pid, self.master_inst), edp.JOB_STATUS_RUNNING, {"spark-path": self.workflow_dir}), status) def test_run_job_wrapper(self): job_configs = { 'configs': {"edp.java.main_class": "org.me.myclass", "edp.spark.adapt_for_swift": True} } files = {'jars': ["app.jar"], 'bltns': ["wrapper.jar"]} # The object representing the spark master node # The spark-submit command will be run on this instance master_instance = self._make_master_instance() status = self._setup_run_job(master_instance, job_configs, files) # Check the command master_instance.execute_command.assert_called_with( 'cd %(workflow_dir)s; ' './launch_command %(spark_user)s%(spark_submit)s ' '--driver-class-path %(driver_cp)s ' '--files spark.xml ' '--class org.openstack.sahara.edp.SparkWrapper ' '--jars wrapper.jar ' '--master %(master)s ' '--deploy-mode %(deploy_mode)s ' 'app.jar spark.xml org.me.myclass ' '> /dev/null 2>&1 & echo $!' % {"workflow_dir": self.workflow_dir, "spark_user": self.spark_user, "spark_submit": self.spark_submit, "driver_cp": self.driver_cp, "master": self.master, "deploy_mode": self.deploy_mode}) # Check result here self.assertEqual(("%s@%s" % (self.spark_pid, self.master_inst), edp.JOB_STATUS_RUNNING, {"spark-path": self.workflow_dir}), status) @mock.patch('sahara.service.edp.job_utils.prepare_cluster_for_ds') @mock.patch('sahara.service.edp.job_utils.resolve_data_source_references') def test_external_hdfs_config(self, resolver, prepare): job_configs = { 'configs': {"edp.java.main_class": "org.me.myclass"}, } files = {'jars': ["app.jar"]} data_source = mock.Mock() data_source.type = 'hdfs' data_source.id = 'id' resolver.return_value = ([data_source], job_configs) master_instance = self._make_master_instance() self._setup_run_job(master_instance, job_configs, files) prepare.assert_called_once() @mock.patch('sahara.service.edp.job_utils.prepare_cluster_for_ds') @mock.patch('sahara.service.edp.job_utils.resolve_data_source_references') def test_overridden_driver_classpath(self, resolver, prepare): job_configs = { 'configs': {"edp.java.main_class": "org.me.myclass", 'edp.spark.driver.classpath': "my-classpath.jar"}, } files = {'jars': ["app.jar"]} data_source = mock.Mock() data_source.type = 'hdfs' data_source.id = 'id' resolver.return_value = ([data_source], job_configs) master_instance = self._make_master_instance() self._setup_run_job(master_instance, job_configs, files) # check that overridden value was applied master_instance.execute_command.assert_called_with( 'cd %(workflow_dir)s; ' './launch_command %(spark_user)s%(spark_submit)s ' '--driver-class-path my-classpath.jar ' '--class org.me.myclass ' '--master %(master)s ' '--deploy-mode %(deploy_mode)s ' 'app.jar ' '> /dev/null 2>&1 & echo $!' % {"workflow_dir": self.workflow_dir, "spark_user": self.spark_user, "spark_submit": self.spark_submit, "master": self.master, "deploy_mode": self.deploy_mode}) sahara-12.0.0/sahara/tests/unit/service/edp/test_job_possible_configs.py0000664000175000017500000000341713656752032026454 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import testtools from sahara.service.edp.oozie.workflow_creator import workflow_factory as w_f from sahara.utils import edp class TestJobPossibleConfigs(testtools.TestCase): def test_possible_configs(self): res = w_f.get_possible_job_config(edp.JOB_TYPE_MAPREDUCE) sample_config_property = { 'name': 'mapreduce.jobtracker.expire.trackers.interval', 'value': '600000', 'description': "Expert: The time-interval, in miliseconds, after " "whicha tasktracker is declared 'lost' if it " "doesn't send heartbeats." } self.assertIn(sample_config_property, res['job_config']["configs"]) res = w_f.get_possible_job_config(edp.JOB_TYPE_HIVE) sample_config_property = { "description": "The serde used by FetchTask to serialize the " "fetch output.", "name": "hive.fetch.output.serde", "value": "org.apache.hadoop.hive.serde2.DelimitedJSONSerDe" } self.assertIn(sample_config_property, res["job_config"]['configs']) res = w_f.get_possible_job_config("impossible_config") self.assertIsNone(res) sahara-12.0.0/sahara/tests/unit/service/edp/test_json_api_examples.py0000664000175000017500000000602113656752032025764 0ustar zuulzuul00000000000000# Copyright (c) 2014 Red Hat Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
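# --- Editor's illustrative sketch (refers to test_job_possible_configs
# above; not part of the original files) ---
# TestJobPossibleConfigs relies on ``get_possible_job_config`` returning a
# structure containing at least ``{'job_config': {'configs': [...]}}`` for
# supported job types and ``None`` for unknown ones.  A minimal usage sketch,
# assuming the same import alias used in that module:
#
#     from sahara.service.edp.oozie.workflow_creator import (
#         workflow_factory as w_f)
#     from sahara.utils import edp
#
#     possible = w_f.get_possible_job_config(edp.JOB_TYPE_PIG)
#     if possible is not None:
#         names = [cfg['name'] for cfg in possible['job_config']['configs']]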
import itertools import os from oslo_serialization import jsonutils as json from oslo_utils import uuidutils import testtools from sahara.service.validations.edp import data_source_schema from sahara.service.validations.edp import job_binary_schema from sahara.service.validations.edp import job_execution_schema from sahara.service.validations.edp import job_schema from sahara.utils import api_validator class TestJSONApiExamplesV11(testtools.TestCase): EXAMPLES_PATH = 'etc/edp-examples/json-api-examples/v1.1/%s' def test_data_sources(self): schema = data_source_schema.DATA_SOURCE_SCHEMA path = self.EXAMPLES_PATH % 'data-sources' formatter = self._formatter() self._test(schema, path, formatter) def test_job_binaries(self): schema = job_binary_schema.JOB_BINARY_SCHEMA path = self.EXAMPLES_PATH % 'job-binaries' formatter = self._formatter("job_binary_internal_id", "script_binary_internal_id", "text_binary_internal_id") self._test(schema, path, formatter) def test_jobs(self): schema = job_schema.JOB_SCHEMA path = self.EXAMPLES_PATH % 'jobs' formatter = self._formatter("job_binary_id", "udf_binary_id", "script_binary_id", "text_binary_id") self._test(schema, path, formatter) def test_job_executions(self): schema = job_execution_schema.JOB_EXEC_SCHEMA path = self.EXAMPLES_PATH % 'job-executions' formatter = self._formatter("cluster_id", "input_source_id", "output_source_id") self._test(schema, path, formatter) def _test(self, schema, path, formatter): validator = api_validator.ApiValidator(schema) for filename in self._files_in_path(path): file_path = '/'.join((path, filename)) with open(file_path, 'r') as payload: payload = payload.read() % formatter payload = json.loads(payload) validator.validate(payload) def _files_in_path(self, path): all_files = (files for (path, directories, files) in os.walk(path)) return itertools.chain(*all_files) def _formatter(self, *variables): return {variable: uuidutils.generate_uuid() for variable in variables} sahara-12.0.0/sahara/tests/unit/service/edp/oozie/0000775000175000017500000000000013656752227022007 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/oozie/__init__.py0000664000175000017500000000000013656752032024100 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/oozie/test_oozie.py0000664000175000017500000002642413656752032024547 0ustar zuulzuul00000000000000# Copyright (c) 2014 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
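# --- Editor's illustrative sketch (refers to test_json_api_examples above;
# not part of the original files) ---
# The JSON API example tests substitute generated UUIDs into ``%(name)s``
# placeholders before validating each payload against its schema; the core
# pattern, using only calls already present in that module, is roughly:
#
#     from oslo_serialization import jsonutils as json
#     from oslo_utils import uuidutils
#
#     template = '{"cluster_id": "%(cluster_id)s"}'
#     payload = json.loads(
#         template % {'cluster_id': uuidutils.generate_uuid()})
#     validator.validate(payload)  # validator built from the matching schema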
from unittest import mock from sahara import context as ctx from sahara.plugins import base as pb from sahara.service.edp.job_utils import ds_manager from sahara.service.edp.oozie import engine as oe from sahara.service.edp.oozie.engine import jb_manager from sahara.tests.unit import base from sahara.tests.unit.service.edp import edp_test_utils as u from sahara.utils import edp class TestOozieEngine(base.SaharaTestCase): def setUp(self): super(TestOozieEngine, self).setUp() self.override_config('plugins', ['fake']) pb.setup_plugins() jb_manager.setup_job_binaries() ds_manager.setup_data_sources() def test_get_job_status(self): oje = FakeOozieJobEngine(u.create_cluster()) client_class = mock.MagicMock() client_class.add_job = mock.MagicMock(return_value=1) client_class.get_job_info = mock.MagicMock( return_value={'status': 'PENDING'}) oje.get_client = mock.MagicMock(return_value=client_class) _, job_exec = u.create_job_exec(edp.JOB_TYPE_PIG) self.assertIsNone(oje.get_job_status(job_exec)) job_exec.engine_job_id = 1 self.assertEqual({'status': 'PENDING'}, oje.get_job_status(job_exec)) def test_add_postfix(self): oje = FakeOozieJobEngine(u.create_cluster()) self.override_config("job_workflow_postfix", 'caba') res = oje._add_postfix('aba') self.assertEqual("aba/caba/", res) self.override_config("job_workflow_postfix", '') res = oje._add_postfix('aba') self.assertEqual("aba/", res) def test_get_oozie_job_params(self): oje = FakeOozieJobEngine(u.create_cluster()) oozie_params = {'oozie.libpath': '/mylibpath', 'oozie.wf.application.path': '/wrong'} scheduled_params = {'start': '2015-06-10T06:05Z', 'end': '2015-06-10T06:50Z', 'frequency': '10'} job_dir = '/job_dir' job_execution_type = 'workflow' job_params = oje._get_oozie_job_params('hadoop', '/tmp', oozie_params, True, scheduled_params, job_dir, job_execution_type) self.assertEqual('http://localhost:50030', job_params["jobTracker"]) self.assertEqual('hdfs://localhost:8020', job_params["nameNode"]) self.assertEqual('hadoop', job_params["user.name"]) self.assertEqual('hdfs://localhost:8020/tmp', job_params['oozie.wf.application.path']) self.assertEqual("/mylibpath,hdfs://localhost:8020/user/" "sahara-hbase-lib", job_params['oozie.libpath']) # Make sure this doesn't raise an exception job_params = oje._get_oozie_job_params('hadoop', '/tmp', {}, True) self.assertEqual("hdfs://localhost:8020/user/" "sahara-hbase-lib", job_params['oozie.libpath']) job_execution_type = 'scheduled' job_params = oje._get_oozie_job_params('hadoop', '/tmp', oozie_params, True, scheduled_params, job_dir, job_execution_type) for i in ["start", "end", "frequency"]: self.assertEqual(scheduled_params[i], job_params[i]) @mock.patch('sahara.utils.remote.get_remote') @mock.patch('sahara.utils.ssh_remote.InstanceInteropHelper') @mock.patch('sahara.conductor.API.job_binary_internal_get_raw_data') def test_hdfs_upload_job_files(self, conductor_raw_data, remote_class, remote): remote_class.__exit__.return_value = 'closed' remote.return_value = remote_class conductor_raw_data.return_value = 'ok' oje = FakeOozieJobEngine(u.create_cluster()) oje._prepare_job_binaries = mock.Mock() job, _ = u.create_job_exec(edp.JOB_TYPE_PIG) res = oje._upload_job_files_to_hdfs(mock.Mock(), 'job_prefix', job, {}) self.assertEqual(['/tmp/script.pig'], res) job, _ = u.create_job_exec(edp.JOB_TYPE_MAPREDUCE) res = oje._upload_job_files_to_hdfs(mock.Mock(), 'job_prefix', job, {}) self.assertEqual(['/tmp/main.jar'], res) @mock.patch('sahara.utils.remote.get_remote') def test_upload_workflow_file(self, 
remote_get): oje = FakeOozieJobEngine(u.create_cluster()) remote_class = mock.MagicMock() remote_class.__exit__.return_value = 'closed' remote_get.return_value = remote_class res = oje._upload_workflow_file(remote_get, "test", "hadoop.xml", 'hdfs') self.assertEqual("test/workflow.xml", res) @mock.patch('sahara.utils.remote.get_remote') def test_upload_coordinator_file(self, remote_get): oje = FakeOozieJobEngine(u.create_cluster()) remote_class = mock.MagicMock() remote_class.__exit__.return_value = 'closed' remote_get.return_value = remote_class res = oje._upload_coordinator_file(remote_get, "test", "hadoop.xml", 'hdfs') self.assertEqual("test/coordinator.xml", res) @mock.patch('sahara.utils.remote.get_remote') def test_hdfs_create_workflow_dir(self, remote): remote_class = mock.MagicMock() remote_class.__exit__.return_value = 'closed' remote.return_value = remote_class oje = FakeOozieJobEngine(u.create_cluster()) job, _ = u.create_job_exec(edp.JOB_TYPE_PIG) res = oje._create_hdfs_workflow_dir(mock.Mock(), job) self.assertIn('/user/hadoop/special_name/', res) def test__resolve_external_hdfs_urls(self): oje = FakeOozieJobEngine(u.create_cluster()) job_configs = { "configs": { "mapred.map.tasks": "1", "hdfs1": "hdfs://localhost/hdfs1"}, "args": ["hdfs://localhost/hdfs3", "10"], "params": { "param1": "10", "param2": "hdfs://localhost/hdfs2" } } expected_external_hdfs_urls = ['hdfs://localhost/hdfs1', 'hdfs://localhost/hdfs2', 'hdfs://localhost/hdfs3'] external_hdfs_urls = oje._resolve_external_hdfs_urls(job_configs) self.assertEqual(expected_external_hdfs_urls, external_hdfs_urls) @mock.patch('sahara.service.edp.oozie.oozie.OozieClient.get_job_info') @mock.patch('sahara.service.edp.oozie.oozie.OozieClient.kill_job') def test_cancel_job(self, kill_get, info_get): info_get.return_value = {} oje = FakeOozieJobEngine(u.create_cluster()) _, job_exec = u.create_job_exec(edp.JOB_TYPE_PIG) # test cancel job without engine_job_id job_exec.engine_job_id = None oje.cancel_job(job_exec) self.assertEqual(0, kill_get.call_count) # test cancel job with engine_job_id job_exec.engine_job_id = 123 oje.cancel_job(job_exec) self.assertEqual(1, kill_get.call_count) @mock.patch('sahara.service.edp.job_utils.prepare_cluster_for_ds') @mock.patch('sahara.service.edp.job_utils._get_data_source_urls') @mock.patch('sahara.service.edp.oozie.workflow_creator.' 
'workflow_factory.get_workflow_xml') @mock.patch('sahara.utils.remote.get_remote') @mock.patch('sahara.conductor.API.job_execution_update') @mock.patch('sahara.conductor.API.data_source_get') @mock.patch('sahara.conductor.API.job_get') def test_prepare_run_job(self, job, data_source, update, remote, wf_factory, get_ds_urls, prepare_cluster): wf_factory.return_value = mock.MagicMock() remote_class = mock.MagicMock() remote_class.__exit__.return_value = 'closed' remote.return_value = remote_class job_class = mock.MagicMock() job_class.name = "myJob" job.return_value = job_class source = mock.MagicMock() source.url = "localhost" get_ds_urls.return_value = ('url', 'url') data_source.return_value = source oje = FakeOozieJobEngine(u.create_cluster()) _, job_exec = u.create_job_exec(edp.JOB_TYPE_PIG) update.return_value = job_exec res = oje._prepare_run_job(job_exec) self.assertEqual(ctx.ctx(), res['context']) self.assertEqual('hadoop', res['hdfs_user']) self.assertEqual(job_exec, res['job_execution']) self.assertEqual({}, res['oozie_params']) @mock.patch('sahara.service.edp.job_utils.prepare_cluster_for_ds') @mock.patch('sahara.service.edp.job_utils._get_data_source_urls') @mock.patch('sahara.service.edp.oozie.workflow_creator.' 'workflow_factory.get_workflow_xml') @mock.patch('sahara.utils.remote.get_remote') @mock.patch('sahara.conductor.API.job_execution_update') @mock.patch('sahara.conductor.API.data_source_get') @mock.patch('sahara.conductor.API.job_get') @mock.patch('sahara.conductor.API.job_execution_get') def test_run_job(self, exec_get, job, data_source, update, remote, wf_factory, get_ds_urls, prepare_cluster): wf_factory.return_value = mock.MagicMock() remote_class = mock.MagicMock() remote_class.__exit__.return_value = 'closed' remote.return_value = remote_class job_class = mock.MagicMock() job.return_value = job_class job.name = "myJob" source = mock.MagicMock() source.url = "localhost" data_source.return_value = source get_ds_urls.return_value = ('url', 'url') oje = FakeOozieJobEngine(u.create_cluster()) client_class = mock.MagicMock() client_class.add_job = mock.MagicMock(return_value=1) client_class.get_job_info = mock.MagicMock( return_value={'status': 'PENDING'}) oje.get_client = mock.MagicMock(return_value=client_class) _, job_exec = u.create_job_exec(edp.JOB_TYPE_PIG) update.return_value = job_exec self.assertEqual((1, 'PENDING', None), oje.run_job(job_exec)) class FakeOozieJobEngine(oe.OozieJobEngine): def get_hdfs_user(self): return 'hadoop' def create_hdfs_dir(self, remote, dir_name): return def get_oozie_server_uri(self, cluster): return 'http://localhost:11000/oozie' def get_oozie_server(self, cluster): return None def get_name_node_uri(self, cluster): return 'hdfs://localhost:8020' def get_resource_manager_uri(self, cluster): return 'http://localhost:50030' sahara-12.0.0/sahara/tests/unit/service/edp/workflow_creator/0000775000175000017500000000000013656752227024253 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/workflow_creator/__init__.py0000664000175000017500000000000013656752032026344 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/workflow_creator/test_create_workflow.py0000664000175000017500000002244113656752032031056 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import testtools import sahara.exceptions as ex from sahara.service.edp.oozie.workflow_creator import hive_workflow as hw from sahara.service.edp.oozie.workflow_creator import java_workflow as jw from sahara.service.edp.oozie.workflow_creator import mapreduce_workflow as mrw from sahara.service.edp.oozie.workflow_creator import pig_workflow as pw from sahara.service.edp.oozie.workflow_creator import shell_workflow as shw class TestWorkflowCreators(testtools.TestCase): def setUp(self): super(TestWorkflowCreators, self).setUp() self.prepare = {'delete': ['delete_dir_1', 'delete_dir_2'], 'mkdir': ['mkdir_1']} self.job_xml = 'job_xml.xml' self.configuration = {'conf_param_1': 'conf_value_1', 'conf_param_2': 'conf_value_3'} self.files = ['file1', 'file2'] self.archives = ['arch1'] self.streaming = {'mapper': '/usr/bin/cat', 'reducer': '/usr/bin/wc'} def test_create_mapreduce_streaming(self): mr_action = """ /usr/bin/cat /usr/bin/wc """ mr_workflow = mrw.MapReduceWorkFlowCreator() mr_workflow.build_workflow_xml(self.prepare, self.job_xml, self.configuration, self.files, self.archives, self.streaming) res = mr_workflow.get_built_workflow_xml() self.assertIn(mr_action, res) mr_workflow = mrw.MapReduceWorkFlowCreator() mr_workflow.build_workflow_xml(self.prepare, self.job_xml, self.configuration, self.files, self.archives) res = mr_workflow.get_built_workflow_xml() self.assertNotIn(mr_action, res) mr_workflow = mrw.MapReduceWorkFlowCreator() with testtools.ExpectedException(ex.NotFoundException): mr_workflow.build_workflow_xml(self.prepare, self.job_xml, self.configuration, self.files, self.archives, {'bogus': 'element'}) def test_create_mapreduce_workflow(self): mr_workflow = mrw.MapReduceWorkFlowCreator() mr_workflow.build_workflow_xml(self.prepare, self.job_xml, self.configuration, self.files, self.archives) res = mr_workflow.get_built_workflow_xml() mr_action = """ ${jobTracker} ${nameNode} job_xml.xml conf_param_1 conf_value_1 conf_param_2 conf_value_3 file1 file2 arch1 """ self.assertIn(mr_action, res) def test_create_pig_workflow(self): pig_workflow = pw.PigWorkflowCreator() pig_script = 'script.pig' param_dict = {'param1': 'param_value1'} args = ['arg_value1', 'arg_value2'] pig_workflow.build_workflow_xml(pig_script, self.prepare, self.job_xml, self.configuration, param_dict, args, self.files, self.archives) res = pig_workflow.get_built_workflow_xml() pig_action = """ ${jobTracker} ${nameNode} job_xml.xml conf_param_1 conf_value_1 conf_param_2 conf_value_3 param1=param_value1 arg_value1 arg_value2 file1 file2 arch1 """ self.assertIn(pig_action, res) def test_create_hive_workflow(self): hive_workflow = hw.HiveWorkflowCreator() hive_script = "script.q" params = {"key": "value", "key2": "value2"} hive_workflow.build_workflow_xml(hive_script, self.job_xml, self.prepare, self.configuration, params, self.files, self.archives) res = hive_workflow.get_built_workflow_xml() hive_action = """ ${jobTracker} ${nameNode} job_xml.xml conf_param_1 conf_value_1 conf_param_2 conf_value_3 key=value key2=value2 file1 file2 arch1 """ self.assertIn(hive_action, res) def test_create_java_workflow(self): 
java_workflow = jw.JavaWorkflowCreator() main_class = 'org.apache.hadoop.examples.SomeClass' args = ['/user/hadoop/input', '/user/hadoop/output'] java_opts = '-Dparam1=val1 -Dparam2=val2' java_workflow.build_workflow_xml(main_class, self.prepare, self.job_xml, self.configuration, java_opts, args, self.files, self.archives) res = java_workflow.get_built_workflow_xml() java_action = """ ${jobTracker} ${nameNode} job_xml.xml conf_param_1 conf_value_1 conf_param_2 conf_value_3 org.apache.hadoop.examples.SomeClass -Dparam1=val1 -Dparam2=val2 /user/hadoop/input /user/hadoop/output file1 file2 arch1 """ self.assertIn(java_action, res) def test_create_shell_workflow(self): shell_workflow = shw.ShellWorkflowCreator() main_class = 'doit.sh' args = ['now'] env_vars = {"VERSION": 3} shell_workflow.build_workflow_xml(main_class, self.prepare, self.job_xml, self.configuration, env_vars, args, self.files) res = shell_workflow.get_built_workflow_xml() shell_action = """ ${jobTracker} ${nameNode} conf_param_1 conf_value_1 conf_param_2 conf_value_3 doit.sh now VERSION=3 file1 file2 doit.sh """ self.assertIn(shell_action, res) sahara-12.0.0/sahara/tests/unit/service/edp/data_sources/0000775000175000017500000000000013656752227023336 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/data_sources/__init__.py0000664000175000017500000000000013656752032025427 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/data_sources/maprfs/0000775000175000017500000000000013656752227024626 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/data_sources/maprfs/__init__.py0000664000175000017500000000000013656752032026717 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/data_sources/maprfs/test_maprfs_type_validation.py0000664000175000017500000000434413656752032033001 0ustar zuulzuul00000000000000# Copyright (c) 2017 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
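# ---------------------------------------------------------------------------
# Editor's note: an illustrative sketch added during editing, not part of the
# original release.  It shows the public validation call covered by the tests
# below (MapRFSType().validate(data)); the helper name and sample values are
# the editor's assumptions, and the function is never invoked by the suite.
# validate() raises sahara.exceptions.InvalidDataException for a malformed
# scheme such as "maprf://..." and simply returns for valid URLs.
# ---------------------------------------------------------------------------
def _example_validate_maprfs_data_source():
    """Illustrative only: validate a maprfs:// data source description."""
    from sahara.service.edp.data_sources.maprfs.implementation import \
        MapRFSType

    data = {
        "name": "example_data_source",
        "url": "maprfs:///my_cluster/input",
        "type": "maprfs",
        "description": "well-formed maprfs URL",
    }
    MapRFSType().validate(data)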
import testtools import sahara.exceptions as ex from sahara.service.edp.data_sources.maprfs.implementation import MapRFSType from sahara.tests.unit import base class TestMapRFSTypeValidation(base.SaharaTestCase): def setUp(self): super(TestMapRFSTypeValidation, self).setUp() self.maprfs_type = MapRFSType() def test_maprfs_type_validation_wrong_schema(self): data = { "name": "test_data_data_source", "url": "maprf://test_cluster/", "type": "maprfs", "description": "incorrect url schema" } with testtools.ExpectedException(ex.InvalidDataException): self.maprfs_type.validate(data) def test_maprfs_type_validation_correct_url(self): data = { "name": "test_data_data_source", "url": "maprfs:///test_cluster/", "type": "maprfs", "description": "correct url schema" } self.maprfs_type.validate(data) def test_maprfs_type_validation_local_rel_url(self): data = { "name": "test_data_data_source", "url": "mydata/input", "type": "maprfs", "description": ("correct url schema for" " relative path on local maprfs") } self.maprfs_type.validate(data) def test_maprfs_type_validation_local_abs_url(self): data = { "name": "test_data_data_source", "url": "/tmp/output", "type": "maprfs", "description": ("correct url schema for" " absolute path on local maprfs") } self.maprfs_type.validate(data) sahara-12.0.0/sahara/tests/unit/service/edp/data_sources/base_test.py0000664000175000017500000000455713656752032025666 0ustar zuulzuul00000000000000# Copyright (c) 2017 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from unittest import mock from oslo_utils import uuidutils from sahara.service.edp.data_sources.base import DataSourceType import testtools class DataSourceBaseTestCase(testtools.TestCase): def setUp(self): super(DataSourceBaseTestCase, self).setUp() self.ds_base = DataSourceType() def test_construct_url_no_placeholders(self): base_url = "swift://container/input" job_exec_id = uuidutils.generate_uuid() url = self.ds_base.construct_url(base_url, job_exec_id) self.assertEqual(base_url, url) def test_construct_url_job_exec_id_placeholder(self): base_url = "swift://container/input.%JOB_EXEC_ID%.out" job_exec_id = uuidutils.generate_uuid() url = self.ds_base.construct_url(base_url, job_exec_id) self.assertEqual( "swift://container/input." + job_exec_id + ".out", url) def test_construct_url_randstr_placeholder(self): base_url = "swift://container/input.%RANDSTR(4)%.%RANDSTR(7)%.out" job_exec_id = uuidutils.generate_uuid() url = self.ds_base.construct_url(base_url, job_exec_id) self.assertRegex( url, "swift://container/input\.[a-z]{4}\.[a-z]{7}\.out") def test_construct_url_randstr_and_job_exec_id_placeholder(self): base_url = "swift://container/input.%JOB_EXEC_ID%.%RANDSTR(7)%.out" job_exec_id = uuidutils.generate_uuid() url = self.ds_base.construct_url(base_url, job_exec_id) self.assertRegex( url, "swift://container/input." 
+ job_exec_id + "\.[a-z]{7}\.out") def test_get_urls(self): url = 'test://url' cluster = mock.Mock() job_exec_id = 'test_id' self.assertEqual((url, url), self.ds_base.get_urls(url, cluster, job_exec_id)) sahara-12.0.0/sahara/tests/unit/service/edp/data_sources/s3/0000775000175000017500000000000013656752227023663 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/data_sources/s3/__init__.py0000664000175000017500000000000013656752032025754 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/data_sources/s3/test_s3_type.py0000664000175000017500000000771413656752032026665 0ustar zuulzuul00000000000000# Copyright (c) 2018 OpenStack Contributors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import testtools from unittest import mock import sahara.exceptions as ex from sahara.service.edp.data_sources.s3.implementation import S3Type from sahara.tests.unit import base from sahara.utils.types import FrozenDict class TestSwiftType(base.SaharaTestCase): def setUp(self): super(TestSwiftType, self).setUp() self.s_type = S3Type() def test_validate(self): data = { "name": "test_data_data_source", "type": "s3", "url": "s3a://mybucket/myobject", } self.s_type.validate(data) data["url"] = "s3://mybucket/myobject" self.s_type.validate(data) creds = {} data["credentials"] = creds self.s_type.validate(data) creds["accesskey"] = "key" creds["secretkey"] = "key2" self.s_type.validate(data) creds["bucket_in_path"] = True creds["ssl"] = True creds["endpoint"] = "blah.org" self.s_type.validate(data) creds["cool_key"] = "wow" with testtools.ExpectedException(ex.InvalidDataException): self.s_type.validate(data) creds.pop("cool_key") creds["ssl"] = "yeah" with testtools.ExpectedException(ex.InvalidDataException): self.s_type.validate(data) creds["ssl"] = True creds["bucket_in_path"] = "yeah" with testtools.ExpectedException(ex.InvalidDataException): self.s_type.validate(data) def test_validate_url(self): url = "" with testtools.ExpectedException(ex.InvalidDataException): self.s_type._validate_url(url) url = "s3a://" with testtools.ExpectedException(ex.InvalidDataException): self.s_type._validate_url(url) url = "s3a:///" with testtools.ExpectedException(ex.InvalidDataException): self.s_type._validate_url(url) url = "s3a://bucket" with testtools.ExpectedException(ex.InvalidDataException): self.s_type._validate_url(url) url = "s3b://bucket/obj" with testtools.ExpectedException(ex.InvalidDataException): self.s_type._validate_url(url) url = "s3a://bucket/obj" self.s_type._validate_url(url) url = "s3a://bucket/fold/obj" self.s_type._validate_url(url) url = "s3a://bucket/obj/" self.s_type._validate_url(url) def test_prepare_cluster(self): ds = mock.Mock() cluster = mock.Mock() ds.credentials = {} job_configs = {} self.s_type.prepare_cluster(ds, cluster, job_configs=job_configs) self.assertEqual(job_configs, {}) job_configs['configs'] = {} ds.credentials['accesskey'] = 'key' self.s_type.prepare_cluster(ds, cluster, job_configs=job_configs) self.assertEqual(job_configs['configs'], 
{'fs.s3a.access.key': 'key'}) job_configs['configs'] = {'fs.s3a.access.key': 'key2'} self.s_type.prepare_cluster(ds, cluster, job_configs=job_configs) self.assertEqual(job_configs['configs'], {'fs.s3a.access.key': 'key2'}) job_configs = FrozenDict({'configs': {}}) self.s_type.prepare_cluster(ds, cluster, job_configs=job_configs) self.assertNotIn(job_configs['configs'], 'accesskey') job_configs = {} self.s_type.prepare_cluster(ds, cluster, job_configs=job_configs) self.assertEqual(job_configs, {}) sahara-12.0.0/sahara/tests/unit/service/edp/data_sources/swift/0000775000175000017500000000000013656752227024472 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/data_sources/swift/__init__.py0000664000175000017500000000000013656752032026563 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/data_sources/swift/test_swift_type.py0000664000175000017500000001777013656752032030306 0ustar zuulzuul00000000000000# Copyright (c) 2017 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import copy from unittest import mock from oslo_utils import uuidutils import testtools import sahara.exceptions as ex from sahara.service.edp.data_sources.swift.implementation import SwiftType from sahara.service.edp import job_utils from sahara.swift import utils as su from sahara.tests.unit import base from sahara.tests.unit.service.edp import edp_test_utils as u from sahara.utils.types import FrozenDict SAMPLE_SWIFT_URL = "swift://1234/object" SAMPLE_SWIFT_URL_WITH_SUFFIX = "swift://1234%s/object" % su.SWIFT_URL_SUFFIX class TestSwiftTypeValidation(base.SaharaTestCase): def setUp(self): super(TestSwiftTypeValidation, self).setUp() self.s_type = SwiftType() @mock.patch('sahara.context.ctx') def test_prepare_cluster(self, ctx): ctx.return_value = 'dummy' ds_url = "swift://container/input" ds = u.create_data_source(ds_url, name="data_source", id=uuidutils.generate_uuid()) job_configs = { 'configs': { job_utils.DATA_SOURCE_SUBST_NAME: True, job_utils.DATA_SOURCE_SUBST_UUID: True } } old_configs = copy.deepcopy(job_configs) self.s_type.prepare_cluster(ds, u.create_cluster(), job_configs=job_configs) # Swift configs should be filled in since they were blank self.assertEqual(ds.credentials['user'], job_configs['configs'] ['fs.swift.service.sahara.username']) self.assertEqual(ds.credentials['password'], job_configs['configs'] ['fs.swift.service.sahara.password']) self.assertNotEqual(old_configs, job_configs) job_configs['configs'] = {'fs.swift.service.sahara.username': 'sam', 'fs.swift.service.sahara.password': 'gamgee', job_utils.DATA_SOURCE_SUBST_NAME: False, job_utils.DATA_SOURCE_SUBST_UUID: True} old_configs = copy.deepcopy(job_configs) self.s_type.prepare_cluster(ds, u.create_cluster(), job_configs=job_configs) # Swift configs should not be overwritten self.assertEqual(old_configs['configs'], job_configs['configs']) job_configs['configs'] = {job_utils.DATA_SOURCE_SUBST_NAME: True, job_utils.DATA_SOURCE_SUBST_UUID: False} job_configs['proxy_configs'] = {'proxy_username': 
'john', 'proxy_password': 'smith', 'proxy_trust_id': 'trustme'} old_configs = copy.deepcopy(job_configs) self.s_type.prepare_cluster(ds, u.create_cluster(), job_configs=job_configs) # Swift configs should be empty and proxy configs should be preserved self.assertEqual(old_configs['configs'], job_configs['configs']) self.assertEqual(old_configs['proxy_configs'], job_configs['proxy_configs']) # If there's no configs do nothing job_configs['configs'] = None old_configs = copy.deepcopy(job_configs) self.s_type.prepare_cluster(ds, u.create_cluster(), job_configs=job_configs) self.assertEqual(old_configs, job_configs) # If it's a FrozenDict do nothing job_configs = { 'configs': { job_utils.DATA_SOURCE_SUBST_NAME: True, job_utils.DATA_SOURCE_SUBST_UUID: True } } old_configs = copy.deepcopy(job_configs) job_configs = FrozenDict(job_configs) self.s_type.prepare_cluster(ds, u.create_cluster(), job_configs=job_configs) self.assertEqual(old_configs, job_configs) def test_swift_type_validation(self): data = { "name": "test_data_data_source", "url": SAMPLE_SWIFT_URL, "type": "swift", "credentials": { "user": "user", "password": "password" }, "description": "long description" } self.s_type.validate(data) def test_swift_type_validation_missing_credentials(self): data = { "name": "test_data_data_source", "url": SAMPLE_SWIFT_URL, "type": "swift", "description": "long description" } with testtools.ExpectedException(ex.InvalidCredentials): self.s_type.validate(data) # proxy enabled should allow creation without credentials self.override_config('use_domain_for_proxy_users', True) self.s_type.validate(data) def test_swift_type_validation_credentials_missing_user(self): data = { "name": "test_data_data_source", "url": SAMPLE_SWIFT_URL, "type": "swift", "credentials": { "password": "password" }, "description": "long description" } with testtools.ExpectedException(ex.InvalidCredentials): self.s_type.validate(data) # proxy enabled should allow creation without credentials self.override_config('use_domain_for_proxy_users', True) self.s_type.validate(data) def test_swift_type_validation_credentials_missing_password(self): data = { "name": "test_data_data_source", "url": SAMPLE_SWIFT_URL, "type": "swift", "credentials": { "user": "user", }, "description": "long description" } with testtools.ExpectedException(ex.InvalidCredentials): self.s_type.validate(data) # proxy enabled should allow creation without credentials self.override_config('use_domain_for_proxy_users', True) self.s_type.validate(data) def test_swift_type_validation_wrong_schema(self): data = { "name": "test_data_data_source", "url": "swif://1234/object", "type": "swift", "description": "incorrect url schema" } with testtools.ExpectedException(ex.InvalidDataException): self.s_type.validate(data) def test_swift_type_validation_explicit_suffix(self): data = { "name": "test_data_data_source", "url": SAMPLE_SWIFT_URL_WITH_SUFFIX, "type": "swift", "description": "incorrect url schema", "credentials": { "user": "user", "password": "password" } } self.s_type.validate(data) def test_swift_type_validation_wrong_suffix(self): data = { "name": "test_data_data_source", "url": "swift://1234.suffix/object", "type": "swift", "description": "incorrect url schema" } with testtools.ExpectedException(ex.InvalidDataException): self.s_type.validate(data) def test_swift_type_validation_missing_object(self): data = { "name": "test_data_data_source", "url": "swift://1234/", "type": "swift", "description": "incorrect url schema" } with 
testtools.ExpectedException(ex.InvalidDataException): self.s_type.validate(data) sahara-12.0.0/sahara/tests/unit/service/edp/data_sources/data_source_manager_support_test.py0000664000175000017500000000535613656752032032531 0ustar zuulzuul00000000000000# Copyright (c) 2017 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import testtools import sahara.exceptions as ex from sahara.service.edp.data_sources import manager as ds_manager from sahara.tests.unit import base class DataSourceManagerSupportTest(base.SaharaTestCase): def setUp(self): super(DataSourceManagerSupportTest, self).setUp() ds_manager.setup_data_sources() def test_data_sources_loaded(self): ds_types = [ds.name for ds in ds_manager.DATA_SOURCES.get_data_sources()] self.assertIn('hdfs', ds_types) self.assertIn('manila', ds_types) self.assertIn('maprfs', ds_types) self.assertIn('swift', ds_types) def test_get_data_source_by_url(self): with testtools.ExpectedException(ex.InvalidDataException): ds_manager.DATA_SOURCES.get_data_source_by_url('') with testtools.ExpectedException(ex.InvalidDataException): ds_manager.DATA_SOURCES.get_data_source_by_url('hdfs') self.assertEqual('hdfs', ds_manager.DATA_SOURCES .get_data_source_by_url('hdfs://').name) self.assertEqual('manila', ds_manager.DATA_SOURCES .get_data_source_by_url('manila://').name) self.assertEqual('maprfs', ds_manager.DATA_SOURCES .get_data_source_by_url('maprfs://').name) self.assertEqual('swift', ds_manager.DATA_SOURCES .get_data_source_by_url('swift://').name) def test_get_data_source(self): with testtools.ExpectedException(ex.InvalidDataException): ds_manager.DATA_SOURCES.get_data_source('') with testtools.ExpectedException(ex.InvalidDataException): ds_manager.DATA_SOURCES.get_data_source('hdf') self.assertEqual('hdfs', ds_manager.DATA_SOURCES .get_data_source('hdfs').name) self.assertEqual('manila', ds_manager.DATA_SOURCES .get_data_source('manila').name) self.assertEqual('maprfs', ds_manager.DATA_SOURCES .get_data_source('maprfs').name) self.assertEqual('swift', ds_manager.DATA_SOURCES .get_data_source('swift').name) sahara-12.0.0/sahara/tests/unit/service/edp/data_sources/hdfs/0000775000175000017500000000000013656752227024262 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/data_sources/hdfs/__init__.py0000664000175000017500000000000013656752032026353 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/data_sources/hdfs/test_hdfs_type.py0000664000175000017500000000524613656752032027661 0ustar zuulzuul00000000000000# Copyright (c) 2017 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. 
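# ---------------------------------------------------------------------------
# Editor's note: an illustrative sketch added during editing, not part of the
# original release.  It mirrors the lookup pattern exercised by
# DataSourceManagerSupportTest above: setup_data_sources() fills the
# DATA_SOURCES registry and get_data_source_by_url() resolves a handler from
# a URL scheme.  The helper name is the editor's own and is never called.
# ---------------------------------------------------------------------------
def _example_lookup_data_source_by_scheme():
    """Illustrative only: resolve a data source handler by URL scheme."""
    from sahara.service.edp.data_sources import manager as ds_manager

    ds_manager.setup_data_sources()
    # The manager test above passes bare scheme prefixes such as 'hdfs://';
    # the returned handler exposes its type through the 'name' attribute.
    handler = ds_manager.DATA_SOURCES.get_data_source_by_url('hdfs://')
    return handler.name  # expected to be 'hdfs'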
# See the License for the specific language governing permissions and # limitations under the License. import testtools from unittest import mock import sahara.exceptions as ex from sahara.service.edp.data_sources.hdfs.implementation import HDFSType from sahara.tests.unit import base class TestHDFSType(base.SaharaTestCase): def setUp(self): super(TestHDFSType, self).setUp() self.hdfs_type = HDFSType() def test_hdfs_type_validation_wrong_schema(self): data = { "name": "test_data_data_source", "url": "hdf://test_cluster/", "type": "hdfs", "description": "incorrect url schema" } with testtools.ExpectedException(ex.InvalidDataException): self.hdfs_type.validate(data) def test_hdfs_type_validation_correct_url(self): data = { "name": "test_data_data_source", "url": "hdfs://test_cluster/", "type": "hdfs", "description": "correct url schema" } self.hdfs_type.validate(data) def test_hdfs_type_validation_local_rel_url(self): data = { "name": "test_data_data_source", "url": "mydata/input", "type": "hdfs", "description": "correct url schema for relative path on local hdfs" } self.hdfs_type.validate(data) def test_hdfs_type_validation_local_abs_url(self): data = { "name": "test_data_data_source", "url": "/tmp/output", "type": "hdfs", "description": "correct url schema for absolute path on local hdfs" } self.hdfs_type.validate(data) @mock.patch('sahara.service.edp.data_sources.hdfs.implementation.h') def test_prepare_cluster(self, mock_h): cluster = mock.Mock() data_source = mock.Mock() runtime_url = "runtime_url" mock_h.configure_cluster_for_hdfs = mock.Mock() self.hdfs_type.prepare_cluster(data_source, cluster, runtime_url=runtime_url) mock_h.configure_cluster_for_hdfs.assert_called_once_with(cluster, runtime_url) sahara-12.0.0/sahara/tests/unit/service/edp/data_sources/manila/0000775000175000017500000000000013656752227024577 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/data_sources/manila/__init__.py0000664000175000017500000000000013656752032026670 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/edp/data_sources/manila/test_manila_type.py0000664000175000017500000001225413656752032030510 0ustar zuulzuul00000000000000# Copyright (c) 2017 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
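# ---------------------------------------------------------------------------
# Editor's note: an illustrative sketch added during editing, not part of the
# original release.  It re-implements, with the standard library only, the
# placeholder expansion asserted by DataSourceBaseTestCase in base_test.py
# earlier in this test package; it is not the construct_url() code itself.
# %JOB_EXEC_ID% is replaced with the job execution id and each %RANDSTR(n)%
# with n random lowercase letters.
# ---------------------------------------------------------------------------
def _example_expand_data_source_placeholders(base_url, job_exec_id):
    """Illustrative only: mimic the URL placeholder expansion behaviour."""
    import random
    import re
    import string

    url = base_url.replace('%JOB_EXEC_ID%', job_exec_id)

    def _rand(match):
        length = int(match.group(1))
        return ''.join(random.choice(string.ascii_lowercase)
                       for _ in range(length))

    return re.sub(r'%RANDSTR\((\d+)\)%', _rand, url)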
from unittest import mock from oslo_utils import uuidutils import testtools import sahara.exceptions as ex from sahara.service.edp.data_sources.manila.implementation import ManilaType from sahara.tests.unit import base class _FakeShare(object): def __init__(self, id, share_proto='NFS'): self.id = id self.share_proto = share_proto class TestManilaType(base.SaharaTestCase): def setUp(self): super(TestManilaType, self).setUp() self.manila_type = ManilaType() @mock.patch('sahara.utils.openstack.manila.client') @mock.patch('sahara.conductor.API.cluster_update') @mock.patch('sahara.service.edp.utils.shares.mount_shares') def test_prepare_cluster(self, mount_shares, cluster_update, f_manilaclient): cluster_shares = [ {'id': 'the_share_id', 'path': '/mnt/mymountpoint'} ] cluster = mock.Mock() cluster.shares = cluster_shares # This should return a default path, and should cause # a mount at the default location share = _FakeShare("missing_id") f_manilaclient.return_value = mock.Mock(shares=mock.Mock( get=mock.Mock(return_value=share))) url = 'manila://missing_id/the_path' self.manila_type._prepare_cluster(url, cluster) self.assertEqual(1, mount_shares.call_count) self.assertEqual(1, cluster_update.call_count) @mock.patch('sahara.service.edp.utils.shares.get_share_path') @mock.patch('sahara.utils.openstack.manila.client') @mock.patch('sahara.conductor.API.cluster_update') @mock.patch('sahara.service.edp.utils.shares.mount_shares') def test_get_runtime_url(self, mount_shares, cluster_update, f_manilaclient, get_share_path): # first it finds the path, then it doesn't so it has to mount it # and only then it finds it get_share_path.side_effect = ['/mnt/mymountpoint/the_path', None, '/mnt/missing_id/the_path'] cluster = mock.Mock() cluster.shares = [] url = 'manila://the_share_id/the_path' res = self.manila_type.get_runtime_url(url, cluster) self.assertEqual('file:///mnt/mymountpoint/the_path', res) self.assertEqual(0, mount_shares.call_count) self.assertEqual(0, cluster_update.call_count) # This should return a default path, and should cause # a mount at the default location share = _FakeShare("missing_id") f_manilaclient.return_value = mock.Mock(shares=mock.Mock( get=mock.Mock(return_value=share))) url = 'manila://missing_id/the_path' res = self.manila_type.get_runtime_url(url, cluster) self.assertEqual('file:///mnt/missing_id/the_path', res) self.assertEqual(1, mount_shares.call_count) self.assertEqual(1, cluster_update.call_count) def test_manila_type_validation_wrong_schema(self): data = { "name": "test_data_data_source", "url": "man://%s" % uuidutils.generate_uuid(), "type": "manila", "description": ("incorrect url schema for") } with testtools.ExpectedException(ex.InvalidDataException): self.manila_type.validate(data) def test_manila_type_validation_empty_url(self): data = { "name": "test_data_data_source", "url": "", "type": "manila", "description": ("empty url") } with testtools.ExpectedException(ex.InvalidDataException): self.manila_type.validate(data) def test_manila_type_validation_no_uuid(self): data = { "name": "test_data_data_source", "url": "manila://bob", "type": "manila", "description": ("netloc is not a uuid") } with testtools.ExpectedException(ex.InvalidDataException): self.manila_type.validate(data) def test_manila_type_validation_no_path(self): data = { "name": "test_data_data_source", "url": "manila://%s" % uuidutils.generate_uuid(), "type": "manila", "description": ("netloc is not a uuid") } with testtools.ExpectedException(ex.InvalidDataException): 
self.manila_type.validate(data) def test_manila_type_validation_correct(self): data = { "name": "test_data_data_source", "url": "manila://%s/foo" % uuidutils.generate_uuid(), "type": "manila", "description": ("correct url") } self.manila_type.validate(data) sahara-12.0.0/sahara/tests/unit/service/edp/test_hdfs_helper.py0000664000175000017500000001704013656752032024552 0ustar zuulzuul00000000000000# Copyright (c) 2015 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from unittest import mock from sahara.plugins import exceptions as ex from sahara.service.edp import hdfs_helper as helper from sahara.tests.unit import base class HDFSHelperTestCase(base.SaharaTestCase): def setUp(self): super(HDFSHelperTestCase, self).setUp() self.cluster = mock.MagicMock() self.cluster.id = '1axx' def test_create_hbase_common_lib_no_ex(self): def _command(a): if a == 'hbase classpath': return [0, 'april:may.jar:june'] self.cluster.execute_command.side_effect = _command helper.create_hbase_common_lib(self.cluster) calls = [ mock.call(('sudo su - -c "hdfs dfs -mkdir -p ' '/user/sahara-hbase-lib" hdfs')), mock.call('hbase classpath'), mock.call(('sudo su - -c "hdfs dfs -put -p may.jar ' '/user/sahara-hbase-lib" hdfs'))] self.cluster.execute_command.assert_has_calls(calls) def test_create_hbase_common_lib_ex(self): def _command(a): if a == 'hbase classpath': return [1, 'april:may.jar:june'] self.cluster.execute_command.side_effect = _command self.assertRaises(ex.RequiredServiceMissingException, helper.create_hbase_common_lib, self.cluster) def test_copy_from_local(self): helper.copy_from_local(self.cluster, 'Galaxy', 'Earth', 'BigBang') self.cluster.execute_command.assert_called_once_with( 'sudo su - -c "hdfs dfs -copyFromLocal Galaxy Earth" BigBang') def test_move_from_local(self): helper.move_from_local(self.cluster, 'Galaxy', 'Earth', 'BigBang') self.cluster.execute_command.assert_called_once_with( 'sudo su - -c "hdfs dfs -copyFromLocal Galaxy Earth" BigBang ' '&& sudo rm -f Galaxy') def test_create_dir_hadoop1(self): helper.create_dir_hadoop1(self.cluster, 'Earth', 'BigBang') self.cluster.execute_command.assert_called_once_with( 'sudo su - -c "hdfs dfs -mkdir Earth" BigBang') def test_create_dir_hadoop2(self): helper.create_dir_hadoop2(self.cluster, 'Earth', 'BigBang') self.cluster.execute_command.assert_called_once_with( 'sudo su - -c "hdfs dfs -mkdir -p Earth" BigBang') @mock.patch('sahara.utils.cluster.generate_etc_hosts') @mock.patch('sahara.plugins.utils.get_instances') @mock.patch('sahara.conductor.api.LocalApi.cluster_get_all') def test_get_cluster_hosts_information_smthg_wrong(self, mock_get_all, mock_get_inst, mock_generate): res = helper._get_cluster_hosts_information('host', self.cluster) self.assertIsNone(res) @mock.patch('sahara.context.ctx') @mock.patch('sahara.utils.cluster.generate_etc_hosts') @mock.patch('sahara.plugins.utils.get_instances') @mock.patch('sahara.conductor.api.LocalApi.cluster_get_all') def test_get_cluster_hosts_information_c_id(self, mock_get_all, mock_get_inst, mock_generate, 
mock_ctx): cluster = mock.MagicMock() cluster.id = '1axx' instance = mock.MagicMock() instance.instance_name = 'host' mock_get_all.return_value = [cluster] mock_get_inst.return_value = [instance] res = helper._get_cluster_hosts_information('host', self.cluster) self.assertIsNone(res) @mock.patch('sahara.context.ctx') @mock.patch('sahara.utils.cluster.generate_etc_hosts') @mock.patch('sahara.plugins.utils.get_instances') @mock.patch('sahara.conductor.api.LocalApi.cluster_get_all') def test_get_cluster_hosts_information_i_name(self, mock_get_all, mock_get_inst, mock_generate, mock_ctx): cluster = mock.MagicMock() cluster.id = '1axz' instance = mock.MagicMock() instance.instance_name = 'host' mock_get_all.return_value = [cluster] mock_get_inst.return_value = [instance] res = helper._get_cluster_hosts_information('host', self.cluster) self.assertEqual(res, mock_generate()) @mock.patch('sahara.service.edp.hdfs_helper._is_cluster_configured') @mock.patch('six.text_type') @mock.patch('sahara.plugins.utils.get_instances') @mock.patch(('sahara.service.edp.hdfs_helper._get_cluster_hosts_' 'information')) def test_configure_cluster_for_hdfs(self, mock_helper, mock_get, mock_six, cluster_conf): cluster_conf.return_value = False inst = mock.MagicMock() inst.remote = mock.MagicMock() mock_six.return_value = 111 str1 = '/tmp/etc-hosts-update.111' str2 = ('cat /tmp/etc-hosts-update.111 /etc/hosts | sort | uniq > ' '/tmp/etc-hosts.111 && cat /tmp/etc-hosts.111 > ' '/etc/hosts && rm -f /tmp/etc-hosts.111 ' '/tmp/etc-hosts-update.111') mock_get.return_value = [inst] helper.configure_cluster_for_hdfs(self.cluster, "www.host.ru") inst.remote.assert_has_calls( [mock.call(), mock.call().__enter__(), mock.call().__enter__().write_file_to(str1, mock_helper()), mock.call().__enter__().execute_command(str2, run_as_root=True), mock.call().__exit__(None, None, None)]) @mock.patch('sahara.plugins.utils.get_instances') def test_is_cluster_configured(self, mock_get): inst = mock.Mock() r = mock.MagicMock() inst.remote = mock.Mock(return_value=r) enter_r = mock.Mock() enter_r.execute_command = mock.Mock() enter_r.execute_command.return_value = 0, "127.0.0.1 localhost\n" + \ "127.0.0.2 t1 t1" r.__enter__.return_value = enter_r cmd = 'cat /etc/hosts' host_info = ['127.0.0.1 localhost', '127.0.0.2 t1 t1'] mock_get.return_value = [inst] res = helper._is_cluster_configured(self.cluster, host_info) self.assertTrue(res) enter_r.execute_command.assert_called_with(cmd) enter_r.execute_command.return_value = 0, "127.0.0.1 localhost\n" res = helper._is_cluster_configured(self.cluster, host_info) self.assertFalse(res) enter_r.execute_command.assert_called_with(cmd) @mock.patch('six.text_type') @mock.patch('os.open') def test_put_file_to_hdfs(self, open_get, mock_six): open_get.return_value = '/tmp/workflow.xml' mock_six.return_value = 111 helper.put_file_to_hdfs(self.cluster, open_get, 'workflow', '/tmp', 'hdfs') self.cluster.execute_command.assert_called_once_with( 'sudo su - -c "hdfs dfs -copyFromLocal /tmp/workflow.111' ' /tmp/workflow" hdfs && sudo rm -f /tmp/workflow.111') sahara-12.0.0/sahara/tests/unit/service/edp/test_job_utils.py0000664000175000017500000002400613656752032024261 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
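# ---------------------------------------------------------------------------
# Editor's note: an illustrative sketch added during editing, not part of the
# original release.  It reproduces the shell command that
# hdfs_helper.copy_from_local() is expected to issue, as pinned by the
# assertion in HDFSHelperTestCase.test_copy_from_local above.  The helper
# name and default values are the editor's own and the function is unused.
# ---------------------------------------------------------------------------
def _example_copy_from_local_command(source='Galaxy', target='Earth',
                                     hdfs_user='BigBang'):
    """Illustrative only: the command string asserted by the helper tests."""
    return ('sudo su - -c "hdfs dfs -copyFromLocal %s %s" %s'
            % (source, target, hdfs_user))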
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from unittest import mock from oslo_utils import uuidutils import testtools from sahara import conductor as cond from sahara.service.edp.data_sources import manager as ds_manager from sahara.service.edp import job_utils from sahara.tests.unit.service.edp import edp_test_utils as u conductor = cond.API class JobUtilsTestCase(testtools.TestCase): def setUp(self): super(JobUtilsTestCase, self).setUp() ds_manager.setup_data_sources() def test_args_may_contain_data_sources(self): job_configs = None # No configs, default false by_name, by_uuid = job_utils.may_contain_data_source_refs(job_configs) self.assertFalse(by_name | by_uuid) # Empty configs, default false job_configs = {'configs': {}} by_name, by_uuid = job_utils.may_contain_data_source_refs(job_configs) self.assertFalse(by_name | by_uuid) job_configs['configs'] = {job_utils.DATA_SOURCE_SUBST_NAME: True, job_utils.DATA_SOURCE_SUBST_UUID: True} by_name, by_uuid = job_utils.may_contain_data_source_refs(job_configs) self.assertTrue(by_name & by_uuid) job_configs['configs'][job_utils.DATA_SOURCE_SUBST_NAME] = False by_name, by_uuid = job_utils.may_contain_data_source_refs(job_configs) self.assertFalse(by_name) self.assertTrue(by_uuid) job_configs['configs'][job_utils.DATA_SOURCE_SUBST_UUID] = False by_name, by_uuid = job_utils.may_contain_data_source_refs(job_configs) self.assertFalse(by_name | by_uuid) job_configs['configs'] = {job_utils.DATA_SOURCE_SUBST_NAME: 'True', job_utils.DATA_SOURCE_SUBST_UUID: 'Fish'} by_name, by_uuid = job_utils.may_contain_data_source_refs(job_configs) self.assertTrue(by_name) self.assertFalse(by_uuid) def test_find_possible_data_source_refs_by_name(self): id = uuidutils.generate_uuid() job_configs = {} self.assertEqual([], job_utils.find_possible_data_source_refs_by_name( job_configs)) name_ref = job_utils.DATA_SOURCE_PREFIX+'name' name_ref2 = name_ref+'2' job_configs = {'args': ['first', id], 'configs': {'config': 'value'}, 'params': {'param': 'value'}} self.assertEqual([], job_utils.find_possible_data_source_refs_by_name( job_configs)) job_configs = {'args': [name_ref, id], 'configs': {'config': 'value'}, 'params': {'param': 'value'}} self.assertEqual( ['name'], job_utils.find_possible_data_source_refs_by_name(job_configs)) job_configs = {'args': ['first', id], 'configs': {'config': name_ref}, 'params': {'param': 'value'}} self.assertEqual( ['name'], job_utils.find_possible_data_source_refs_by_name(job_configs)) job_configs = {'args': ['first', id], 'configs': {'config': 'value'}, 'params': {'param': name_ref}} self.assertEqual( ['name'], job_utils.find_possible_data_source_refs_by_name(job_configs)) job_configs = {'args': [name_ref, name_ref2, id], 'configs': {'config': name_ref}, 'params': {'param': name_ref}} self.assertItemsEqual( ['name', 'name2'], job_utils.find_possible_data_source_refs_by_name(job_configs)) def test_find_possible_data_source_refs_by_uuid(self): job_configs = {} name_ref = job_utils.DATA_SOURCE_PREFIX+'name' self.assertEqual([], job_utils.find_possible_data_source_refs_by_uuid( job_configs)) id = uuidutils.generate_uuid() job_configs = {'args': ['first', name_ref], 'configs': {'config': 
'value'}, 'params': {'param': 'value'}} self.assertEqual([], job_utils.find_possible_data_source_refs_by_uuid( job_configs)) job_configs = {'args': [id, name_ref], 'configs': {'config': 'value'}, 'params': {'param': 'value'}} self.assertEqual( [id], job_utils.find_possible_data_source_refs_by_uuid(job_configs)) job_configs = {'args': ['first', name_ref], 'configs': {'config': id}, 'params': {'param': 'value'}} self.assertEqual( [id], job_utils.find_possible_data_source_refs_by_uuid(job_configs)) job_configs = {'args': ['first', name_ref], 'configs': {'config': 'value'}, 'params': {'param': id}} self.assertEqual( [id], job_utils.find_possible_data_source_refs_by_uuid(job_configs)) id2 = uuidutils.generate_uuid() job_configs = {'args': [id, id2, name_ref], 'configs': {'config': id}, 'params': {'param': id}} self.assertItemsEqual([id, id2], job_utils.find_possible_data_source_refs_by_uuid( job_configs)) @mock.patch('sahara.context.ctx') @mock.patch('sahara.conductor.API.data_source_get_all') def test_resolve_data_source_refs(self, data_source_get_all, ctx): ctx.return_value = 'dummy' name_ref = job_utils.DATA_SOURCE_PREFIX+'input' job_exec_id = uuidutils.generate_uuid() input_url = "swift://container/input" input = u.create_data_source(input_url, name="input", id=uuidutils.generate_uuid()) output = u.create_data_source("swift://container/output.%JOB_EXEC_ID%", name="output", id=uuidutils.generate_uuid()) output_url = "swift://container/output." + job_exec_id by_name = {'input': input, 'output': output} by_id = {input.id: input, output.id: output} # Pretend to be the database def _get_all(ctx, **kwargs): name = kwargs.get('name') if name in by_name: name_list = [by_name[name]] else: name_list = [] id = kwargs.get('id') if id in by_id: id_list = [by_id[id]] else: id_list = [] return list(set(name_list + id_list)) data_source_get_all.side_effect = _get_all job_configs = { 'configs': { job_utils.DATA_SOURCE_SUBST_NAME: True, job_utils.DATA_SOURCE_SUBST_UUID: True}, 'args': [name_ref, output.id, input.id]} urls = {} ds, nc = job_utils.resolve_data_source_references(job_configs, job_exec_id, urls) self.assertEqual(2, len(ds)) self.assertEqual([input.url, output_url, input.url], nc['args']) # Substitution not enabled job_configs['configs'] = {job_utils.DATA_SOURCE_SUBST_NAME: False, job_utils.DATA_SOURCE_SUBST_UUID: False} ds, nc = job_utils.resolve_data_source_references(job_configs, job_exec_id, {}) self.assertEqual(0, len(ds)) self.assertEqual(job_configs['args'], nc['args']) self.assertEqual(job_configs['configs'], nc['configs']) # Substitution enabled but no values to modify job_configs['configs'] = {job_utils.DATA_SOURCE_SUBST_NAME: True, job_utils.DATA_SOURCE_SUBST_UUID: True} job_configs['args'] = ['val1', 'val2', 'val3'] ds, nc = job_utils.resolve_data_source_references(job_configs, job_exec_id, {}) self.assertEqual(0, len(ds)) self.assertEqual(nc['args'], job_configs['args']) self.assertEqual(nc['configs'], job_configs['configs']) def test_to_url_dict(self): data_source_urls = {'1': ('1_native', '1_runtime'), '2': ('2_native', '2_runtime')} self.assertItemsEqual({'1': '1_native', '2': '2_native'}, job_utils.to_url_dict(data_source_urls)) self.assertItemsEqual({'1': '1_runtime', '2': '2_runtime'}, job_utils.to_url_dict(data_source_urls, runtime=True)) @mock.patch('sahara.service.edp.hdfs_helper.configure_cluster_for_hdfs') def test_prepare_cluster_for_ds(self, configure): data_source_urls = {'1': '1_runtime', '2': '2_runtime'} data_source = mock.Mock() data_source.type = 'hdfs' 
data_source.id = '1' cluster = mock.Mock() job_configs = mock.Mock() job_utils.prepare_cluster_for_ds([data_source], cluster, job_configs, data_source_urls) configure.assert_called_once() configure.assert_called_with(cluster, '1_runtime') sahara-12.0.0/sahara/tests/unit/service/test_networks.py0000664000175000017500000001176213656752032023400 0ustar zuulzuul00000000000000# Copyright (c) 2015 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from unittest import mock from sahara.service import networks from sahara.tests.unit import base class TestNetworks(base.SaharaTestCase): @mock.patch('sahara.service.networks.conductor.instance_update') @mock.patch('sahara.utils.openstack.nova.get_instance_info') def test_init_instances_ips_with_floating(self, nova, upd): server = mock.Mock() server.addresses = { 'network': [ { 'version': 4, 'OS-EXT-IPS:type': 'fixed', 'addr': '10.2.2.2' }, { 'version': 4, 'OS-EXT-IPS:type': 'floating', 'addr': '172.1.1.1' } ] } nova.return_value = server self.assertEqual('172.1.1.1', networks.init_instances_ips(mock.Mock())) @mock.patch('sahara.service.networks.conductor.instance_update') @mock.patch('sahara.utils.openstack.nova.get_instance_info') def test_init_instances_ips_without_floating(self, nova, upd): self.override_config('use_floating_ips', False) server = mock.Mock() server.addresses = { 'network': [ { 'version': 4, 'OS-EXT-IPS:type': 'fixed', 'addr': '10.2.2.2' } ] } nova.return_value = server self.assertEqual('10.2.2.2', networks.init_instances_ips(mock.Mock())) @mock.patch('sahara.service.networks.conductor.instance_update') @mock.patch('sahara.utils.openstack.nova.get_instance_info') def test_init_instances_ips_with_proxy(self, nova, upd): instance = mock.Mock() instance.cluster.has_proxy_gateway.return_value = True instance.node_group.is_proxy_gateway = False server = mock.Mock() server.addresses = { 'network': [ { 'version': 4, 'OS-EXT-IPS:type': 'fixed', 'addr': '10.2.2.2' } ] } nova.return_value = server self.assertEqual('10.2.2.2', networks.init_instances_ips(instance)) @mock.patch('sahara.service.networks.conductor.instance_update') @mock.patch('sahara.utils.openstack.nova.get_instance_info') def test_init_instances_ips_neutron_with_floating( self, nova, upd): server = mock.Mock(id='serv_id') server.addresses = { 'network': [ { 'version': 4, 'OS-EXT-IPS:type': 'floating', 'addr': '172.1.1.1' }, { 'version': 4, 'OS-EXT-IPS:type': 'fixed', 'addr': '10.2.2.2' } ] } nova.return_value = server self.assertEqual('172.1.1.1', networks.init_instances_ips(mock.Mock())) @mock.patch('sahara.service.networks.conductor.instance_update') @mock.patch('sahara.utils.openstack.nova.get_instance_info') def test_init_instances_ips_neutron_without_floating( self, nova, upd): self.override_config('use_floating_ips', False) server = mock.Mock(id='serv_id') server.addresses = { 'network': [ { 'version': 4, 'OS-EXT-IPS:type': 'fixed', 'addr': '10.2.2.2' } ] } nova.return_value = server self.assertEqual('10.2.2.2', networks.init_instances_ips(mock.Mock())) 
@mock.patch('sahara.service.networks.conductor.instance_update') @mock.patch('sahara.utils.openstack.nova.get_instance_info') def test_init_instances_ips_with_ipv6_subnet(self, nova, upd): self.override_config('use_floating_ips', False) instance = mock.Mock() server = mock.Mock() server.addresses = { 'network': [ { 'version': 6, 'OS-EXT-IPS:type': 'fixed', 'addr': 'fe80::1234:5678:9abc:def0' }, { 'version': 4, 'OS-EXT-IPS:type': 'fixed', 'addr': '10.2.2.2' } ] } nova.return_value = server self.assertEqual('10.2.2.2', networks.init_instances_ips(instance)) sahara-12.0.0/sahara/tests/unit/service/test_ops.py0000664000175000017500000001757613656752032022336 0ustar zuulzuul00000000000000# Copyright (c) 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from unittest import mock from sahara.plugins import base as base_plugins from sahara.service import ops from sahara.tests.unit import base from sahara.utils import cluster as c_u class FakeCluster(object): id = 'id' status = "Some_status" name = "Fake_cluster" class FakeNodeGroup(object): id = 'id' count = 2 instances = [{'instance_name': 'id-10', 'id': 2}, {'instance_name': 'id-2', 'id': 1}] class FakePlugin(mock.Mock): node_groups = [FakeNodeGroup()] is_transient = False def update_infra(self, cluster): TestOPS.SEQUENCE.append('update_infra') def configure_cluster(self, cluster): TestOPS.SEQUENCE.append('configure_cluster') def start_cluster(self, cluster): TestOPS.SEQUENCE.append('start_cluster') def on_terminate_cluster(self, cluster): TestOPS.SEQUENCE.append('on_terminate_cluster') def decommission_nodes(self, cluster, instances_to_delete): TestOPS.SEQUENCE.append('decommission_nodes') def scale_cluster(self, cluster, node_group_id_map, node_group_instance_map=None): TestOPS.SEQUENCE.append('plugin.scale_cluster') def cluster_destroy(self, ctx, cluster): TestOPS.SEQUENCE.append('cluster_destroy') class FakeINFRA(object): def create_cluster(self, cluster): TestOPS.SEQUENCE.append('create_cluster') def scale_cluster(self, cluster, node_group_id_map, node_group_instance_map=None): TestOPS.SEQUENCE.append('INFRA.scale_cluster') return True def shutdown_cluster(self, cluster, force): TestOPS.SEQUENCE.append('shutdown_cluster') def rollback_cluster(self, cluster, reason): TestOPS.SEQUENCE.append('rollback_cluster') class TestOPS(base.SaharaWithDbTestCase): SEQUENCE = [] @mock.patch('sahara.service.ops._refresh_health_for_cluster') @mock.patch('sahara.utils.cluster.change_cluster_status_description', return_value=FakeCluster()) @mock.patch('sahara.service.ops._update_sahara_info') @mock.patch('sahara.service.ops._prepare_provisioning', return_value=(mock.Mock(), mock.Mock(), FakePlugin())) @mock.patch('sahara.utils.cluster.change_cluster_status') @mock.patch('sahara.conductor.API.cluster_get') @mock.patch('sahara.service.ops.CONF') @mock.patch('sahara.service.trusts.create_trust_for_cluster') @mock.patch('sahara.conductor.API.job_execution_get_all') @mock.patch('sahara.service.edp.job_manager.run_job') def test_provision_cluster(self, 
p_run_job, p_job_exec, p_create_trust, p_conf, p_cluster_get, p_change_status, p_prep_provisioning, p_update_sahara_info, p_change_cluster_status_desc, refresh): del self.SEQUENCE[:] ops.INFRA = FakeINFRA() ops._provision_cluster('123') # checking that order of calls is right self.assertEqual(['update_infra', 'create_cluster', 'configure_cluster', 'start_cluster'], self.SEQUENCE, 'Order of calls is wrong') self.assertEqual(1, refresh.call_count) @mock.patch('sahara.service.ops._refresh_health_for_cluster') @mock.patch('sahara.service.ntp_service.configure_ntp') @mock.patch('sahara.service.ops.CONF') @mock.patch('sahara.service.ops._prepare_provisioning', return_value=(mock.Mock(), mock.Mock(), FakePlugin())) @mock.patch('sahara.utils.cluster.change_cluster_status', return_value=FakePlugin()) @mock.patch('sahara.utils.cluster.get_instances') def test_provision_scaled_cluster(self, p_get_instances, p_change_status, p_prep_provisioning, p_conf, p_ntp, refresh): del self.SEQUENCE[:] ops.INFRA = FakeINFRA() p_conf.use_identity_api_v3 = True ops._provision_scaled_cluster('123', {'id': 1}) # checking that order of calls is right self.assertEqual(['decommission_nodes', 'INFRA.scale_cluster', 'plugin.scale_cluster'], self.SEQUENCE, 'Order of calls is wrong') self.assertEqual(1, refresh.call_count) @mock.patch('sahara.service.ops._setup_trust_for_cluster') @mock.patch('sahara.service.ops.CONF') @mock.patch('sahara.service.trusts.delete_trust_from_cluster') @mock.patch('sahara.context.ctx') def test_terminate_cluster(self, p_ctx, p_delete_trust, p_conf, p_set): del self.SEQUENCE[:] base_plugins.PLUGINS = FakePlugin() base_plugins.PLUGINS.get_plugin.return_value = FakePlugin() ops.INFRA = FakeINFRA() ops.conductor = FakePlugin() ops.terminate_cluster('123') # checking that order of calls is right self.assertEqual(['on_terminate_cluster', 'shutdown_cluster', 'cluster_destroy'], self.SEQUENCE, 'Order of calls is wrong') @mock.patch('sahara.utils.cluster.change_cluster_status_description') @mock.patch('sahara.service.ops._prepare_provisioning') @mock.patch('sahara.utils.cluster.change_cluster_status') @mock.patch('sahara.service.ops._rollback_cluster') @mock.patch('sahara.conductor.API.cluster_get') def test_ops_error_hadler_success_rollback( self, p_cluster_get, p_rollback_cluster, p_change_cluster_status, p__prepare_provisioning, p_change_cluster_status_desc): # Test scenario: failed scaling -> success_rollback fake_cluster = FakeCluster() p_change_cluster_status_desc.return_value = FakeCluster() p_rollback_cluster.return_value = True p_cluster_get.return_value = fake_cluster p__prepare_provisioning.side_effect = ValueError('error1') expected = [ mock.call(fake_cluster, c_u.CLUSTER_STATUS_ACTIVE, 'Scaling cluster failed for the following ' 'reason(s): error1') ] ops._provision_scaled_cluster(fake_cluster.id, {'id': 1}) self.assertEqual(expected, p_change_cluster_status.call_args_list) @mock.patch('sahara.utils.cluster.change_cluster_status_description') @mock.patch('sahara.service.ops._prepare_provisioning') @mock.patch('sahara.utils.cluster.change_cluster_status') @mock.patch('sahara.service.ops._rollback_cluster') @mock.patch('sahara.conductor.API.cluster_get') def test_ops_error_hadler_failed_rollback( self, p_cluster_get, p_rollback_cluster, p_change_cluster_status, p__prepare_provisioning, p_change_cluster_status_desc): # Test scenario: failed scaling -> failed_rollback fake_cluster = FakeCluster() p_change_cluster_status_desc.return_value = FakeCluster() p__prepare_provisioning.side_effect = 
ValueError('error1') p_rollback_cluster.side_effect = ValueError('error2') p_cluster_get.return_value = fake_cluster expected = [ mock.call( fake_cluster, 'Error', 'Scaling cluster failed for the ' 'following reason(s): error1, error2') ] ops._provision_scaled_cluster(fake_cluster.id, {'id': 1}) self.assertEqual(expected, p_change_cluster_status.call_args_list) sahara-12.0.0/sahara/tests/unit/service/heat/0000775000175000017500000000000013656752227021033 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/heat/__init__.py0000664000175000017500000000000013656752032023124 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/tests/unit/service/heat/test_templates.py0000664000175000017500000003257113656752032024444 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from unittest import mock from sahara.conductor import resource as r from sahara.service.heat import templates as h from sahara.tests.unit import base from sahara.tests.unit import testutils as tu class BaseTestClusterTemplate(base.SaharaWithDbTestCase): """Checks valid structure of Resources section in generated Heat templates. 1. It checks templates generation with OpenStack network installation: Neutron. 2. Cinder volume attachments. 3. Basic instances creations with multi line user data provided. 4. Anti-affinity feature with proper nova scheduler hints included into Heat templates. 
""" def _make_node_groups(self, floating_ip_pool=None, volume_type=None): ng1 = tu.make_ng_dict('master', 42, ['namenode'], 1, floating_ip_pool=floating_ip_pool, image_id=None, volumes_per_node=0, volumes_size=0, id="1", image_username='root', volume_type=None, boot_from_volume=False, auto_security_group=True) ng2 = tu.make_ng_dict('worker', 42, ['datanode'], 1, floating_ip_pool=floating_ip_pool, image_id=None, volumes_per_node=2, volumes_size=10, id="2", image_username='root', volume_type=volume_type, boot_from_volume=False, auto_security_group=True) return ng1, ng2 def _make_cluster(self, mng_network, ng1, ng2, anti_affinity=None, domain_name=None): return tu.create_cluster("cluster", "tenant1", "general", "2.6.0", [ng1, ng2], user_keypair_id='user_key', neutron_management_network=mng_network, default_image_id='1', image_id=None, anti_affinity=anti_affinity or [], domain_name=domain_name, anti_affinity_ratio=1) class TestClusterTemplate(BaseTestClusterTemplate): def _make_heat_template(self, cluster, ng1, ng2): heat_template = h.ClusterStack(cluster) heat_template.add_node_group_extra(ng1['id'], 1, get_ud_generator('line1\nline2')) heat_template.add_node_group_extra(ng2['id'], 1, get_ud_generator('line2\nline3')) return heat_template def test_get_anti_affinity_scheduler_hints(self): ng1, ng2 = self._make_node_groups('floating') cluster = self._make_cluster('private_net', ng1, ng2, anti_affinity=["datanode"]) heat_template = self._make_heat_template(cluster, ng1, ng2) ng1 = [ng for ng in cluster.node_groups if ng.name == "master"][0] ng2 = [ng for ng in cluster.node_groups if ng.name == "worker"][0] expected = { "scheduler_hints": { "group": { "get_param": [h.SERVER_GROUP_NAMES, {"get_param": "instance_index"}] } } } actual = heat_template._get_anti_affinity_scheduler_hints(ng2) self.assertEqual(expected, actual) expected = {} actual = heat_template._get_anti_affinity_scheduler_hints(ng1) self.assertEqual(expected, actual) def test_get_security_groups(self): ng1, ng2 = self._make_node_groups('floating') ng1['security_groups'] = ['1', '2'] ng1['auto_security_group'] = False ng2['security_groups'] = ['3', '4'] ng2['auto_security_group'] = True cluster = self._make_cluster('private_net', ng1, ng2) heat_template = self._make_heat_template(cluster, ng1, ng2) ng1 = [ng for ng in cluster.node_groups if ng.name == "master"][0] ng2 = [ng for ng in cluster.node_groups if ng.name == "worker"][0] expected = ['1', '2'] actual = heat_template._get_security_groups(ng1) self.assertEqual(expected, actual) expected = ['3', '4', {'get_param': 'autosecgroup'}] actual = heat_template._get_security_groups(ng2) self.assertEqual(expected, actual) def test_get_security_groups_empty(self): ng1, _ = self._make_node_groups() ng1['security_groups'] = None ng1['auto_security_group'] = False cluster = self._make_cluster('private_net', ng1, ng1) heat_template = self._make_heat_template(cluster, ng1, ng1) ng1 = [ng for ng in cluster.node_groups if ng.name == "master"][0] actual = heat_template._get_security_groups(ng1) self.assertEqual([], actual) def _generate_auto_security_group_template(self): ng1, ng2 = self._make_node_groups('floating') cluster = self._make_cluster('private_net', ng1, ng2) ng1['cluster'] = cluster ng2['cluster'] = cluster ng1 = r.NodeGroupResource(ng1) ng2 = r.NodeGroupResource(ng2) heat_template = self._make_heat_template(cluster, ng1, ng2) return heat_template._serialize_auto_security_group(ng1) @mock.patch('sahara.utils.openstack.neutron.get_private_network_cidrs') def 
test_serialize_auto_security_group_neutron(self, patched): ipv4_cidr = '192.168.0.0/24' ipv6_cidr = 'fe80::/64' patched.side_effect = lambda cluster: [ipv4_cidr, ipv6_cidr] expected_rules = [ ('0.0.0.0/0', 'IPv4', 'tcp', '22', '22'), ('::/0', 'IPv6', 'tcp', '22', '22'), (ipv4_cidr, 'IPv4', 'tcp', '1', '65535'), (ipv4_cidr, 'IPv4', 'udp', '1', '65535'), (ipv4_cidr, 'IPv4', 'icmp', '0', '255'), (ipv6_cidr, 'IPv6', 'tcp', '1', '65535'), (ipv6_cidr, 'IPv6', 'udp', '1', '65535'), (ipv6_cidr, 'IPv6', 'icmp', '0', '255'), ] expected = {'cluster-master-1': { 'type': 'OS::Neutron::SecurityGroup', 'properties': { 'description': 'Data Processing Cluster by Sahara\n' 'Sahara cluster name: cluster\n' 'Sahara engine: heat.3.0\n' 'Auto security group for Sahara Node ' 'Group: master', 'rules': [{ 'remote_ip_prefix': rule[0], 'ethertype': rule[1], 'protocol': rule[2], 'port_range_min': rule[3], 'port_range_max': rule[4] } for rule in expected_rules] } }} actual = self._generate_auto_security_group_template() self.assertEqual(expected, actual) @mock.patch("sahara.conductor.objects.Cluster.use_designate_feature") def test_serialize_designate_records(self, mock_use_designate): ng1, ng2 = self._make_node_groups('floating') cluster = self._make_cluster('private_net', ng1, ng2, domain_name='domain.org.') mock_use_designate.return_value = False heat_template = self._make_heat_template(cluster, ng1, ng2) expected = {} actual = heat_template._serialize_designate_records() self.assertEqual(expected, actual) mock_use_designate.return_value = True heat_template = self._make_heat_template(cluster, ng1, ng2) expected = { 'internal_designate_record': { 'properties': { 'domain': 'domain.org.', 'name': { 'list_join': [ '.', [{'get_attr': ['inst', 'name']}, 'domain.org.']] }, 'data': {'get_attr': ['inst', 'networks', 'private', 0]}, 'type': 'A' }, 'type': 'OS::Designate::Record' }, 'external_designate_record': { 'properties': { 'domain': 'domain.org.', 'name': { 'list_join': [ '.', [{'get_attr': ['inst', 'name']}, 'domain.org.']] }, 'data': {'get_attr': ['floating_ip', 'ip']}, 'type': 'A' }, 'type': 'OS::Designate::Record' } } actual = heat_template._serialize_designate_records() self.assertEqual(expected, actual) @mock.patch("sahara.conductor.objects.Cluster.use_designate_feature") def test_serialize_designate_reversed_records(self, mock_use_designate): def _generate_reversed_ip(ip): return { 'list_join': [ '.', [ {'str_split': ['.', ip, 3]}, {'str_split': ['.', ip, 2]}, {'str_split': ['.', ip, 1]}, {'str_split': ['.', ip, 0]}, 'in-addr.arpa.' 
] ] } ng1, ng2 = self._make_node_groups('floating') cluster = self._make_cluster('private_net', ng1, ng2, domain_name='domain.org.') mock_use_designate.return_value = False heat_template = self._make_heat_template(cluster, ng1, ng2) expected = {} actual = heat_template._serialize_designate_reverse_records() self.assertEqual(expected, actual) mock_use_designate.return_value = True heat_template = self._make_heat_template(cluster, ng1, ng2) expected = { 'internal_designate_reverse_record': { 'properties': { 'domain': 'in-addr.arpa.', 'name': _generate_reversed_ip( {'get_attr': ['inst', 'networks', 'private', 0]}), 'data': { 'list_join': [ '.', [{'get_attr': ['inst', 'name']}, 'domain.org.']] }, 'type': 'PTR' }, 'type': 'OS::Designate::Record' }, 'external_designate_reverse_record': { 'properties': { 'domain': 'in-addr.arpa.', 'name': _generate_reversed_ip( {'get_attr': ['floating_ip', 'ip']}), 'data': { 'list_join': [ '.', [{'get_attr': ['inst', 'name']}, 'domain.org.']] }, 'type': 'PTR' }, 'type': 'OS::Designate::Record' } } actual = heat_template._serialize_designate_reverse_records() self.assertEqual(expected, actual) class TestClusterTemplateWaitCondition(BaseTestClusterTemplate): def _make_heat_template(self, cluster, ng1, ng2): heat_template = h.ClusterStack(cluster) heat_template.add_node_group_extra(ng1.id, 1, get_ud_generator('line1\nline2')) heat_template.add_node_group_extra(ng2.id, 1, get_ud_generator('line2\nline3')) return heat_template def setUp(self): super(TestClusterTemplateWaitCondition, self).setUp() _ng1, _ng2 = self._make_node_groups("floating") _cluster = self._make_cluster("private_net", _ng1, _ng2) _ng1["cluster"] = _ng2["cluster"] = _cluster self.ng1 = mock.Mock() self.ng1.configure_mock(**_ng1) self.ng2 = mock.Mock() self.ng2.configure_mock(**_ng2) self.cluster = mock.Mock() self.cluster.configure_mock(**_cluster) self.template = self._make_heat_template(self.cluster, self.ng1, self.ng2) @mock.patch('sahara.utils.cluster.etc_hosts_entry_for_service') def test_use_wait_condition(self, etc_hosts): etc_hosts.return_value = "data" self.override_config('heat_enable_wait_condition', True) instance = self.template._serialize_instance(self.ng1) expected_wc_handle = { "type": "OS::Heat::WaitConditionHandle" } expected_wc_waiter = { "type": "OS::Heat::WaitCondition", "depends_on": "inst", "properties": { "timeout": 3600, "handle": {"get_resource": "master-wc-handle"} } } self.assertEqual(expected_wc_handle, instance["master-wc-handle"]) self.assertEqual(expected_wc_waiter, instance["master-wc-waiter"]) def get_ud_generator(s): def generator(*args, **kwargs): return s return generator sahara-12.0.0/sahara/tests/README.rst0000664000175000017500000000034013656752032017151 0ustar zuulzuul00000000000000===================== Sahara Testing Infra ===================== This README file attempts to provide current and prospective contributors with everything they need to know in order to start creating unit tests for Sahara. sahara-12.0.0/sahara/plugins/0000775000175000017500000000000013656752227016012 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/plugins/utils.py0000664000175000017500000001645013656752032017524 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import itertools from oslo_utils import netutils from six.moves.urllib import parse as urlparse from sahara.i18n import _ from sahara.plugins import base as plugins_base from sahara.plugins import exceptions as ex from sahara.utils import api_validator from sahara.utils import cluster as cluster_utils from sahara.utils import cluster_progress_ops as ops from sahara.utils import configs as sahara_configs from sahara.utils import crypto from sahara.utils import files from sahara.utils import general from sahara.utils.openstack import nova from sahara.utils import poll_utils from sahara.utils import proxy from sahara.utils import remote from sahara.utils import rpc from sahara.utils import types from sahara.utils import xmlutils event_wrapper = ops.event_wrapper def get_node_groups(cluster, node_process=None, **kwargs): return [ng for ng in cluster.node_groups if (node_process is None or node_process in ng.node_processes)] def get_instances_count(cluster, node_process=None, **kwargs): return sum([ng.count for ng in get_node_groups(cluster, node_process)]) def get_instances(cluster, node_process=None, **kwargs): nodes = get_node_groups(cluster, node_process) return list(itertools.chain(*[node.instances for node in nodes])) def get_instance(cluster, node_process, **kwargs): instances = get_instances(cluster, node_process) if len(instances) > 1: raise ex.InvalidComponentCountException( node_process, _('0 or 1'), len(instances)) return instances[0] if instances else None def generate_host_names(nodes, **kwargs): return "\n".join([n.hostname() for n in nodes]) def generate_fqdn_host_names(nodes, **kwargs): return "\n".join([n.fqdn() for n in nodes]) def get_port_from_address(address, **kwargs): parse_result = urlparse.urlparse(address) # urlparse do not parse values like 0.0.0.0:8000, # netutils do not parse values like http://localhost:8000, # so combine approach is using if parse_result.port: return parse_result.port else: return netutils.parse_host_port(address)[1] def instances_with_services(instances, node_processes, **kwargs): node_processes = set(node_processes) return list(filter( lambda x: node_processes.intersection( x.node_group.node_processes), instances)) def start_process_event_message(process, **kwargs): return _("Start the following process(es): {process}").format( process=process) def get_config_value_or_default( service=None, name=None, cluster=None, config=None, **kwargs): if not config: if not service or not name: raise RuntimeError(_("Unable to retrieve config details")) default_value = None else: service = config.applicable_target name = config.name default_value = config.default_value cluster_configs = cluster.cluster_configs if cluster_configs.get(service, {}).get(name, None) is not None: return cluster_configs.get(service, {}).get(name, None) # Try getting config from the cluster. 
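    # (Resolution order for this lookup: explicit cluster-level
    # configuration, then per-node-group configuration, then the default
    # carried by the supplied config object, then the plugin's declared
    # default; a RuntimeError is raised if nothing matches.)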
for ng in cluster.node_groups: if (ng.configuration().get(service) and ng.configuration()[service].get(name)): return ng.configuration()[service][name] # Find and return the default if default_value is not None: return default_value plugin = plugins_base.PLUGINS.get_plugin(cluster.plugin_name) configs = plugin.get_all_configs(cluster.hadoop_version) for config in configs: if config.applicable_target == service and config.name == name: return config.default_value raise RuntimeError(_("Unable to get parameter '%(param_name)s' from " "service %(service)s"), {'param_name': name, 'service': service}) def cluster_get_instances(cluster, instances_ids=None, **kwargs): return cluster_utils.get_instances(cluster, instances_ids) def check_cluster_exists(cluster, **kwargs): return cluster_utils.check_cluster_exists(cluster) def add_provisioning_step(cluster_id, step_name, total, **kwargs): return ops.add_provisioning_step(cluster_id, step_name, total) def add_successful_event(instance, **kwargs): ops.add_successful_event(instance) def add_fail_event(instance, exception, **kwargs): ops.add_fail_event(instance, exception) def merge_configs(config_a, config_b, **kwargs): return sahara_configs.merge_configs(config_a, config_b) def generate_key_pair(key_length=2048, **kwargs): return crypto.generate_key_pair(key_length) def get_file_text(file_name, package='sahara', **kwargs): return files.get_file_text(file_name, package) def try_get_file_text(file_name, package='sahara', **kwargs): return files.try_get_file_text(file_name, package) def get_by_id(lst, id, **kwargs): return general.get_by_id(lst, id) def natural_sort_key(s, **kwargs): return general.natural_sort_key(s) def get_flavor(**kwargs): return nova.get_flavor(**kwargs) def poll(get_status, kwargs=None, args=None, operation_name=None, timeout_name=None, timeout=poll_utils.DEFAULT_TIMEOUT, sleep=poll_utils.DEFAULT_SLEEP_TIME, exception_strategy='raise'): poll_utils.poll(get_status, kwargs=kwargs, args=args, operation_name=operation_name, timeout_name=timeout_name, timeout=timeout, sleep=sleep, exception_strategy=exception_strategy) def plugin_option_poll(cluster, get_status, option, operation_name, sleep_time, kwargs): poll_utils.plugin_option_poll(cluster, get_status, option, operation_name, sleep_time, kwargs) def create_proxy_user_for_cluster(cluster, **kwargs): return proxy.create_proxy_user_for_cluster(cluster) def get_remote(instance, **kwargs): return remote.get_remote(instance) def rpc_setup(service_name, **kwargs): rpc.setup(service_name) def transform_to_num(s, **kwargs): return types.transform_to_num(s) def is_int(s, **kwargs): return types.is_int(s) def parse_hadoop_xml_with_name_and_value(data, **kwargs): return xmlutils.parse_hadoop_xml_with_name_and_value(data) def create_hadoop_xml(configs, config_filter=None, **kwargs): return xmlutils.create_hadoop_xml(configs, config_filter) def create_elements_xml(configs, **kwargs): return xmlutils.create_elements_xml(configs) def load_hadoop_xml_defaults(file_name, package, **kwargs): return xmlutils.load_hadoop_xml_defaults(file_name, package) def get_property_dict(elem, **kwargs): return xmlutils.get_property_dict(elem) class PluginsApiValidator(api_validator.ApiValidator): def __init__(self, schema, **kwargs): super(PluginsApiValidator, self).__init__(schema) sahara-12.0.0/sahara/plugins/objects.py0000664000175000017500000000127613656752032020015 0ustar zuulzuul00000000000000# Copyright (c) 2018 Red Hat, Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from sahara.conductor import objects def is_object_instance(target): return isinstance(target, objects.Instance) sahara-12.0.0/sahara/plugins/__init__.py0000664000175000017500000000000013656752032020103 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/plugins/castellan_utils.py0000664000175000017500000000163713656752032021553 0ustar zuulzuul00000000000000# Copyright (c) 2018 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from sahara.service.castellan import utils as castellan_utils def delete_secret(id, ctx=None, **kwargs): castellan_utils.delete_secret(id, ctx=ctx) def get_secret(id, ctx=None, **kwargs): return castellan_utils.get_secret(id, ctx=ctx) def store_secret(secret, ctx=None, **kwargs): return castellan_utils.store_secret(secret) sahara-12.0.0/sahara/plugins/fake/0000775000175000017500000000000013656752227016720 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/plugins/fake/__init__.py0000664000175000017500000000000013656752032021011 0ustar zuulzuul00000000000000sahara-12.0.0/sahara/plugins/fake/edp_engine.py0000664000175000017500000000346113656752032021365 0ustar zuulzuul00000000000000# Copyright (c) 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from sahara.service.edp import base_engine from sahara.service.validations.edp import job_execution as j from sahara.utils import edp class FakeJobEngine(base_engine.JobEngine): def cancel_job(self, job_execution): pass def get_job_status(self, job_execution): pass def run_job(self, job_execution): return 'engine_job_id', edp.JOB_STATUS_SUCCEEDED, None def run_scheduled_job(self, job_execution): pass def validate_job_execution(self, cluster, job, data): if job.type == edp.JOB_TYPE_SHELL: return # All other types except Java require input and output # objects and Java require main class if job.type in [edp.JOB_TYPE_JAVA, edp.JOB_TYPE_SPARK]: j.check_main_class_present(data, job) else: j.check_data_sources(data, job) job_type, subtype = edp.split_job_type(job.type) if job_type == edp.JOB_TYPE_MAPREDUCE and ( subtype == edp.JOB_SUBTYPE_STREAMING): j.check_streaming_present(data, job) @staticmethod def get_possible_job_config(job_type): return None @staticmethod def get_supported_job_types(): return edp.JOB_TYPES_ALL sahara-12.0.0/sahara/plugins/fake/plugin.py0000664000175000017500000001125013656752032020561 0ustar zuulzuul00000000000000# Copyright (c) 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from sahara import context from sahara.i18n import _ from sahara.plugins import exceptions as pex from sahara.plugins.fake import edp_engine from sahara.plugins import kerberos as krb from sahara.plugins import provisioning as p from sahara.plugins import utils as plugin_utils class FakePluginProvider(p.ProvisioningPluginBase): def get_title(self): return "Fake Plugin" def get_description(self): return _("It's a fake plugin that aimed to work on the CirrOS images. " "It doesn't install Hadoop. 
It's needed to be able to test " "provisioning part of Sahara codebase itself.") def get_versions(self): return ["0.1"] def get_labels(self): return { 'plugin_labels': { 'enabled': {'status': True}, 'hidden': {'status': True}, }, 'version_labels': { '0.1': {'enabled': {'status': True}} } } def get_node_processes(self, hadoop_version): return { "HDFS": ["namenode", "datanode"], "MapReduce": ["tasktracker", "jobtracker"], "Kerberos": [], } def get_configs(self, hadoop_version): # returning kerberos configs return krb.get_config_list() def configure_cluster(self, cluster): with context.ThreadGroup() as tg: for instance in plugin_utils.get_instances(cluster): tg.spawn('fake-write-%s' % instance.id, self._write_ops, instance) def start_cluster(self, cluster): self.deploy_kerberos(cluster) with context.ThreadGroup() as tg: for instance in plugin_utils.get_instances(cluster): tg.spawn('fake-check-%s' % instance.id, self._check_ops, instance) def deploy_kerberos(self, cluster): all_instances = plugin_utils.get_instances(cluster) namenodes = plugin_utils.get_instances(cluster, 'namenode') server = None if len(namenodes) > 0: server = namenodes[0] elif len(all_instances) > 0: server = all_instances[0] if server: krb.deploy_infrastructure(cluster, server) def scale_cluster(self, cluster, instances): with context.ThreadGroup() as tg: for instance in instances: tg.spawn('fake-scaling-%s' % instance.id, self._all_check_ops, instance) def decommission_nodes(self, cluster, instances): pass def _write_ops(self, instance): with instance.remote() as r: # check typical SSH command r.execute_command('echo "Hello, world!"') # check write file data_1 = "sp@m" r.write_file_to('test_data', data_1, run_as_root=True) # check append file data_2 = " and eggs" r.append_to_file('test_data', data_2, run_as_root=True) # check replace string r.replace_remote_string('test_data', "eggs", "pony") def _check_ops(self, instance): expected_data = "sp@m and pony" with instance.remote() as r: actual_data = r.read_file_from('test_data', run_as_root=True) if actual_data.strip() != expected_data.strip(): raise pex.HadoopProvisionError("ACTUAL:\n%s\nEXPECTED:\n%s" % ( actual_data, expected_data)) def _all_check_ops(self, instance): self._write_ops(instance) self._check_ops(instance) def get_edp_engine(self, cluster, job_type): if job_type in edp_engine.FakeJobEngine.get_supported_job_types(): return edp_engine.FakeJobEngine() def get_edp_job_types(self, versions=None): res = {} for vers in self.get_versions(): if not versions or vers in versions: res[vers] = edp_engine.FakeJobEngine.get_supported_job_types() return res def get_edp_config_hints(self, job_type, version): if version in self.get_versions(): return edp_engine.FakeJobEngine.get_possible_job_config(job_type) sahara-12.0.0/sahara/plugins/opts.py0000664000175000017500000000164413656752032017350 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
# File contains plugins opts to avoid cyclic imports issue from oslo_config import cfg opts = [ cfg.ListOpt('plugins', default=['vanilla', 'spark', 'cdh', 'ambari', 'storm', 'mapr'], help='List of plugins to be loaded. Sahara preserves the ' 'order of the list when returning it.'), ] CONF = cfg.CONF CONF.register_opts(opts) sahara-12.0.0/sahara/plugins/resource.py0000664000175000017500000000171513656752032020211 0ustar zuulzuul00000000000000# Copyright (c) 2018 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from sahara.conductor import resource def is_resource_instance(target, **kwargs): return isinstance(target, resource.Resource) def create_node_group_resource(data, **kwargs): return resource.NodeGroupResource(data) def create_cluster_resource(data, **kwargs): return resource.ClusterResource(data) def create_resource(data, **kwargs): return resource.Resource(data) sahara-12.0.0/sahara/plugins/images.py0000664000175000017500000012766313656752032017642 0ustar zuulzuul00000000000000# Copyright (c) 2016 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_utils import uuidutils import abc import collections import copy import functools import itertools from os import path import jsonschema import six import yaml from sahara import exceptions as ex from sahara.i18n import _ from sahara.plugins import exceptions as p_ex from sahara.plugins import utils def transform_exception(from_type, to_type, transform_func=None): """Decorator to transform exception types. :param from_type: The type of exception to catch and transform. :param to_type: The type of exception to raise instead. :param transform_func: A function to transform from_type into to_type, which must be of the form func(exc, to_type). Defaults to: lambda exc, new_type: new_type(exc.message) """ if not transform_func: transform_func = lambda exc, new_type: new_type(exc.message) def decorator(func): @functools.wraps(func) def handler(*args, **kwargs): try: func(*args, **kwargs) except from_type as exc: raise transform_func(exc, to_type) return handler return decorator def validate_instance(instance, validators, test_only=False, **kwargs): """Runs all validators against the specified instance. :param instance: An instance to validate. :param validators: A sequence of ImageValidators. :param test_only: If true, all validators will only verify that a desired state is present, and fail if it is not. If false, all validators will attempt to enforce the desired state if possible, and succeed if this enforcement succeeds. 
:raises ImageValidationError: If validation fails. """ with instance.remote() as remote: for validator in validators: validator.validate(remote, test_only=test_only, **kwargs) class ImageArgument(object): """An argument used by an image manifest.""" SPEC_SCHEMA = { "type": "object", "items": { "type": "object", "properties": { "target_variable": { "type": "string", "minLength": 1 }, "description": { "type": "string", "minLength": 1 }, "default": { "type": "string", "minLength": 1 }, "required": { "type": "boolean", "minLength": 1 }, "choices": { "type": "array", "minLength": 1, "items": { "type": "string" } } } } } @classmethod def from_spec(cls, spec): """Constructs and returns a set of arguments from a specification. :param spec: The specification for the argument set. :return: A dict of arguments built to the specification. """ jsonschema.validate(spec, cls.SPEC_SCHEMA) arguments = {name: cls(name, arg.get('description'), arg.get('default'), arg.get('required'), arg.get('choices')) for name, arg in six.iteritems(spec)} reserved_names = ['distro', 'test_only'] for name, arg in six.iteritems(arguments): if name in reserved_names: raise p_ex.ImageValidationSpecificationError( _("The following argument names are reserved: " "{names}").format(reserved_names)) if not arg.default and not arg.required: raise p_ex.ImageValidationSpecificationError( _("Argument {name} is not required and must specify a " "default value.").format(name=arg.name)) if arg.choices and arg.default and arg.default not in arg.choices: raise p_ex.ImageValidationSpecificationError( _("Argument {name} specifies a default which is not one " "of its choices.").format(name=arg.name)) return arguments def __init__(self, name, description=None, default=None, required=False, choices=None): self.name = name self.description = description self.default = default self.required = required self.choices = choices @six.add_metaclass(abc.ABCMeta) class ImageValidator(object): """Validates the image spawned to an instance via a set of rules.""" @abc.abstractmethod def validate(self, remote, test_only=False, **kwargs): """Validates the image. :param remote: A remote socket to the instance. :param test_only: If true, all validators will only verify that a desired state is present, and fail if it is not. If false, all validators will attempt to enforce the desired state if possible, and succeed if this enforcement succeeds. :raises ImageValidationError: If validation fails. """ pass @six.add_metaclass(abc.ABCMeta) class SaharaImageValidatorBase(ImageValidator): """Base class for Sahara's native image validation.""" DISTRO_KEY = 'distro' TEST_ONLY_KEY = 'test_only' ORDERED_VALIDATORS_SCHEMA = { "type": "array", "items": { "type": "object", "minProperties": 1, "maxProperties": 1 } } _DISTRO_FAMILES = { 'centos': 'redhat', 'centos7': 'redhat', 'fedora': 'redhat', 'redhat': 'redhat', 'rhel': 'redhat', 'redhatenterpriseserver': 'redhat', 'ubuntu': 'debian' } @staticmethod def get_validator_map(custom_validator_map=None): """Gets the map of validator name token to validator class. :param custom_validator_map: A map of validator names and classes to add to the ones Sahara provides by default. These will take precedence over the base validators in case of key overlap. :return: A map of validator names and classes. 
""" default_validator_map = { 'package': SaharaPackageValidator, 'script': SaharaScriptValidator, 'copy_script': SaharaCopyScriptValidator, 'any': SaharaAnyValidator, 'all': SaharaAllValidator, 'os_case': SaharaOSCaseValidator, 'argument_case': SaharaArgumentCaseValidator, 'argument_set': SaharaArgumentSetterValidator, } if custom_validator_map: default_validator_map.update(custom_validator_map) return default_validator_map @classmethod def from_yaml(cls, yaml_path, validator_map=None, resource_roots=None, package='sahara'): """Constructs and returns a validator from the provided yaml file. :param yaml_path: The relative path to a yaml file. :param validator_map: A map of validator name to class. :param resource_roots: The roots from which relative paths to resources (scripts and such) will be referenced. Any resource will be pulled from the first path in the list at which a file exists. :return: A SaharaImageValidator built to the yaml specification. """ validator_map = validator_map or {} resource_roots = resource_roots or [] file_text = utils.get_file_text(yaml_path, package) spec = yaml.safe_load(file_text) validator_map = cls.get_validator_map(validator_map) return cls.from_spec(spec, validator_map, resource_roots, package) @classmethod def from_spec(cls, spec, validator_map, resource_roots, package='sahara'): """Constructs and returns a validator from a specification object. :param spec: The specification for the validator. :param validator_map: A map of validator name to class. :param resource_roots: The roots from which relative paths to resources (scripts and such) will be referenced. Any resource will be pulled from the first path in the list at which a file exists. :return: A validator built to the specification. """ pass @classmethod def from_spec_list(cls, specs, validator_map, resource_roots, package='sahara'): """Constructs a list of validators from a list of specifications. :param specs: A list of validator specifications, each of which will be a dict of size 1, where the key represents the validator type and the value respresents its specification. :param validator_map: A map of validator name to class. :param resource_roots: The roots from which relative paths to resources (scripts and such) will be referenced. Any resource will be pulled from the first path in the list at which a file exists. :return: A list of validators. """ validators = [] for spec in specs: validator_class, validator_spec = cls.get_class_from_spec( spec, validator_map) validators.append(validator_class.from_spec( validator_spec, validator_map, resource_roots, package)) return validators @classmethod def get_class_from_spec(cls, spec, validator_map): """Gets the class and specification from a validator dict. :param spec: A validator specification including its type: a dict of size 1, where the key represents the validator type and the value respresents its configuration. :param validator_map: A map of validator name to class. :return: A tuple of validator class and configuration. """ key, value = list(six.iteritems(spec))[0] validator_class = validator_map.get(key, None) if not validator_class: raise p_ex.ImageValidationSpecificationError( _("Validator type %s not found.") % validator_class) return validator_class, value class ValidationAttemptFailed(object): """An object representing a failed validation attempt. Primarily for use by the SaharaAnyValidator, which must aggregate failures for error exposition purposes. 
""" def __init__(self, exception): self.exception = exception def __bool__(self): return False def __nonzero__(self): return False def try_validate(self, remote, test_only=False, image_arguments=None, **kwargs): """Attempts to validate, but returns rather than raising on failure. :param remote: A remote socket to the instance. :param test_only: If true, all validators will only verify that a desired state is present, and fail if it is not. If false, all validators will attempt to enforce the desired state if possible, and succeed if this enforcement succeeds. :param image_arguments: A dictionary of image argument values keyed by argument name. :return: True if successful, ValidationAttemptFailed object if failed. """ try: self.validate( remote, test_only=test_only, image_arguments=image_arguments, **kwargs) return True except p_ex.ImageValidationError as exc: return self.ValidationAttemptFailed(exc) class SaharaImageValidator(SaharaImageValidatorBase): """The root of any tree of SaharaImageValidators. This validator serves as the root of the tree for SaharaImageValidators, and provides any needed initialization (such as distro retrieval.) """ SPEC_SCHEMA = { "title": "SaharaImageValidator", "type": "object", "properties": { "validators": SaharaImageValidatorBase.ORDERED_VALIDATORS_SCHEMA }, "required": ["validators"] } def get_argument_list(self): return [argument for name, argument in six.iteritems(self.arguments)] @classmethod def from_spec(cls, spec, validator_map, resource_roots, package='sahara'): """Constructs and returns a validator from a specification object. :param spec: The specification for the validator: a dict containing the key "validators", which contains a list of validator specifications. :param validator_map: A map of validator name to class. :param resource_roots: The roots from which relative paths to resources (scripts and such) will be referenced. Any resource will be pulled from the first path in the list at which a file exists. :return: A SaharaImageValidator containing all specified validators. """ jsonschema.validate(spec, cls.SPEC_SCHEMA) arguments_spec = spec.get('arguments', {}) arguments = ImageArgument.from_spec(arguments_spec) validators_spec = spec['validators'] validator = SaharaAllValidator.from_spec( validators_spec, validator_map, resource_roots, package) return cls(validator, arguments) def __init__(self, validator, arguments): """Constructor method. :param validator: A SaharaAllValidator containing the specified validators. """ self.validator = validator self.validators = validator.validators self.arguments = arguments @transform_exception(ex.RemoteCommandException, p_ex.ImageValidationError) def validate(self, remote, test_only=False, image_arguments=None, **kwargs): """Attempts to validate the image. Before deferring to contained validators, performs one-time setup steps such as distro discovery. :param remote: A remote socket to the instance. :param test_only: If true, all validators will only verify that a desired state is present, and fail if it is not. If false, all validators will attempt to enforce the desired state if possible, and succeed if this enforcement succeeds. :param image_arguments: A dictionary of image argument values keyed by argument name. :raises ImageValidationError: If validation fails. 
""" argument_values = {} for name, argument in six.iteritems(self.arguments): if name not in image_arguments: if argument.required: raise p_ex.ImageValidationError( _("Argument {name} is required for image " "processing.").format(name=name)) else: argument_values[name] = argument.default else: value = image_arguments[name] choices = argument.choices if choices and value not in choices: raise p_ex.ImageValidationError( _("Value for argument {name} must be one of " "{choices}.").format(name=name, choices=choices)) else: argument_values[name] = value argument_values[self.DISTRO_KEY] = remote.get_os_distrib() self.validator.validate(remote, test_only=test_only, image_arguments=argument_values) class SaharaPackageValidator(SaharaImageValidatorBase): """A validator that checks package installation state on the instance.""" class Package(object): def __init__(self, name, version=None): self.name = name self.version = version def __str__(self): return ("%s-%s" % (self.name, self.version) if self.version else self.name) _SINGLE_PACKAGE_SCHEMA = { "oneOf": [ { "type": "object", "minProperties": 1, "maxProperties": 1, "additionalProperties": { "type": "object", "properties": { "version": { "type": "string", "minLength": 1 }, } }, }, { "type": "string", "minLength": 1 } ] } SPEC_SCHEMA = { "title": "SaharaPackageValidator", "oneOf": [ _SINGLE_PACKAGE_SCHEMA, { "type": "array", "items": _SINGLE_PACKAGE_SCHEMA, "minLength": 1 } ] } @classmethod def _package_from_spec(cls, spec): """Builds a single package object from a specification. :param spec: May be a string or single-length dictionary of name to configuration values. :return: A package object. """ if isinstance(spec, six.string_types): return cls.Package(spec, None) else: package, properties = list(six.iteritems(spec))[0] version = properties.get('version', None) return cls.Package(package, version) @classmethod def from_spec(cls, spec, validator_map, resource_roots, package='sahara'): """Builds a package validator from a specification. :param spec: May be a string, a single-length dictionary of name to configuration values, or a list containing any number of either or both of the above. Configuration values may include: version: The version of the package to check and/or install. :param validator_map: A map of validator name to class. :param resource_roots: The roots from which relative paths to resources (scripts and such) will be referenced. Any resource will be pulled from the first path in the list at which a file exists. :return: A validator that will check that the specified package or packages are installed. """ jsonschema.validate(spec, cls.SPEC_SCHEMA) packages = ([cls._package_from_spec(package_spec) for package_spec in spec] if isinstance(spec, list) else [cls._package_from_spec(spec)]) return cls(packages) def __init__(self, packages): self.packages = packages @transform_exception(ex.RemoteCommandException, p_ex.ImageValidationError) def validate(self, remote, test_only=False, image_arguments=None, **kwargs): """Attempts to validate package installation on the image. Even if test_only=False, attempts to verify previous package installation offline before using networked tools to validate or install new packages. :param remote: A remote socket to the instance. :param test_only: If true, all validators will only verify that a desired state is present, and fail if it is not. If false, all validators will attempt to enforce the desired state if possible, and succeed if this enforcement succeeds. 
:param image_arguments: A dictionary of image argument values keyed by argument name. :raises ImageValidationError: If validation fails. """ env_distro = image_arguments[self.DISTRO_KEY] env_family = self._DISTRO_FAMILES[env_distro] check, install = self._DISTRO_TOOLS[env_family] if not env_family: raise p_ex.ImageValidationError( _("Unknown distro: cannot verify or install packages.")) try: check(self, remote) except (ex.SubprocessException, ex.RemoteCommandException, RuntimeError): if not test_only: install(self, remote) check(self, remote) else: raise def _dpkg_check(self, remote): check_cmd = ("dpkg -s %s" % " ".join(str(package) for package in self.packages)) return _sudo(remote, check_cmd) def _rpm_check(self, remote): check_cmd = ("rpm -q %s" % " ".join(str(package) for package in self.packages)) return _sudo(remote, check_cmd) def _yum_install(self, remote): install_cmd = ( "yum install -y %s" % " ".join(str(package) for package in self.packages)) _sudo(remote, install_cmd) def _apt_install(self, remote): install_cmd = ( "DEBIAN_FRONTEND=noninteractive apt-get -y install %s" % " ".join(str(package) for package in self.packages)) return _sudo(remote, install_cmd) _DISTRO_TOOLS = { "redhat": (_rpm_check, _yum_install), "debian": (_dpkg_check, _apt_install) } class SaharaScriptValidator(SaharaImageValidatorBase): """A validator that runs a script on the instance.""" _DEFAULT_ENV_VARS = [SaharaImageValidatorBase.TEST_ONLY_KEY, SaharaImageValidatorBase.DISTRO_KEY] SPEC_SCHEMA = { "title": "SaharaScriptValidator", "oneOf": [ { "type": "object", "minProperties": 1, "maxProperties": 1, "additionalProperties": { "type": "object", "properties": { "env_vars": { "type": "array", "items": { "type": "string" } }, "output": { "type": "string", "minLength": 1 }, "inline": { "type": "string", "minLength": 1 } }, } }, { "type": "string" } ] } @classmethod def from_spec(cls, spec, validator_map, resource_roots, package='sahara'): """Builds a script validator from a specification. :param spec: May be a string or a single-length dictionary of name to configuration values. Configuration values include: env_vars: A list of environment variable names to send to the script. output: A key into which to put the stdout of the script in the image_arguments of the validation run. :param validator_map: A map of validator name to class. :param resource_roots: The roots from which relative paths to resources (scripts and such) will be referenced. Any resource will be pulled from the first path in the list at which a file exists. :return: A validator that will run a script on the image. """ jsonschema.validate(spec, cls.SPEC_SCHEMA) script_contents = None if isinstance(spec, six.string_types): script_path = spec env_vars, output_var = cls._DEFAULT_ENV_VARS, None else: script_path, properties = list(six.iteritems(spec))[0] env_vars = cls._DEFAULT_ENV_VARS + properties.get('env_vars', []) output_var = properties.get('output', None) script_contents = properties.get('inline') if not script_contents: for root in resource_roots: file_path = path.join(root, script_path) script_contents = utils.try_get_file_text(file_path, package) if script_contents: break if not script_contents: raise p_ex.ImageValidationSpecificationError( _("Script %s not found in any resource roots.") % script_path) return SaharaScriptValidator(script_contents, env_vars, output_var) def __init__(self, script_contents, env_vars=None, output_var=None): """Constructor method. :param script_contents: A string representation of the script. 
:param env_vars: A list of environment variables to send to the script. :param output_var: A key into which to put the stdout of the script in the image_arguments of the validation run. :return: A SaharaScriptValidator. """ self.script_contents = script_contents self.env_vars = env_vars or [] self.output_var = output_var @transform_exception(ex.RemoteCommandException, p_ex.ImageValidationError) def validate(self, remote, test_only=False, image_arguments=None, **kwargs): """Attempts to validate by running a script on the image. :param remote: A remote socket to the instance. :param test_only: If true, all validators will only verify that a desired state is present, and fail if it is not. If false, all validators will attempt to enforce the desired state if possible, and succeed if this enforcement succeeds. :param image_arguments: A dictionary of image argument values keyed by argument name. Note that the key SIV_TEST_ONLY will be set to 1 if the script should test_only and 0 otherwise; all scripts should act on this input if possible. The key SIV_DISTRO will also contain the distro representation, per `lsb_release -is`. :raises ImageValidationError: If validation fails. """ arguments = copy.deepcopy(image_arguments) arguments[self.TEST_ONLY_KEY] = 1 if test_only else 0 script = "\n".join(["%(env_vars)s", "%(script)s"]) env_vars = "\n".join("export %s=%s" % (key, value) for (key, value) in six.iteritems(arguments) if key in self.env_vars) script = script % {"env_vars": env_vars, "script": self.script_contents.decode('utf-8')} path = '/tmp/%s.sh' % uuidutils.generate_uuid() remote.write_file_to(path, script, run_as_root=True) _sudo(remote, 'chmod +x %s' % path) code, stdout = _sudo(remote, path) if self.output_var: image_arguments[self.output_var] = stdout class SaharaCopyScriptValidator(SaharaImageValidatorBase): """A validator that copy a script to the instance.""" SPEC_SCHEMA = { "title": "SaharaCopyScriptValidator", "oneOf": [ { "type": "object", "minProperties": 1, "maxProperties": 1, "additionalProperties": { "type": "object", "properties": { "output": { "type": "string", "minLength": 1 }, "inline": { "type": "string", "minLength": 1 } }, } }, { "type": "string" } ] } @classmethod def from_spec(cls, spec, validator_map, resource_roots, package='sahara'): """Builds a copy script validator from a specification. :param spec: May be a string or a single-length dictionary of name to configuration values. Configuration values include: env_vars: A list of environment variable names to send to the script. output: A key into which to put the stdout of the script in the image_arguments of the validation run. :param validator_map: A map of validator name to class. :param resource_roots: The roots from which relative paths to resources (scripts and such) will be referenced. Any resource will be pulled from the first path in the list at which a file exists. :return: A validator that will copy a script to the image. 
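        An illustrative yaml usage (the path is a placeholder; the copied
        file keeps the name of the third path component)::

            copy_script: common/resources/tmp_cleanup.sh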
""" jsonschema.validate(spec, cls.SPEC_SCHEMA) script_contents = None if isinstance(spec, six.string_types): script_path = spec output_var = None else: script_path, properties = list(six.iteritems(spec))[0] output_var = properties.get('output', None) script_contents = properties.get('inline') if not script_contents: for root in resource_roots: file_path = path.join(root, script_path) script_contents = utils.try_get_file_text(file_path, package) if script_contents: break script_name = script_path.split('/')[2] if not script_contents: raise p_ex.ImageValidationSpecificationError( _("Script %s not found in any resource roots.") % script_path) return SaharaCopyScriptValidator(script_contents, script_name, output_var) def __init__(self, script_contents, script_name, output_var=None): """Constructor method. :param script_contents: A string representation of the script. :param output_var: A key into which to put the stdout of the script in the image_arguments of the validation run. :return: A SaharaScriptValidator. """ self.script_contents = script_contents self.script_name = script_name self.output_var = output_var @transform_exception(ex.RemoteCommandException, p_ex.ImageValidationError) def validate(self, remote, test_only=False, image_arguments=None, **kwargs): """Attempts to validate by running a script on the image. :param remote: A remote socket to the instance. :param test_only: If true, all validators will only verify that a desired state is present, and fail if it is not. If false, all validators will attempt to enforce the desired state if possible, and succeed if this enforcement succeeds. :param image_arguments: A dictionary of image argument values keyed by argument name. Note that the key SIV_TEST_ONLY will be set to 1 if the script should test_only and 0 otherwise; all scripts should act on this input if possible. The key SIV_DISTRO will also contain the distro representation, per `lsb_release -is`. :raises ImageValidationError: If validation fails. """ arguments = copy.deepcopy(image_arguments) arguments[self.TEST_ONLY_KEY] = 1 if test_only else 0 script = "\n".join(["%(script)s"]) script = script % {"script": self.script_contents} path = '/tmp/%s' % self.script_name remote.write_file_to(path, script, run_as_root=True) @six.add_metaclass(abc.ABCMeta) class SaharaAggregateValidator(SaharaImageValidatorBase): """An abstract class representing an ordered list of other validators.""" SPEC_SCHEMA = SaharaImageValidator.ORDERED_VALIDATORS_SCHEMA @classmethod def from_spec(cls, spec, validator_map, resource_roots, package='sahara'): """Builds the aggregate validator from a specification. :param spec: A list of validator definitions, each of which is a single-length dictionary of name to configuration values. :param validator_map: A map of validator name to class. :param resource_roots: The roots from which relative paths to resources (scripts and such) will be referenced. Any resource will be pulled from the first path in the list at which a file exists. :return: An aggregate validator. 
""" jsonschema.validate(spec, cls.SPEC_SCHEMA) validators = cls.from_spec_list(spec, validator_map, resource_roots, package) return cls(validators) def __init__(self, validators): self.validators = validators class SaharaAnyValidator(SaharaAggregateValidator): """A list of validators, only one of which must succeed.""" def _try_all(self, remote, test_only=False, image_arguments=None, **kwargs): results = [] for validator in self.validators: result = validator.try_validate(remote, test_only=test_only, image_arguments=image_arguments, **kwargs) results.append(result) if result: break return results def validate(self, remote, test_only=False, image_arguments=None, **kwargs): """Attempts to validate any of the contained validators. Note that if test_only=False, this validator will first run all contained validators using test_only=True, and succeed immediately should any pass validation. If all fail, it will only then run them using test_only=False, and again succeed immediately should any pass. :param remote: A remote socket to the instance. :param test_only: If true, all validators will only verify that a desired state is present, and fail if it is not. If false, all validators will attempt to enforce the desired state if possible, and succeed if this enforcement succeeds. :param image_arguments: A dictionary of image argument values keyed by argument name. :raises ImageValidationError: If validation fails. """ results = self._try_all(remote, test_only=True, image_arguments=image_arguments) if not test_only and not any(results): results = self._try_all(remote, test_only=False, image_arguments=image_arguments) if not any(results): raise p_ex.AllValidationsFailedError(result.exception for result in results) class SaharaAllValidator(SaharaAggregateValidator): """A list of validators, all of which must succeed.""" def validate(self, remote, test_only=False, image_arguments=None, **kwargs): """Attempts to validate all of the contained validators. :param remote: A remote socket to the instance. :param test_only: If true, all validators will only verify that a desired state is present, and fail if it is not. If false, all validators will attempt to enforce the desired state if possible, and succeed if this enforcement succeeds. :param image_arguments: A dictionary of image argument values keyed by argument name. :raises ImageValidationError: If validation fails. """ for validator in self.validators: validator.validate(remote, test_only=test_only, image_arguments=image_arguments) class SaharaOSCaseValidator(SaharaImageValidatorBase): """A validator which will take different actions depending on distro.""" _distro_tuple = collections.namedtuple('Distro', ['distro', 'validator']) SPEC_SCHEMA = { "type": "array", "minLength": 1, "items": { "type": "object", "minProperties": 1, "maxProperties": 1, "additionalProperties": SaharaImageValidator.ORDERED_VALIDATORS_SCHEMA, } } @classmethod def from_spec(cls, spec, validator_map, resource_roots, package='sahara'): """Builds an os_case validator from a specification. :param spec: A list of single-length dictionaries. The key of each is a distro or family name and the value under each key is a list of validators (all of which must succeed.) :param validator_map: A map of validator name to class. :param resource_roots: The roots from which relative paths to resources (scripts and such) will be referenced. Any resource will be pulled from the first path in the list at which a file exists. :return: A SaharaOSCaseValidator. 
""" jsonschema.validate(spec, cls.SPEC_SCHEMA) distros = itertools.chain(*(six.iteritems(distro_spec) for distro_spec in spec)) distros = [ cls._distro_tuple(key, SaharaAllValidator.from_spec( value, validator_map, resource_roots, package)) for (key, value) in distros] return cls(distros) def __init__(self, distros): """Constructor method. :param distros: A list of distro tuples (distro, list of validators). """ self.distros = distros def validate(self, remote, test_only=False, image_arguments=None, **kwargs): """Attempts to validate depending on distro. May match the OS by specific distro or by family (centos may match "centos" or "redhat", for instance.) If multiple keys match the distro, only the validators under the first matched key will be run. If no keys match, no validators are run, and validation proceeds. :param remote: A remote socket to the instance. :param test_only: If true, all validators will only verify that a desired state is present, and fail if it is not. If false, all validators will attempt to enforce the desired state if possible, and succeed if this enforcement succeeds. :param image_arguments: A dictionary of image argument values keyed by argument name. :raises ImageValidationError: If validation fails. """ env_distro = image_arguments[self.DISTRO_KEY] family = self._DISTRO_FAMILES.get(env_distro) matches = {env_distro, family} if family else {env_distro} for distro, validator in self.distros: if distro in matches: validator.validate( remote, test_only=test_only, image_arguments=image_arguments) break class SaharaArgumentCaseValidator(SaharaImageValidatorBase): """A validator which will take different actions depending on distro.""" SPEC_SCHEMA = { "type": "object", "properties": { "argument_name": { "type": "string", "minLength": 1 }, "cases": { "type": "object", "minProperties": 1, "additionalProperties": SaharaImageValidator.ORDERED_VALIDATORS_SCHEMA, }, }, "additionalProperties": False, "required": ["argument_name", "cases"] } @classmethod def from_spec(cls, spec, validator_map, resource_roots, package='sahara'): """Builds an argument_case validator from a specification. :param spec: A dictionary with two items: "argument_name", containing a string indicating the argument to be checked, and "cases", a dictionary. The key of each item in the dictionary is a value which may or may not match the argument value, and the value is a list of validators to be run in case it does. :param validator_map: A map of validator name to class. :param resource_roots: The roots from which relative paths to resources (scripts and such) will be referenced. Any resource will be pulled from the first path in the list at which a file exists. :return: A SaharaArgumentCaseValidator. """ jsonschema.validate(spec, cls.SPEC_SCHEMA) argument_name = spec['argument_name'] cases = {key: SaharaAllValidator.from_spec( value, validator_map, resource_roots, package) for key, value in six.iteritems(spec['cases'])} return cls(argument_name, cases) def __init__(self, argument_name, cases): """Constructor method. :param argument_name: The name of an argument. :param cases: A dictionary of possible argument value to a sub-validator to run in case of a match. """ self.argument_name = argument_name self.cases = cases def validate(self, remote, test_only=False, image_arguments=None, **kwargs): """Attempts to validate depending on argument value. :param remote: A remote socket to the instance. 
:param test_only: If true, all validators will only verify that a desired state is present, and fail if it is not. If false, all validators will attempt to enforce the desired state if possible, and succeed if this enforcement succeeds. :param image_arguments: A dictionary of image argument values keyed by argument name. :raises ImageValidationError: If validation fails. """ arg = self.argument_name if arg not in image_arguments: raise p_ex.ImageValidationError( _("Argument {name} not found.").format(name=arg)) value = image_arguments[arg] if value in self.cases: self.cases[value].validate( remote, test_only=test_only, image_arguments=image_arguments) class SaharaArgumentSetterValidator(SaharaImageValidatorBase): """A validator which sets a specific argument to a specific value.""" SPEC_SCHEMA = { "type": "object", "properties": { "argument_name": { "type": "string", "minLength": 1 }, "value": { "type": "string", "minLength": 1 }, }, "additionalProperties": False, "required": ["argument_name", "value"] } @classmethod def from_spec(cls, spec, validator_map, resource_roots, package='sahara'): """Builds an argument_set validator from a specification. :param spec: A dictionary with two items: "argument_name", containing a string indicating the argument to be set, and "value", a value to which to set that argument. :param validator_map: A map of validator name to class. :param resource_roots: The roots from which relative paths to resources (scripts and such) will be referenced. Any resource will be pulled from the first path in the list at which a file exists. :return: A SaharaArgumentSetterValidator. """ jsonschema.validate(spec, cls.SPEC_SCHEMA) argument_name = spec['argument_name'] value = spec['value'] return cls(argument_name, value) def __init__(self, argument_name, value): """Constructor method. :param argument_name: The name of an argument. :param value: A value to which to set that argument. """ self.argument_name = argument_name self.value = value def validate(self, remote, test_only=False, image_arguments=None, **kwargs): """Attempts to validate depending on argument value. :param remote: A remote socket to the instance. :param test_only: If true, all validators will only verify that a desired state is present, and fail if it is not. If false, all validators will attempt to enforce the desired state if possible, and succeed if this enforcement succeeds. :param image_arguments: A dictionary of image argument values keyed by argument name. 
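        In a yaml specification this validator typically appears nested
        under an ``argument_case`` branch, for example (argument names
        and values are illustrative placeholders)::

            argument_case:
              argument_name: java_distro
              cases:
                openjdk:
                  - argument_set:
                      argument_name: java_version
                      value: '1.8.0'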
""" image_arguments[self.argument_name] = self.value def _sudo(remote, cmd, **kwargs): return remote.execute_command(cmd, run_as_root=True, **kwargs) sahara-12.0.0/sahara/plugins/resources/0000775000175000017500000000000013656752227020024 5ustar zuulzuul00000000000000sahara-12.0.0/sahara/plugins/resources/krb-client-init.sh.template0000664000175000017500000000102413656752032025154 0ustar zuulzuul00000000000000#!/bin/bash set -xe export SAHARA_SCRIPT_BASE_OS=%(os)s if [ "$SAHARA_SCRIPT_BASE_OS" = "ubuntu" ]; then sudo dpkg -s krb5-user || sudo DEBIAN_FRONTEND=noninteractive apt-get install -y krb5-user sudo dpkg -s libpam-krb5 || sudo DEBIAN_FRONTEND=noninteractive apt-get install -y libpam-krb5 sudo dpkg -s ldap-utils || sudo DEBIAN_FRONTEND=noninteractive apt-get install -y ldap-utils else sudo rpm -q krb5-workstation || sudo yum install -y krb5-workstation fi sudo echo "%(krb5_conf)s" | sudo tee /etc/krb5.conf sahara-12.0.0/sahara/plugins/resources/mit-kdc-server-init.sh.template0000664000175000017500000000161313656752032025762 0ustar zuulzuul00000000000000#!/bin/bash set -xe export SAHARA_SCRIPT_BASE_OS=%(os)s if [ "$SAHARA_SCRIPT_BASE_OS" = "ubuntu" ]; then sudo dpkg -s krb5-admin-server || sudo DEBIAN_FRONTEND=noninteractive apt-get install -y krb5-admin-server sudo dpkg -s rng-tools || sudo apt-get install rng-tools -y else sudo rpm -q krb5-server || sudo yum install -y krb5-server sudo rpm -q krb5-libs || sudo yum install -y krb5-libs sudo rpm -q krb5-workstation || sudo yum install -y krb5-workstation sudo rpm -q rng-tools || sudo yum install -y rng-tools fi sudo rngd -r /dev/urandom -W 4096 sudo echo "%(krb5_conf)s" | sudo tee /etc/krb5.conf sudo echo "%(kdc_conf)s" | sudo tee %(kdc_conf_path)s sudo echo "%(acl_conf)s" | sudo tee %(acl_conf_path)s sudo %(realm_create)s <